
Joscha Bach: Nature of Reality, Dreams, and Consciousness | Lex Fridman Podcast #212



link |
00:00:00.000
The following is a conversation with Joscha Bach,
link |
00:00:02.720
his second time on the podcast.
link |
00:00:04.940
Joscha is one of the most fascinating minds in the world,
link |
00:00:08.540
exploring the nature of intelligence,
link |
00:00:10.620
cognition, computation, and consciousness.
link |
00:00:14.500
To support this podcast, please check out our sponsors,
link |
00:00:17.700
Coinbase, Codecademy, Linode, NetSuite, and ExpressVPN.
link |
00:00:23.940
Their links are in the description.
link |
00:00:26.740
This is the Lex Fridman podcast,
link |
00:00:28.980
and here is my conversation with Joscha Bach.
link |
00:00:33.340
Thank you for once again coming on
link |
00:00:35.180
to this particular Russian program
link |
00:00:38.220
and sticking to the theme of a Russian program.
link |
00:00:40.740
Let's start with the darkest of topics.
link |
00:00:43.100
Kriviyat.
link |
00:00:45.220
So this is inspired by one of your tweets.
link |
00:00:48.380
You wrote that, quote,
link |
00:00:50.900
when life feels unbearable,
link |
00:00:53.740
I remind myself that I'm not a person.
link |
00:00:56.620
I am a piece of software running on the brain
link |
00:00:58.940
of a random ape for a few decades.
link |
00:01:01.500
It's not the worst brain to run on.
link |
00:01:04.540
Have you experienced low points in your life?
link |
00:01:07.740
Have you experienced depression?
link |
00:01:09.780
Of course, we all experience low points in our life,
link |
00:01:12.140
and we get appalled by the things,
link |
00:01:15.340
by the ugliness of stuff around us.
link |
00:01:17.060
We might get desperate about our lack of self-regulation,
link |
00:01:21.300
and sometimes life is hard,
link |
00:01:24.580
and I suspect you don't get through your life,
link |
00:01:27.900
nobody does, without low points
link |
00:01:30.700
and without moments where they're despairing.
link |
00:01:33.740
And I thought that let's capture this state
link |
00:01:37.800
and how to deal with that state.
link |
00:01:40.140
And I found that very often you realize
link |
00:01:43.120
that when you stop taking things personally,
link |
00:01:44.860
when you realize that this notion of a person is a fiction,
link |
00:01:48.980
similar to how it is in Westworld,
link |
00:01:50.700
where the robots realize that their memories and desires
link |
00:01:53.300
are the stuff that keeps them in the loop,
link |
00:01:55.840
and they don't have to act on those memories and desires,
link |
00:01:59.100
that our memories and expectations are what make us unhappy.
link |
00:02:02.540
And the present rarely does.
link |
00:02:04.200
The day in which we are is, for the most part, okay, right?
link |
00:02:08.300
When we are sitting here, right here, right now,
link |
00:02:11.260
we can choose how we feel.
link |
00:02:13.100
And the thing that affects us is the expectation
link |
00:02:16.740
that something is going to be different
link |
00:02:18.760
from what we want it to be,
link |
00:02:19.920
or the memory that something was different
link |
00:02:21.860
from what you wanted it to be.
link |
00:02:24.140
And once we basically zoom out from all this,
link |
00:02:27.340
what's left is not a person.
link |
00:02:28.980
What's left is this state of being conscious,
link |
00:02:32.300
which is a software state.
link |
00:02:33.620
And software doesn't have an identity.
link |
00:02:35.680
It's a physical law.
link |
00:02:37.820
And it's a law that acts in all of us,
link |
00:02:39.820
and it's embedded in a suitable substrate.
link |
00:02:42.300
And we didn't pick that substrate, right?
link |
00:02:43.780
We are mostly randomly instantiated on it.
link |
00:02:46.940
And there are all these individuals,
link |
00:02:48.900
and everybody has to be one of them.
link |
00:02:51.740
And eventually you're stuck on one of them,
link |
00:02:54.220
and have to deal with that.
link |
00:02:56.340
So you're like a leaf floating down the river.
link |
00:02:59.080
You just have to accept that there's a river,
link |
00:03:01.340
and you just float wherever it takes you.
link |
00:03:03.820
You don't have to do this.
link |
00:03:04.660
The thing is that the illusion that you are an agent
link |
00:03:08.140
is a construct.
link |
00:03:09.500
What part of that is actually under your control?
link |
00:03:13.100
And I think that our consciousness
link |
00:03:15.260
is largely a control model for our own attention.
link |
00:03:18.460
So we notice where we are looking,
link |
00:03:21.160
and we can influence what we're looking at,
link |
00:03:22.740
how we are disambiguating things,
link |
00:03:24.180
how we put things together in our mind.
link |
00:03:26.580
And the whole system that runs us
link |
00:03:28.780
is this big cybernetic motivational system.
link |
00:03:30.940
So we're basically like a little monkey
link |
00:03:32.940
sitting on top of an elephant,
link |
00:03:34.940
and we can prod this elephant here and there
link |
00:03:37.540
to go this way or that way.
link |
00:03:39.360
And we might have the illusion that we are the elephant,
link |
00:03:42.020
or that we are telling it what to do.
link |
00:03:43.460
And sometimes we notice that it walks
link |
00:03:45.620
into a completely different direction.
link |
00:03:47.460
And we didn't set this thing up.
link |
00:03:49.000
It just is the situation that we find ourselves in.
link |
00:03:52.620
How much prodding can we actually do of the elephant?
link |
00:03:56.420
A lot.
link |
00:03:57.380
But I think that our consciousness
link |
00:04:00.660
cannot create the motive force.
link |
00:04:03.000
Is the elephant consciousness in this metaphor?
link |
00:04:05.340
No, the monkey is the consciousness.
link |
00:04:07.940
The monkey is the attentional system
link |
00:04:09.340
that is observing things.
link |
00:04:10.460
There is a large perceptual system
link |
00:04:12.380
combined with a motivational system
link |
00:04:14.300
that is actually providing the interface to everything
link |
00:04:17.260
and our own consciousness,
link |
00:04:18.660
I think, is the tool that directs the attention
link |
00:04:21.820
of that system, which means it singles out features
link |
00:04:24.740
and performs conditional operations
link |
00:04:26.980
for which it needs an index memory.
link |
00:04:28.900
But this index memory is what we perceive
link |
00:04:31.420
as our stream of consciousness.
link |
00:04:32.740
But the consciousness is not in charge.
link |
00:04:34.900
That's an illusion.
link |
00:04:35.860
So everything outside of that consciousness
link |
00:04:40.300
is the elephant.
link |
00:04:41.380
So it's the physics of the universe,
link |
00:04:43.060
but it's also society that's outside of your...
link |
00:04:46.140
I would say the elephant is the agent.
link |
00:04:48.300
So there is an environment to which the agent is stomping
link |
00:04:51.300
and you are influencing a little part of that agent.
link |
00:04:55.100
So is the agent a single human being?
link |
00:04:58.980
Which object has agency?
link |
00:05:02.340
That's an interesting question.
link |
00:05:03.820
I think a way to think about an agent
link |
00:05:06.140
is that it's a controller with a set point generator.
link |
00:05:10.500
The notion of a controller comes from cybernetics
link |
00:05:13.020
and control theory.
link |
00:05:14.380
A control system consists of a system
link |
00:05:17.780
that is regulating some value
link |
00:05:20.860
and the deviation of that value from a set point.
link |
00:05:23.980
And it has a sensor that measures the system's deviation
link |
00:05:27.500
from that set point and an effector
link |
00:05:30.060
that can be parametrized by the controller.
link |
00:05:32.580
So the controller tells the effector to do a certain thing.
link |
00:05:35.500
And the goal is to reduce the distance
link |
00:05:38.460
between the set point and the current value of the system.
link |
00:05:40.860
And there's an environment
link |
00:05:41.700
which disturbs the regulated system,
link |
00:05:43.580
which brings it away from that set point.
link |
00:05:45.580
So simplest case is a thermostat.
link |
00:05:47.860
The thermostat is really simple
link |
00:05:49.100
because it doesn't have a model.
link |
00:05:50.260
The thermostat is only trying to minimize
link |
00:05:52.380
the set point deviation in the next moment.
link |
00:05:55.780
And if you want to minimize the set point deviation
link |
00:05:58.740
over a longer time span, you need to integrate it.
link |
00:06:00.860
You need to model what is going to happen.
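The control loop described here can be sketched in a few lines. This is a purely illustrative toy, with invented names and constants, not anything from the conversation: a model-free thermostat that only reacts to the current deviation from the set point, which is exactly why it cannot plan ahead.

```python
# Illustrative sketch of the controller/set-point loop described above.
# The names and constants are invented. The thermostat is model-free:
# it only reacts to the current deviation from the set point.

def thermostat_step(temperature, set_point, heater_power):
    """One control step: push the regulated value toward the set point."""
    deviation = set_point - temperature
    heater_on = deviation > 0          # bang-bang control, no model
    heat_leak = 0.5                    # the environment disturbs the system
    return temperature + (heater_power if heater_on else 0.0) - heat_leak

temperature = 15.0
for _ in range(20):
    temperature = thermostat_step(temperature, set_point=21.0, heater_power=1.0)
# temperature now oscillates around the 21.0 set point
```

A controller that minimized deviation over a longer horizon would need a model of the heat leak, which is the jump from thermostat to agent that the conversation goes on to describe.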
link |
00:06:03.660
So for instance, when you think about
link |
00:06:05.700
that your set point is to be comfortable in life,
link |
00:06:08.220
maybe you need to make yourself uncomfortable first, right?
link |
00:06:11.420
So you need to make a model of what's going to happen when.
link |
00:06:14.060
And the task of the controller is to use its sensors
link |
00:06:18.020
to measure the state of the environment
link |
00:06:20.540
and the system that is being regulated
link |
00:06:22.900
and figure out what to do.
link |
00:06:24.900
And if the task is complex enough,
link |
00:06:27.620
and the set points are complicated enough,
link |
00:06:30.100
And if the controller has enough capacity
link |
00:06:32.540
and enough sensor feedback,
link |
00:06:34.940
then the task of the controller is to make a model
link |
00:06:37.340
of the entire universe that it's in,
link |
00:06:39.180
the conditions under which it exists and of itself.
link |
00:06:42.300
And this is a very complex agent.
link |
00:06:43.940
And we are in that category.
link |
00:06:45.820
And an agent is not necessarily a thing in the universe.
link |
00:06:49.460
It's a class of models that we use
link |
00:06:51.700
to interpret aspects of the universe.
link |
00:06:54.580
And when we notice the environment around us,
link |
00:06:57.820
a lot of things only make sense
link |
00:06:59.460
at the level at which we're entangled with them,
link |
00:07:01.060
if we interpret them as control systems
link |
00:07:03.340
that make models of the world
link |
00:07:04.700
and try to minimize their own set point deviations.
link |
00:07:07.380
So the models are the agents.
link |
00:07:10.580
The agent is a class of model.
link |
00:07:12.580
And we notice that we are an agent ourselves.
link |
00:07:14.660
We are the agent that is using our own control model
link |
00:07:17.780
to perform actions.
link |
00:07:18.820
We notice we produce a change in the model
link |
00:07:22.100
and things in the world change.
link |
00:07:23.460
And this is how we discover the idea that we have a body,
link |
00:07:26.740
that we are situated in an environment,
link |
00:07:28.260
and that we have a first person perspective.
link |
00:07:31.140
I still don't understand what's the best way to think
link |
00:07:34.980
of which object has agency with respect to human beings.
link |
00:07:39.820
Is it the body?
link |
00:07:41.540
Is it the brain?
link |
00:07:43.420
Is it the contents of the brain that has agency?
link |
00:07:46.020
Like, what are the actuators that you're referring to?
link |
00:07:49.020
What is the controller and where does it reside?
link |
00:07:52.100
Or is it these impossible things?
link |
00:07:54.100
Because I keep trying to ground it to space time,
link |
00:07:57.740
the three dimensions of space and the one dimension of time.
link |
00:08:01.580
What's the agent in that for humans?
link |
00:08:04.580
There is not just one.
link |
00:08:06.020
It depends on the way in which you're looking
link |
00:08:08.260
at this thing in which you're framing it.
link |
00:08:10.060
Imagine that you are, say Angela Merkel,
link |
00:08:13.540
and you are acting on behalf of Germany.
link |
00:08:16.660
Then you could say that Germany is the agent.
link |
00:08:19.820
And in the mind of Angela Merkel,
link |
00:08:21.580
she is Germany to some extent,
link |
00:08:23.540
because in the way in which she acts,
link |
00:08:25.660
the destiny of Germany changes.
link |
00:08:28.060
There are things that she can change
link |
00:08:29.700
that basically affect the behavior of that nation state.
link |
00:08:33.580
Okay, so it's hierarchies of,
link |
00:08:35.060
to go to another one of your tweets
link |
00:08:37.460
with I think you were playfully mocking Jeff Hawkins
link |
00:08:42.380
by saying it's brains all the way down.
link |
00:08:45.820
So it's like, it's agents all the way down.
link |
00:08:49.020
It's agents made up of agents, made up of agents.
link |
00:08:51.780
Like if Angela Merkel's Germany
link |
00:08:54.700
and Germany's made up of a bunch of people
link |
00:08:56.540
and the people are themselves agents
link |
00:08:58.860
in some kind of context.
link |
00:09:01.060
And then people are made up of cells, each individual.
link |
00:09:04.900
So is it agents all the way down?
link |
00:09:07.220
I suspect that has to be like this
link |
00:09:08.900
in a world where things are self organizing.
link |
00:09:12.900
Most of the complexity that we are looking at,
link |
00:09:15.660
everything in life is about self organization.
link |
00:09:18.540
So I think up from the level of life, you have agents.
link |
00:09:24.220
And below life, you rarely have agents
link |
00:09:27.460
because sometimes you have control systems
link |
00:09:30.060
that emerge randomly in nature
link |
00:09:31.580
and try to achieve a set point,
link |
00:09:33.740
but they're not that interesting agents that make models.
link |
00:09:36.660
And because to make an interesting model of the world,
link |
00:09:39.580
you typically need a system that is Turing complete.
link |
00:09:42.380
Can I ask you a personal question?
link |
00:09:46.140
What's the line between life and non life?
link |
00:09:48.740
It's personal because you're a life form.
link |
00:09:52.300
So what do you think in this emerging complexity,
link |
00:09:55.740
at which point do things become living
link |
00:09:57.980
and have agency?
link |
00:10:00.220
Personally, I think that the simplest answer
link |
00:10:01.900
is that life is cells, because...
link |
00:10:04.540
Life is what?
link |
00:10:05.380
Cells.
link |
00:10:06.220
Biological cells.
link |
00:10:07.060
Biological cells.
link |
00:10:07.900
So it's a particular kind of principle
link |
00:10:09.420
that we have discovered to exist in nature.
link |
00:10:11.860
It's modular stuff that consists
link |
00:10:14.540
out of basically this DNA tape
link |
00:10:17.820
with a read write head on top of it,
link |
00:10:20.180
that is able to perform arbitrary computations
link |
00:10:23.220
and state transitions within the cell.
link |
00:10:25.300
And it's combined with a membrane
link |
00:10:27.380
that insulates the cell from its environment.
link |
00:10:30.540
And there are chemical reactions inside of the cell
link |
00:10:34.780
that are in disequilibrium.
link |
00:10:36.300
And the cell is running in such a way
link |
00:10:38.740
that this disequilibrium doesn't disappear.
link |
00:10:41.460
And if the cell goes into an equilibrium state, it dies.
link |
00:10:46.260
And it requires something like a negentropy extractor
link |
00:10:50.100
to maintain this disequilibrium.
link |
00:10:51.900
So it's able to harvest negentropy from its environment
link |
00:10:55.700
and keep itself running.
link |
00:10:57.820
Yeah, so there's information and there's a wall
link |
00:11:00.540
to maintain this disequilibrium.
link |
00:11:04.100
But isn't this very earth centric?
link |
00:11:06.660
Like what you're referring to as a...
link |
00:11:08.740
I'm not making a normative notion.
link |
00:11:10.660
You could say that there are probably other things
link |
00:11:13.100
in the universe that are cell like and life like,
link |
00:11:16.380
and you could also call them life,
link |
00:11:17.860
but eventually it's just a willingness
link |
00:11:21.220
to find an agreement of how to use the terms.
link |
00:11:23.820
I like cells because it's completely coextensional
link |
00:11:26.580
with the way that we use the word
link |
00:11:28.500
even before we knew about cells.
link |
00:11:30.340
So people were pointing at some stuff
link |
00:11:32.380
and saying, this is somehow animate.
link |
00:11:34.260
And this is very different from the non animate stuff.
link |
00:11:36.620
And what's the difference between the living
link |
00:11:38.900
and the dead stuff?
link |
00:11:40.140
And it's mostly whether the cells are working or not.
link |
00:11:42.860
And also this boundary of life,
link |
00:11:45.340
where we say that for instance, the virus
link |
00:11:46.820
is basically an information packet
link |
00:11:48.820
that is subverting the cell and not life by itself.
link |
00:11:52.500
That makes sense to me.
link |
00:11:54.100
And it's somewhat arbitrary.
link |
00:11:55.860
You could of course say that systems
link |
00:11:57.940
that permanently maintain a disequilibrium
link |
00:12:00.140
and can self replicate are always life.
link |
00:12:03.340
And maybe that's a useful definition too,
link |
00:12:06.460
but this is eventually just how you want to use the word.
link |
00:12:10.420
So it's useful for conversation,
link |
00:12:12.620
but is it somehow fundamental to the universe?
link |
00:12:17.340
Do you think there's an actual line
link |
00:12:19.300
to eventually be drawn between life and non life?
link |
00:12:21.860
Or is it all a kind of continuum?
link |
00:12:24.300
I don't think it's a continuum,
link |
00:12:25.460
but there's nothing magical that is happening.
link |
00:12:28.140
Living systems are a certain type of machine.
link |
00:12:31.140
What about non living systems?
link |
00:12:32.980
Is it also a machine?
link |
00:12:34.300
There are non living machines,
link |
00:12:35.980
but the question is at which point is a system
link |
00:12:38.540
able to perform arbitrary state transitions
link |
00:12:43.100
to make representations.
link |
00:12:44.940
And living things can do this.
link |
00:12:46.820
And of course we can also build non living things
link |
00:12:48.980
that can do this, but we don't know anything in nature
link |
00:12:52.220
that is not a cell and is not created by cellular life
link |
00:12:56.180
that is able to do that.
link |
00:12:58.580
Not only do we not know,
link |
00:13:02.860
I don't think we have the tools to see otherwise.
link |
00:13:05.980
I always worry that we look at the world too narrowly.
link |
00:13:11.140
Like there could be life of a very different kind
link |
00:13:14.900
right under our noses that we're just not seeing
link |
00:13:18.860
because of either limitations
link |
00:13:21.700
of our cognitive capacity,
link |
00:13:23.260
or we're just not open minded enough
link |
00:13:26.980
either with the tools of science
link |
00:13:28.380
or just the tools of our mind.
link |
00:13:32.020
Yeah, that's possible.
link |
00:13:33.020
I find this thought very fascinating.
link |
00:13:35.060
And I suspect that many of us ask ourselves since childhood,
link |
00:13:39.020
what are the things that we are missing?
link |
00:13:40.580
What kind of systems and interconnections exist
link |
00:13:43.660
that are outside of our gaze?
link |
00:13:47.900
But we are looking for it
link |
00:13:51.140
and physics doesn't have much room at the moment
link |
00:13:55.140
for opening up something that would not violate
link |
00:13:59.580
the conservation of information as we know it.
link |
00:14:03.300
Yeah, but I wonder about time scale and scale,
link |
00:14:06.860
spatial scale, whether we just need to open up our idea
link |
00:14:11.860
of what, like how life presents itself.
link |
00:14:15.300
It could be operating in a much slower time scale,
link |
00:14:17.860
a much faster time scale.
link |
00:14:20.060
And it's almost sad to think that there's all this life
link |
00:14:23.940
around us that we're not seeing
link |
00:14:25.340
because we're just not like thinking
link |
00:14:29.220
in terms of the right scale, both time and space.
link |
00:14:34.380
What is your definition of life?
link |
00:14:36.060
What do you understand as life?
link |
00:14:40.780
Entities of sufficiently high complexity
link |
00:14:44.500
that are full of surprises.
link |
00:14:46.140
I don't know, I don't have a free will.
link |
00:14:53.980
So that just came out of my mouth.
link |
00:14:55.620
I'm not sure that even makes sense.
link |
00:14:57.260
There's certain characteristics.
link |
00:14:59.180
So complexity seems to be a necessary property of life.
link |
00:15:04.140
And I almost want to say it has ability
link |
00:15:09.980
to do something unexpected.
link |
00:15:13.340
It seems to me that life is the main source
link |
00:15:15.460
of complexity on earth.
link |
00:15:18.660
Yes.
link |
00:15:19.500
And complexity is basically a bridgehead
link |
00:15:22.060
that order builds into chaos by modeling,
link |
00:15:27.220
by processing information in such a way
link |
00:15:29.020
that you can perform reactions
link |
00:15:31.220
that would not be possible for dumb systems.
link |
00:15:33.780
And this means that you can harvest neck entropy
link |
00:15:36.020
that dumb systems cannot harvest.
link |
00:15:37.780
And this is what complexity is mostly about.
link |
00:15:40.140
In some sense, the purpose of life is to create complexity.
link |
00:15:45.060
Yeah.
link |
00:15:46.020
Increasing.
link |
00:15:46.860
I mean, there seems to be some kind of universal drive
link |
00:15:52.340
towards increasing pockets of complexity.
link |
00:15:56.180
I don't know what that is.
link |
00:15:57.580
That seems to be like a fundamental,
link |
00:16:00.020
I don't know if it's a property of the universe
link |
00:16:02.340
or it's just a consequence of the way the universe works,
link |
00:16:05.860
but there seems to be this small pockets
link |
00:16:08.700
of emergent complexity that builds on top of each other
link |
00:16:11.380
and starts having like greater and greater complexity
link |
00:16:15.380
by having like a hierarchy of complexity.
link |
00:16:17.980
Little organisms building up a little society
link |
00:16:20.700
that then operates almost as an individual organism itself.
link |
00:16:24.060
And all of a sudden you have Germany and Merkel.
link |
00:16:27.660
Well, that's not obvious to me.
link |
00:16:28.860
Everything that goes up has to come down at some point.
link |
00:16:32.500
So if you see this big exponential curve somewhere,
link |
00:16:36.500
it's usually the beginning of an S curve
link |
00:16:39.380
where something eventually reaches saturation.
link |
00:16:41.420
And the S curve is the beginning of some kind of bump
link |
00:16:43.820
that goes down again.
link |
00:16:45.500
And there is just this thing that when you are
link |
00:16:49.180
inside of an evolution of life,
link |
00:16:53.220
you are on top of a puddle of negentropy
link |
00:16:55.820
that is being sucked dry by life.
link |
00:16:58.900
And during that happening,
link |
00:17:00.660
you see an increase in complexity
link |
00:17:02.940
because life forms are competing with each other
link |
00:17:04.780
to get at finer and finer corners
link |
00:17:09.340
of that negentropy extraction.
link |
00:17:11.620
I feel like that's a gradual beautiful process
link |
00:17:13.900
like that's almost follows a process akin to evolution.
link |
00:17:18.100
And the way it comes down is not the same way it came up.
link |
00:17:22.900
The way it comes down is usually harshly and quickly.
link |
00:17:27.380
So usually there's some kind of catastrophic event.
link |
00:17:30.620
The Roman Empire took a long time.
link |
00:17:32.380
But would that be,
link |
00:17:36.340
would you classify this as a decrease in complexity though?
link |
00:17:39.420
Yes.
link |
00:17:40.260
I think that the size of the cities that could be fed
link |
00:17:42.900
has decreased dramatically.
link |
00:17:44.740
And you could see that the quality of the art decreased
link |
00:17:47.820
and it did so gradually.
link |
00:17:49.940
And maybe future generations,
link |
00:17:53.260
when they look at the history of the United States
link |
00:17:55.660
in the 21st century,
link |
00:17:57.380
will also talk about the gradual decline,
link |
00:17:59.140
not something that suddenly happens.
link |
00:18:05.620
Do you have a sense of where we are?
link |
00:18:07.700
Are we on the exponential rise?
link |
00:18:09.740
Are we at the peak?
link |
00:18:11.260
Or are we at the downslope of the United States empire?
link |
00:18:15.740
It's very hard to say from a single human perspective,
link |
00:18:18.460
but it seems to me that we are probably at the peak.
link |
00:18:25.380
I think that's probably the definition of like optimism
link |
00:18:28.100
and cynicism.
link |
00:18:29.620
So my nature of optimism is,
link |
00:18:31.540
I think we're on the rise.
link |
00:18:36.940
I think this is just all a matter of perspective.
link |
00:18:39.300
Nobody knows,
link |
00:18:40.140
but I do think that erring on the side of optimism,
link |
00:18:43.260
like you need a sufficient number,
link |
00:18:45.420
you need a minimum number of optimists
link |
00:18:47.460
in order to make that up thing actually work.
link |
00:18:50.980
And so I tend to be on the side of the optimists.
link |
00:18:53.620
I think that we are basically a species of grasshoppers
link |
00:18:56.540
that have turned into locusts.
link |
00:18:58.620
And when you are in that locust mode,
link |
00:19:00.740
you see an amazing rise of population numbers
link |
00:19:04.100
and of the complexity of the interactions
link |
00:19:07.020
between the individuals.
link |
00:19:08.780
But it's ultimately the question is, is it sustainable?
link |
00:19:12.860
See, I think we're a bunch of lions and tigers
link |
00:19:16.140
that have become domesticated cats,
link |
00:19:20.220
to use a different metaphor.
link |
00:19:21.420
I'm not exactly sure we're so destructive,
link |
00:19:24.300
we're just softer and nicer and lazier.
link |
00:19:27.820
But I think we are monkeys and not cats.
link |
00:19:29.900
And if you look at the monkeys, they are very busy.
link |
00:19:33.620
The ones that have a lot of sex, those monkeys?
link |
00:19:35.820
Not just the bonobos.
link |
00:19:37.180
I think that all the monkeys are basically
link |
00:19:38.940
a discontent species that always needs to meddle.
link |
00:19:42.700
Well, the gorillas seem to have
link |
00:19:44.180
a little bit more of a structure,
link |
00:19:45.900
but it's a different part of the tree.
link |
00:19:50.620
Okay, you mentioned the elephant
link |
00:19:52.900
and the monkey riding the elephant.
link |
00:19:55.660
And consciousness is the monkey.
link |
00:20:00.300
And there's some prodding that the monkey gets to do.
link |
00:20:03.180
And sometimes the elephant listens.
link |
00:20:06.180
I heard you got into some contentious,
link |
00:20:08.940
maybe you can correct me,
link |
00:20:09.820
but I heard you got into some contentious
link |
00:20:11.540
free will discussions.
link |
00:20:13.900
Is this with Sam Harris or something like that?
link |
00:20:16.100
Not that I know of.
link |
00:20:18.700
Some people on Clubhouse told me
link |
00:20:20.460
you made a bunch of big debate points about free will.
link |
00:20:25.940
Well, let me just then ask you where,
link |
00:20:28.860
in terms of the monkey and the elephant,
link |
00:20:31.700
do you think we land in terms of the illusion of free will?
link |
00:20:35.300
How much control does the monkey have?
link |
00:20:38.580
We have to think about what the free will is
link |
00:20:41.460
in the first place.
link |
00:20:43.260
We are not the machine.
link |
00:20:44.420
We are not the thing that is making the decisions.
link |
00:20:46.820
We are a model of that decision making process.
link |
00:20:49.900
And there is a difference between making your own decisions
link |
00:20:54.220
and predicting your own decisions.
link |
00:20:56.180
And that difference is the first person perspective.
link |
00:20:59.860
And what basically makes decision making
link |
00:21:04.820
and the conditions of free will distinct
link |
00:21:06.620
from just automatically doing the best thing is
link |
00:21:10.340
that we often don't know what the best thing is.
link |
00:21:13.260
We make decisions under uncertainty.
link |
00:21:15.540
We make informed bets using a betting algorithm
link |
00:21:17.900
that we don't yet understand
link |
00:21:19.140
because we haven't reverse engineered
link |
00:21:20.900
our own minds sufficiently.
link |
00:21:22.340
We don't know the expected rewards.
link |
00:21:23.900
We don't know the mechanism
link |
00:21:24.940
by which we estimate the rewards and so on.
link |
00:21:27.180
But there is an algorithm.
link |
00:21:28.300
We observe ourselves performing
link |
00:21:30.580
where we see that we weigh facts and factors
link |
00:21:34.820
and the future, and then some kind of possibility,
link |
00:21:39.300
some motive gets raised to an intention.
link |
00:21:41.620
And that's informed bet that the system is making.
link |
00:21:44.500
And that making of the informed bet,
link |
00:21:46.420
the representation of that is what we call free will.
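The "informed bet under uncertainty" idea can be illustrated with a standard bandit algorithm. Epsilon-greedy is my stand-in here, not anything specified in the conversation: the agent does not know the true expected rewards, so it keeps running estimates and occasionally explores before committing to a choice.

```python
import random

# Hypothetical sketch of decision making under uncertainty. The agent
# cannot see true_rewards directly; it places informed bets based on its
# own running estimates. Epsilon-greedy is an illustrative choice.

def informed_bet(estimates, counts, true_rewards, epsilon, rng):
    """Pick an option, observe a noisy reward, update the estimate."""
    if rng.random() < epsilon:
        choice = rng.randrange(len(estimates))   # explore: uncertainty remains
    else:
        choice = max(range(len(estimates)), key=lambda i: estimates[i])
    reward = true_rewards[choice] + rng.gauss(0, 0.1)   # noisy observation
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]
    return choice

rng = random.Random(0)
true_rewards = [0.2, 0.8, 0.5]      # hidden from the agent
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
for _ in range(2000):
    informed_bet(estimates, counts, true_rewards, epsilon=0.1, rng=rng)
```

Once the estimates are learned, the behavior becomes predictable from outside, which mirrors the point made below: certainty from the observer's side dissolves the appearance of a free choice.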
link |
00:21:49.460
And it seems to be paradoxical
link |
00:21:51.580
because we think that the crucial thing about it is
link |
00:21:53.700
that it's somehow indeterministic.
link |
00:21:56.500
And yet if it was indeterministic, it would be random.
link |
00:22:00.340
And it cannot be random because if it was random,
link |
00:22:03.380
if just dice were being thrown in the universe,
link |
00:22:05.300
randomly forcing you to do things, it would be meaningless.
link |
00:22:08.180
So the important part of the decisions
link |
00:22:10.420
is always the deterministic stuff.
link |
00:22:12.700
But it appears to be indeterministic to you
link |
00:22:15.220
because it's unpredictable.
link |
00:22:16.820
Because if it was predictable,
link |
00:22:18.500
you wouldn't experience it as a free will decision.
link |
00:22:21.460
You would experience it as just doing
link |
00:22:23.260
the necessary right thing.
link |
00:22:25.580
And you see this continuum between the free will
link |
00:22:28.740
and the execution of automatic behavior
link |
00:22:31.740
when you're observing other people.
link |
00:22:33.220
So for instance, when you are observing your own children,
link |
00:22:36.220
if you don't understand them,
link |
00:22:37.580
you will use this agent model
link |
00:22:40.060
where you have an agent with a set point generator.
link |
00:22:43.260
And the agent is doing the best it can
link |
00:22:45.420
to minimize the difference to the set point.
link |
00:22:47.420
And it might be confused and sometimes impulsive or whatever,
link |
00:22:51.220
but it's acting on its own free will.
link |
00:22:53.340
And when you understand what happens
link |
00:22:55.420
in the mind of the child, you see that it's automatic.
link |
00:22:58.540
And you can outmodel the child,
link |
00:23:00.300
you can build things around the child
link |
00:23:02.300
that will lead the child to making exactly the decision
link |
00:23:05.260
that you are predicting.
link |
00:23:06.740
And under these circumstances,
link |
00:23:08.740
like when you are a stage magician
link |
00:23:10.500
or somebody who is dealing with people
link |
00:23:13.420
that you sell a car to,
link |
00:23:15.260
and you completely understand the psychology
link |
00:23:17.300
and the impulses and the space of thoughts
link |
00:23:19.660
that this individual can have at that moment.
link |
00:23:21.580
Under these circumstances,
link |
00:23:22.620
it makes no sense to attribute free will.
link |
00:23:26.060
Because it's no longer decision making under uncertainty.
link |
00:23:28.220
You are already certain.
link |
00:23:29.220
For them, there's uncertainty,
link |
00:23:30.500
but you already know what they're doing.
link |
00:23:33.780
But what about for you?
link |
00:23:34.980
So is this akin to like systems like cellular automata
link |
00:23:40.500
where it's deterministic,
link |
00:23:43.300
but when you squint your eyes a little bit,
link |
00:23:46.940
it starts to look like there's agents making decisions
link |
00:23:50.780
at the higher sort of when you zoom out
link |
00:23:53.780
and look at the entities
link |
00:23:55.020
that are composed by the individual cells.
link |
00:23:58.460
Even though there's underlying simple rules
link |
00:24:02.060
that make the system evolve in deterministic ways,
link |
00:24:07.540
it looks like there's organisms making decisions.
link |
00:24:10.780
Is that where the illusion of free will emerges,
link |
00:24:14.500
that jump in scale?
link |
00:24:16.740
It's a particular type of model,
link |
00:24:18.500
but this jump in scale is crucial.
link |
00:24:20.700
The jump in scale happens whenever
link |
00:24:22.380
you have too many parts to count
link |
00:24:23.820
and you cannot make a model at that level
link |
00:24:25.780
and you try to find some higher level regularity.
link |
00:24:28.780
And the higher level regularity is a pattern
link |
00:24:30.900
that you project into the world to make sense of it.
link |
00:24:34.660
And agency is one of these patterns, right?
link |
00:24:36.460
You have all these cells that interact with each other
link |
00:24:39.700
and the cells in our body are set up in such a way
link |
00:24:42.220
that they benefit if their behavior is coherent,
link |
00:24:45.060
which means that they act
link |
00:24:46.580
as if they were serving a common goal.
link |
00:24:49.180
And that means that they will evolve regulation mechanisms
link |
00:24:52.340
that act as if they were serving a common goal.
link |
00:24:55.300
And now you can make sense of all these cells
link |
00:24:57.620
by projecting the common goal into them.
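This emergence of apparent agency from deterministic local rules can be sketched with Conway's Game of Life (an illustrative choice of cellular automaton, not one named in the conversation): the update rule knows nothing about "agents," yet a glider reads as a coherent entity traveling across the grid.

```python
from itertools import product

def step(cells):
    """One deterministic Game of Life update on a set of live (row, col) cells."""
    counts = {}
    for (r, c) in cells:
        for dr, dc in product((-1, 0, 1), repeat=2):
            if (dr, dc) != (0, 0):
                counts[(r + dr, c + dc)] = counts.get((r + dr, c + dc), 0) + 1
    # A cell is live next step if it has 3 live neighbors,
    # or 2 live neighbors and is already live.
    return {p for p, n in counts.items() if n == 3 or (n == 2 and p in cells)}

# A glider: only local cell rules exist, but zoomed out it looks like
# an organism moving diagonally across the grid.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After 4 steps the same shape reappears, shifted one cell down-right.
```

Nothing in `step` refers to gliders; the "agent" is a pattern the observer projects onto the cells, exactly the jump in scale being described.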
link |
00:24:59.900
Right, so for you then, free will is an illusion.
link |
00:25:03.340
No, it's a model and it's a construct.
link |
00:25:06.460
It's basically a model that the system is making
link |
00:25:08.580
of its own behavior.
link |
00:25:09.420
And it's the best model that it can come up with
link |
00:25:11.500
under the circumstances.
link |
00:25:12.740
And it can get replaced by a different model,
link |
00:25:14.740
which is automatic behavior,
link |
00:25:16.420
when you fully understand the mechanism
link |
00:25:17.980
under which you are acting.
link |
00:25:19.180
Yeah, but another word for model is what, story.
link |
00:25:23.860
So it's the story you're telling.
link |
00:25:25.300
I mean, do you actually have control?
link |
00:25:27.340
Is there such a thing as a you
link |
00:25:29.420
and is there such a thing as you being in control?
link |
00:25:33.980
So like, are you manifesting your evolution as an entity?
link |
00:25:42.020
In some sense, the you is the model of the system
link |
00:25:44.380
that is in control.
link |
00:25:45.660
It's a story that the system tells itself
link |
00:25:47.860
about somebody who is in control.
link |
00:25:50.340
Yeah.
link |
00:25:51.180
And the contents of that model are being used
link |
00:25:53.060
to inform the behavior of the system.
link |
00:25:56.940
Okay.
link |
00:25:57.780
So the system is completely mechanical
link |
00:26:00.500
and the system creates that story like a loom.
link |
00:26:03.300
And then it uses the contents of that story
link |
00:26:06.020
to inform its actions
link |
00:26:07.460
and writes the results of that actions into the story.
link |
00:26:11.220
So how's that not an illusion?
link |
00:26:13.380
The story is written then,
link |
00:26:16.220
or rather we're not the writers of the story.
link |
00:26:21.260
Yes, but we always knew that.
link |
00:26:24.060
No, we don't know that.
link |
00:26:25.300
When did we know that?
link |
00:26:26.740
I think that's mostly a confusion about concepts.
link |
00:26:29.260
The conceptual illusion in our culture
link |
00:26:31.980
comes from the idea that we live in physical reality
link |
00:26:35.700
and that we experience physical reality
link |
00:26:37.460
and that you have ideas about it.
link |
00:26:39.500
And then you have this dualist interpretation
link |
00:26:41.660
where you have two substances, res extensa,
link |
00:26:45.060
the world that you can touch
link |
00:26:46.940
and that is made of extended things
link |
00:26:48.980
and res cogitans, which is the world of ideas.
link |
00:26:51.620
And in fact, both of them are mental representations.
link |
00:26:54.580
One is the representations of the world as a game engine
link |
00:26:57.900
that your mind generates to make sense of the perceptual data.
link |
00:27:01.100
And the other one,
link |
00:27:02.260
yes, that's what we perceive as the physical world.
link |
00:27:04.460
But we already know that the physical world
link |
00:27:05.940
is nothing like that, right?
link |
00:27:07.020
Quantum mechanics is very different
link |
00:27:08.860
from what you and me perceive as the world.
link |
00:27:11.340
The world that you and me perceive as a game engine.
link |
00:27:14.820
And there are no colors and sounds in the physical world.
link |
00:27:17.180
They only exist in the game engine generated by your brain.
link |
00:27:20.100
And then you have ideas
link |
00:27:21.500
that cannot be mapped onto extended regions, right?
link |
00:27:24.740
So the objects that have a spatial extension
link |
00:27:26.940
in the game engine, res extensa,
link |
00:27:29.500
and the objects that don't have a physical extension
link |
00:27:31.460
in the game engine are ideas.
link |
00:27:34.540
And they both interact in our mind
link |
00:27:36.140
to produce models of the world.
link |
00:27:38.220
Yep, but, you know, when you play video games,
link |
00:27:42.780
I understand that what's actually happening
link |
00:27:45.020
is zeros and ones inside of a computer,
link |
00:27:50.020
inside of a CPU and a GPU,
link |
00:27:52.820
but you're still seeing like the rendering of that.
link |
00:27:58.140
And you're still making decisions,
link |
00:28:00.700
whether to shoot, to turn left or to turn right,
link |
00:28:03.740
if you're playing a shooter,
link |
00:28:04.660
or every time I started thinking about Skyrim
link |
00:28:07.100
and Elder Scrolls and walking around in beautiful nature
link |
00:28:09.860
and swinging a sword.
link |
00:28:10.900
But it feels like you're making decisions
link |
00:28:13.100
inside that video game.
link |
00:28:15.060
So even though you don't have direct access
link |
00:28:17.220
in terms of perception to the bits,
link |
00:28:21.220
to the zeros and ones,
link |
00:28:22.660
it still feels like you're making decisions
link |
00:28:24.860
and your decisions actually feels
link |
00:28:27.900
like they're being applied all the way down
link |
00:28:30.740
to the zeros and ones.
link |
00:28:32.300
So it feels like you have control,
link |
00:28:33.460
even though you don't have direct access to reality.
link |
00:28:36.540
So there is basically a special character
link |
00:28:38.780
in the video game that is being created
link |
00:28:40.420
by the video game engine.
link |
00:28:42.100
And this character is serving the aesthetics
link |
00:28:43.820
of the video game, and that is you.
link |
00:28:47.060
Yes, but I feel like I have control inside the video game.
link |
00:28:50.900
Like all those like 12 year olds
link |
00:28:53.060
that kick my ass on the internet.
link |
00:28:55.420
So when you play the video game,
link |
00:28:57.700
it doesn't really matter that there's zeros and ones, right?
link |
00:28:59.900
You don't care about the bits of the past.
link |
00:29:01.700
You don't care about the nature of the CPU
link |
00:29:03.540
that it runs on.
link |
00:29:04.460
What you care about are the properties of the game
link |
00:29:06.700
that you're playing.
link |
00:29:07.780
And you hope that the CPU is good enough.
link |
00:29:10.060
Yes.
link |
00:29:10.900
And a similar thing happens when we interact with physics.
link |
00:29:13.300
The world that you and me are in is not the physical world.
link |
00:29:15.980
The world that you and me are in is a dream world.
link |
00:29:19.580
How close is it to the real world though?
link |
00:29:23.420
We know that it's not very close,
link |
00:29:25.020
but we know that the dynamics of the dream world
link |
00:29:27.500
match the dynamics of the physical world
link |
00:29:29.300
to a certain degree of resolution.
link |
00:29:31.060
But the causal structure of the dream world is different.
link |
00:29:35.220
So you see for instance waves crashing on your feet, right?
link |
00:29:38.180
But there are no waves in the ocean.
link |
00:29:39.420
There's only water molecules that have tangents
link |
00:29:42.420
between the molecules that are the result of electrons
link |
00:29:47.340
in the molecules interacting with each other.
link |
00:29:50.060
Aren't they like very consistent?
link |
00:29:52.140
We're just seeing a very crude approximation.
link |
00:29:55.700
Isn't our dream world very consistent,
link |
00:29:59.340
like to the point of being mapped directly one to one
link |
00:30:02.980
to the actual physical world
link |
00:30:04.260
as opposed to us being completely tricked?
link |
00:30:07.660
Is this like where you have, like, Donald Hoffman?
link |
00:30:09.220
It's not a trick.
link |
00:30:10.060
That's my point.
link |
00:30:10.900
It's not an illusion.
link |
00:30:11.860
It's a form of data compression.
link |
00:30:13.700
It's an attempt to deal with the dynamics
link |
00:30:15.420
of too many parts to count
link |
00:30:16.940
at the level at which we are entangled
link |
00:30:18.700
with the best model that you can find.
link |
00:30:20.740
Yeah, so we can act in that dream world
link |
00:30:22.700
and our actions have impact in the real world,
link |
00:30:26.140
in the physical world to which we don't have access.
link |
00:30:28.620
Yes, but it's basically like accepting the fact
link |
00:30:31.860
that the software that we live in,
link |
00:30:33.180
the dream that we live in is generated
link |
00:30:35.380
by something outside of this world that you and me are in.
link |
00:30:38.060
So is the software deterministic
link |
00:30:40.060
and do we not have any control?
link |
00:30:42.260
Do we have, so is free will having a conscious being?
link |
00:30:49.620
Free will is the monkey being able to steer the elephant.
link |
00:30:55.300
No, it's slightly different.
link |
00:30:58.060
Basically in the same way as you are modeling
link |
00:31:00.540
the water molecules in the ocean that engulf your feet
link |
00:31:03.460
when you are walking on the beach as waves
link |
00:31:05.980
and there are no waves,
link |
00:31:07.380
but only the atoms, or more complicated stuff
link |
00:31:09.780
underneath the atoms and so on.
link |
00:31:11.820
And you know that, right?
link |
00:31:14.020
You would accept, yes,
link |
00:31:15.300
there is a certain abstraction that happens here.
link |
00:31:17.660
It's a simplification of what happens
link |
00:31:19.420
a simplification that is designed
link |
00:31:22.100
in such a way that your brain can deal with it,
link |
00:31:24.260
temporally and spatially in terms of resources
link |
00:31:27.020
and tuned for the predictive value.
link |
00:31:28.740
So you can predict with some accuracy
link |
00:31:31.220
whether your feet are going to get wet or not.
link |
00:31:33.380
But it's a really good interface and approximation.
link |
00:31:37.620
It's like, E equals mc squared is a good,
link |
00:31:40.340
equations are good approximations,
link |
00:31:43.100
they're much better approximations.
link |
00:31:45.780
So to me, waves is a really nice approximation
link |
00:31:49.380
of what's all the complexity that's happening underneath.
link |
00:31:51.940
Basically it's a machine learning model
link |
00:31:53.140
that is constantly tuned to minimize surprises.
link |
00:31:55.580
So it basically tries to predict as well as it can
link |
00:31:58.540
what you're going to perceive next.
link |
00:31:59.780
Are we talking about, which is the machine learning model?
link |
00:32:02.620
Our perception system or the dream world?
link |
00:32:05.700
The dream world is the result
link |
00:32:08.260
of the machine learning process of the perceptual system.
link |
00:32:11.180
That's doing the compression.
link |
00:32:12.220
Yes.
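The perceptual system as a model "constantly tuned to minimize surprises" can be sketched as a toy online predictor (purely illustrative; the scalar signal, squared-error framing, and learning rate are assumptions, not anything stated in the conversation):

```python
# A toy "perceptual system": an online predictor that is nudged at each
# step to reduce surprise (prediction error) about the next observation.
def run_predictor(stream, learning_rate=0.2):
    prediction = 0.0
    surprises = []
    for observation in stream:
        surprise = observation - prediction
        surprises.append(abs(surprise))
        # Update the model in the direction that reduces future surprise.
        prediction += learning_rate * surprise
    return surprises

# On a regular signal, surprise shrinks as the model tunes itself to it.
errs = run_predictor([1.0] * 50)
```

The "dream world" in this analogy is the sequence of predictions, not the raw stream: the system only ever works with its own tuned model of what comes next.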
link |
00:32:13.060
And the model of you as an agent
link |
00:32:15.860
is not a different type of model or it's a different type,
link |
00:32:19.460
but not different as in its model like nature
link |
00:32:23.180
from the model of the ocean, right?
link |
00:32:25.540
Some things are oceans, some things are agents.
link |
00:32:28.260
And one of these agents is using your own control model,
link |
00:32:31.620
the output of your model,
link |
00:32:32.780
the things that you perceive yourself as doing.
link |
00:32:36.260
And that is you.
link |
00:32:38.180
What about the fact that when you're standing
link |
00:32:44.100
with the water on your feet and you're looking out
link |
00:32:47.340
into the vast open water of the ocean
link |
00:32:51.980
and then there's a beautiful sunset
link |
00:32:54.460
and the fact that it's beautiful
link |
00:32:56.540
and then maybe you have friends or a loved one with you
link |
00:32:59.180
and you feel love, what is that?
link |
00:33:00.900
As the dream world or what is that?
link |
00:33:02.700
Yes, it's all happening inside of the dream.
link |
00:33:05.620
Okay.
link |
00:33:06.860
But see, the word dream makes it seem like it's not real.
link |
00:33:11.380
No, of course it's not real.
link |
00:33:14.940
The physical universe is real,
link |
00:33:16.540
but the physical universe is incomprehensible
link |
00:33:18.620
and it doesn't have any feeling of realness.
link |
00:33:21.060
The feeling of realness that you experience
link |
00:33:22.900
gets attached to certain representations
link |
00:33:25.420
where your brain assesses,
link |
00:33:26.620
this is the best model of reality that I have.
link |
00:33:28.500
So the only thing that's real to you
link |
00:33:30.820
is the thing that's happening at the very base of reality.
link |
00:33:34.740
Yeah, for something to be real, it needs to be implemented.
link |
00:33:40.020
So the model that you have of reality
link |
00:33:42.340
is real in as far as it is a model.
link |
00:33:45.300
It's an appropriate description of the world
link |
00:33:47.860
to say that there are models that are being experienced,
link |
00:33:51.500
but the world that you experience
link |
00:33:54.700
is not necessarily implemented.
link |
00:33:56.900
There is a difference between a reality,
link |
00:33:59.380
a simulation and a simulacrum.
link |
00:34:02.220
The reality that we're talking about
link |
00:34:04.460
is something that fully emerges
link |
00:34:06.060
over a causally closed lowest layer.
link |
00:34:08.620
And the idea of physicalism is that we are in that layer,
link |
00:34:11.300
that basically our world emerges over that.
link |
00:34:13.460
Every alternative to physicalism is a simulation theory,
link |
00:34:16.060
which basically says that we are
link |
00:34:17.980
in some kind of simulation universe
link |
00:34:19.460
and the real world needs to be in a parent universe of that,
link |
00:34:22.100
where the actual causal structure is, right?
link |
00:34:24.380
And when you look at the ocean and your own mind,
link |
00:34:27.660
you are looking at a simulation
link |
00:34:28.900
that explains what you're going to see next.
link |
00:34:31.460
So we are living in a simulation.
link |
00:34:32.860
Yes, but a simulation generated by our own brains.
link |
00:34:35.900
Yeah.
link |
00:34:36.740
And this simulation is different from the physical reality
link |
00:34:39.660
because the causal structure that is being produced,
link |
00:34:42.060
what you are seeing is different
link |
00:34:43.380
from the causal structure of physics.
link |
00:34:44.980
But consistent.
link |
00:34:46.780
Hopefully, if not, then you are going to end up
link |
00:34:49.780
in some kind of institution
link |
00:34:51.060
where people will take care of you
link |
00:34:52.220
because your behavior will be inconsistent, right?
link |
00:34:54.580
Your behavior needs to work in such a way
link |
00:34:57.220
that it's interacting with an accurately predictive
link |
00:35:00.140
model of reality.
link |
00:35:00.980
And if your brain is unable to make your model
link |
00:35:03.500
of reality predictive, you will need help.
link |
00:35:06.180
So what do you think about Donald Hoffman's argument
link |
00:35:10.260
that it doesn't have to be consistent,
link |
00:35:12.740
the dream world to what he calls like the interface
link |
00:35:17.820
to the actual physical reality,
link |
00:35:19.500
where there could be evolution?
link |
00:35:20.660
I think he makes an evolutionary argument,
link |
00:35:23.060
which is like, it could be an evolutionary advantage
link |
00:35:26.460
to have the dream world drift away from physical reality.
link |
00:35:30.940
I think that only works if you have tenure.
link |
00:35:32.780
As long as you're still interacting with the ground truth,
link |
00:35:35.260
your model needs to be somewhat predictive.
link |
00:35:38.980
Well, in some sense, humans have achieved a kind of tenure
link |
00:35:42.740
in the animal kingdom.
link |
00:35:45.100
Yeah.
link |
00:35:45.940
And at some point we became too big to fail,
link |
00:35:47.620
so we became postmodernists.
link |
00:35:51.420
It all makes sense now.
link |
00:35:52.660
We can just change the version of reality that we like.
link |
00:35:54.980
Oh man.
link |
00:35:56.500
Okay.
link |
00:35:57.380
Yeah, but basically you can do magic.
link |
00:36:00.220
You can change your assessment of reality,
link |
00:36:02.460
but eventually reality is going to come bite you in the ass
link |
00:36:05.580
if it's not predictive.
link |
00:36:06.820
Do you have a sense of what is that base layer
link |
00:36:11.220
of physical reality?
link |
00:36:12.580
You have like, so you have these attempts
link |
00:36:15.540
at the theories of everything,
link |
00:36:17.620
the very, very small, like string theory,
link |
00:36:21.140
or what Stephen Wolfram talks about with the hypergraphs.
link |
00:36:25.420
These are these tiny, tiny, tiny, tiny objects.
link |
00:36:28.540
And then there is more like quantum mechanics
link |
00:36:32.620
that's talking about objects that are much larger,
link |
00:36:34.900
but still very, very, very tiny.
link |
00:36:36.780
Do you have a sense of where the tiniest thing is
link |
00:36:40.060
that is like at the lowest level?
link |
00:36:42.900
The turtle at the very bottom.
link |
00:36:44.740
Do you have a sense what that turtle is?
link |
00:36:45.980
I don't think that you can talk about where it is
link |
00:36:48.580
because space is emerging over the activity of these things.
link |
00:36:51.580
So space, the coordinates only exist
link |
00:36:55.540
in relation to the things, other things.
link |
00:36:58.820
And so you could, in some sense, abstract it into locations
link |
00:37:01.740
that can hold information and trajectories
link |
00:37:04.300
that the information can take
link |
00:37:05.540
between the different locations.
link |
00:37:06.860
And this is how we construct our notion of space.
link |
00:37:10.380
And physicists usually have a notion of space
link |
00:37:14.100
that is continuous.
link |
00:37:15.700
And this is a point where I tend to agree
link |
00:37:19.140
with people like Stephen Wolfram
link |
00:37:20.980
who are very skeptical of the geometric notions.
link |
00:37:23.820
I think that geometry is the dynamics
link |
00:37:25.980
of too many parts to count.
link |
00:37:27.300
And when there are no infinities,
link |
00:37:30.820
if there were true infinities,
link |
00:37:32.500
you would be running into contradictions,
link |
00:37:34.220
which is in some sense what Gödel and Turing discovered
link |
00:37:37.780
in response to Hilbert's call.
link |
00:37:39.820
So there are no infinities.
link |
00:37:41.340
There are no infinities.
link |
00:37:42.180
Infinity is fake.
link |
00:37:43.020
There is unboundedness, but if you have a language
link |
00:37:45.300
that talks about infinity, at some point,
link |
00:37:47.340
the language is going to contradict itself,
link |
00:37:49.580
which means it's no longer valid.
link |
00:37:51.660
In order to deal with infinities and mathematics,
link |
00:37:54.020
you have to postulate their existence initially.
link |
00:37:57.580
You cannot construct the infinities.
link |
00:37:59.180
And that's an issue, right?
link |
00:38:00.180
You cannot build up an infinity from zero.
link |
00:38:02.700
But in practice, you never do this, right?
link |
00:38:04.700
When you perform calculations,
link |
00:38:06.020
you only look at the dynamics of too many parts to count.
link |
00:38:09.060
And usually these numbers are not that large.
link |
00:38:13.420
They're not googols or something.
link |
00:38:14.860
The infinities that we are dealing with in our universe
link |
00:38:18.540
are, mathematically speaking, relatively small integers.
link |
00:38:23.180
And still what we're looking at is dynamics
link |
00:38:26.540
where a trillion things behave similar
link |
00:38:30.660
to a hundred trillion things
link |
00:38:32.660
or something that is very, very large
link |
00:38:37.860
because they're converging.
link |
00:38:39.260
And these convergent dynamics, these operators,
link |
00:38:41.380
this is what we deal with when we are doing the geometry.
link |
00:38:45.020
Geometry is stuff where we can pretend that it's continuous
link |
00:38:48.420
because if we subdivide the space sufficiently fine grained,
link |
00:38:54.100
these things approach a certain dynamic.
link |
00:38:56.140
And this approach dynamic, that is what we mean by it.
link |
00:38:59.220
But I don't think that infinity would work, so to speak,
link |
00:39:02.860
that you would know the last digit of pi
link |
00:39:05.060
and that you have a physical process
link |
00:39:06.540
that rests on knowing the last digit of pi.
link |
00:39:09.420
Yeah, that could be just a peculiar quirk
link |
00:39:12.020
of human cognition that we like discrete.
link |
00:39:15.100
Discrete makes sense to us.
link |
00:39:16.660
Infinity doesn't, so in terms of our intuitions.
link |
00:39:19.900
No, the issue is that everything that we think about
link |
00:39:22.940
needs to be expressed in some kind of mental language,
link |
00:39:25.660
not necessarily natural language,
link |
00:39:27.740
but some kind of mathematical language
link |
00:39:29.860
that your neurons can speak
link |
00:39:31.700
that refers to something in the world.
link |
00:39:34.140
And what we have discovered
link |
00:39:35.460
is that we cannot construct a notion of infinity
link |
00:39:39.020
without running into contradictions,
link |
00:39:40.540
which means that such a language is no longer valid.
link |
00:39:43.620
And I suspect this is what made Pythagoras so unhappy
link |
00:39:46.780
when somebody came up with the notion of irrational numbers
link |
00:39:49.380
before it was time, right?
link |
00:39:50.420
There's this myth that he had this person killed
link |
00:39:52.700
when he blabbed out the secret
link |
00:39:54.140
that not everything can be expressed
link |
00:39:55.740
as a ratio between two numbers,
link |
00:39:57.300
but there are numbers between the ratios.
link |
00:39:59.740
The world was not ready for this.
link |
00:40:01.060
And I think he was right.
link |
00:40:02.380
That has confused mathematicians very seriously
link |
00:40:06.060
because these numbers are not values, they are functions.
link |
00:40:09.660
And so you can calculate these functions
link |
00:40:11.580
to a certain degree of approximation,
link |
00:40:13.260
but you cannot pretend that pi has actually a value.
link |
00:40:17.060
Pi is a function that would approach this value
link |
00:40:20.020
to some degree,
link |
00:40:21.500
but nothing in the world rests on knowing pi.
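The claim that pi is a function rather than a finished value can be illustrated with a convergent series (the Leibniz series is one illustrative procedure among many; it is not mentioned in the conversation):

```python
import math

def pi_partial(n):
    """Partial sum of the Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n))

# The procedure only ever approaches pi; no step of it "finishes" the value.
approximations = [pi_partial(n) for n in (10, 1000, 100000)]
errors = [abs(a - math.pi) for a in approximations]
```

Each partial sum is a concrete, computable number; "pi" names the limit the procedure approaches, which is exactly the sense in which nothing physical rests on its last digit.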
link |
00:40:26.980
How important is this distinction
link |
00:40:28.620
between discrete and continuous for you to get to the book?
link |
00:40:32.180
Because there's a, I mean, in discussion of your favorite
link |
00:40:36.580
flavor of the theory of everything,
link |
00:40:39.180
there's a few on the table.
link |
00:40:41.140
So there's string theory, there's a particular,
link |
00:40:45.260
there's loop quantum gravity,
link |
00:40:48.180
which focuses on one particular unification.
link |
00:40:53.180
There's just a bunch of favorite flavors
link |
00:40:56.140
of different people trying to propose
link |
00:40:59.460
a theory of everything.
link |
00:41:01.260
Eric Weinstein and a bunch of people throughout history.
link |
00:41:04.780
And then of course, Stephen Wolfram,
link |
00:41:06.660
who I think is one of the only people doing a discrete.
link |
00:41:10.860
No, no, there's a bunch of physicists
link |
00:41:12.580
who do this right now.
link |
00:41:13.700
And like Toffoli and Tomasello.
link |
00:41:17.660
And digital physics is something
link |
00:41:21.940
that is, I think, growing in popularity.
link |
00:41:24.460
But the main reason why this is interesting
link |
00:41:29.460
is because it's important sometimes to settle disagreements.
link |
00:41:34.700
I don't think that you need infinities at all,
link |
00:41:36.980
and you never needed them.
link |
00:41:38.940
You can always deal with very large numbers
link |
00:41:40.900
and you can deal with limits, right?
link |
00:41:42.260
We are fine with doing that.
link |
00:41:43.780
You don't need any kind of infinity.
link |
00:41:45.300
You can build your computer algebra systems just as well
link |
00:41:48.340
without believing in infinity in the first place.
link |
00:41:50.300
So you're okay with limits?
link |
00:41:51.940
Yeah, so basically a limit means that something
link |
00:41:54.420
is behaving pretty much the same
link |
00:41:57.460
if you make the number large.
link |
00:41:59.100
Right, because it's converging to a certain value.
link |
00:42:02.420
And at some point the difference becomes negligible
link |
00:42:04.780
and you can no longer measure it.
link |
00:42:06.620
And in this sense, you have things
link |
00:42:08.660
that if you have an n-gon which has enough corners,
link |
00:42:12.820
then it's going to behave like a circle at some point, right?
link |
00:42:15.180
And it's only going to be in some kind of esoteric thing
link |
00:42:18.380
that cannot exist in the physical universe
link |
00:42:21.060
that you would be talking about this perfect circle.
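The n-gon point can be checked numerically: the perimeter of a regular n-gon inscribed in a circle converges to the circle's circumference as the number of corners grows (a small illustrative computation, with the function name chosen here for clarity):

```python
import math

def ngon_perimeter(n, r=1.0):
    # Perimeter of a regular n-gon inscribed in a circle of radius r:
    # n sides, each of length 2*r*sin(pi/n).
    return 2 * n * r * math.sin(math.pi / n)

# As n grows, the difference from the circle's circumference becomes
# negligible, which is all the "limit" ever meant in practice.
circumference = 2 * math.pi  # unit circle
gaps = [circumference - ngon_perimeter(n) for n in (6, 96, 10**6)]
```

Already at 96 sides (Archimedes' polygon) the gap is tiny; at a million sides it is far below any physical measurement resolution.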
link |
00:42:23.820
And now it turns out that it also wouldn't work
link |
00:42:25.900
in mathematics because you cannot construct mathematics
link |
00:42:28.380
that has infinite resolution
link |
00:42:30.020
without running into contradictions.
link |
00:42:32.820
So that is itself not that important
link |
00:42:35.020
because we never did that, right?
link |
00:42:36.220
It's just a thing that some people thought we could.
link |
00:42:39.020
And this leads to confusion.
link |
00:42:40.780
So for instance, Roger Penrose uses this as an argument
link |
00:42:43.580
to say that there are certain things
link |
00:42:46.140
that mathematicians can do dealing with infinities
link |
00:42:50.580
and by extension our mind can do
link |
00:42:53.220
that computers cannot do.
link |
00:42:55.180
Yeah, he talks about that the human mind
link |
00:42:58.420
can do certain mathematical things
link |
00:43:00.780
that the computer as defined
link |
00:43:02.900
by the universal Turing machine cannot.
link |
00:43:06.140
Yes.
link |
00:43:07.180
So that it has to do with infinity.
link |
00:43:08.900
Yes, it's one of the things.
link |
00:43:10.260
So he is basically pointing at the fact
link |
00:43:13.100
that there are things that are possible
link |
00:43:15.580
in the mathematical mind and in pure mathematics
link |
00:43:21.420
that are not possible in machines
link |
00:43:24.100
that can be constructed in the physical universe.
link |
00:43:27.060
And because he's an honest guy,
link |
00:43:29.140
he thinks this means that present physics
link |
00:43:31.660
cannot explain operations that happen in our mind.
link |
00:43:34.860
Do you think he's right?
link |
00:43:35.780
And so let's leave his discussion
link |
00:43:38.700
of consciousness aside for the moment.
link |
00:43:40.780
Do you think he's right about just
link |
00:43:42.780
what he's basically referring to as intelligence?
link |
00:43:46.060
So is the human mind fundamentally more capable
link |
00:43:50.780
as a thinking machine than a universal Turing machine?
link |
00:43:53.940
No.
link |
00:43:55.460
But so he's suggesting that, right?
link |
00:43:58.740
So our mind is actually less than a Turing machine.
link |
00:44:01.020
There can be no Turing machine
link |
00:44:02.100
because it's defined as having an infinite tape.
link |
00:44:05.100
And we always only have a finite tape.
link |
00:44:07.260
But he's saying it's better.
link |
00:44:08.100
Our minds can only perform finitely many operations.
link |
00:44:10.140
Yes, he thinks so.
link |
00:44:10.980
He's saying it can do the kind of computation
link |
00:44:13.100
that the Turing machine cannot.
link |
00:44:14.620
And that's because he thinks that our minds
link |
00:44:16.660
can do operations that have infinite resolution
link |
00:44:19.500
in some sense.
link |
00:44:21.020
And I don't think that's the case.
link |
00:44:23.260
Our minds are just able to discover these limit operators
link |
00:44:26.340
over too many parts to count.
link |
00:44:27.700
I see.
link |
00:44:30.300
What about his idea that consciousness
link |
00:44:32.740
is more than a computation?
link |
00:44:37.460
So it's more than something that a Turing machine can do.
link |
00:44:42.100
So again, saying that there's something special
link |
00:44:44.540
about our mind that cannot be replicated in a machine.
link |
00:44:49.820
The issue is that I don't even know
link |
00:44:51.380
how to construct a language to express
link |
00:44:54.300
this statement correctly.
link |
00:44:56.420
Well,
link |
00:45:01.100
the basic statement is there's a human experience
link |
00:45:06.900
that includes intelligence, that includes self awareness,
link |
00:45:09.420
that includes the hard problem of consciousness.
link |
00:45:12.980
And the question is, can that be fully simulated
link |
00:45:16.860
in the computer, in the mathematical model of the computer
link |
00:45:20.940
as we understand it today?
link |
00:45:23.620
Roger Penrose says no.
link |
00:45:25.900
So the universal Turing machine
link |
00:45:30.220
cannot simulate the universe.
link |
00:45:32.460
So the interesting question is,
link |
00:45:34.420
and you have to ask him this is, why not?
link |
00:45:36.500
What is this specific thing that cannot be modeled?
link |
00:45:39.900
And when I looked at his writings
link |
00:45:42.340
and I haven't read all of it,
link |
00:45:43.540
but when I read, for instance,
link |
00:45:45.940
the section that he writes in the introduction
link |
00:45:49.060
to a road to infinity,
link |
00:45:51.020
the thing that he specifically refers to
link |
00:45:53.260
is the way in which human minds deal with infinities.
link |
00:45:57.660
And that itself can, I think, easily be deconstructed.
link |
00:46:03.060
A lot of people feel that our experience
link |
00:46:05.580
cannot be explained in a mechanical way.
link |
00:46:08.660
And therefore it needs to be different.
link |
00:46:11.060
And I concur, our experience is not mechanical.
link |
00:46:14.500
Our experience is simulated.
link |
00:46:16.700
It exists only in a simulation.
link |
00:46:18.420
Only a simulation can be conscious.
link |
00:46:19.980
Physical systems cannot be conscious
link |
00:46:21.580
because they're only mechanical.
link |
00:46:23.020
Cells cannot be conscious.
link |
00:46:25.100
Neurons cannot be conscious.
link |
00:46:26.300
Brains cannot be conscious.
link |
00:46:27.460
People cannot be conscious
link |
00:46:28.660
insofar as you understand them as physical systems.
link |
00:46:31.620
What can be conscious is the story of the system
link |
00:46:36.220
in the world where you write all these things
link |
00:46:37.980
into the story.
link |
00:46:39.420
You have experiences for the same reason
link |
00:46:41.420
that a character in a novel has experiences
link |
00:46:43.260
because it's written into the story.
link |
00:46:45.780
And now the system is acting on that story.
link |
00:46:48.220
And it's not a story that is written in a natural language.
link |
00:46:50.660
It's written in a perceptual language,
link |
00:46:52.500
in this multimedia language of the game engine.
link |
00:46:55.380
And in there, you write in what kind of experience you have
link |
00:46:59.340
and what this means for the behavior of the system,
link |
00:47:01.460
for your behavior tendencies, for your focus,
link |
00:47:03.700
for your attention, for your experience of valence
link |
00:47:05.460
and so on.
link |
00:47:06.420
And this is being used to inform the behavior of the system
link |
00:47:09.620
in the next step.
link |
00:47:10.740
And then the story updates with the reactions of the system
link |
00:47:15.780
and the changes in the world and so on.
link |
00:47:17.780
And you live inside of that model.
link |
00:47:19.340
You don't live inside of the physical reality.
link |
00:47:23.420
And I mean, just to linger on it, like you say, okay,
link |
00:47:28.820
it's in the perceptual language,
link |
00:47:30.860
the multimodal perceptual language.
link |
00:47:33.300
That's the experience.
link |
00:47:34.900
That's what consciousness is within that model,
link |
00:47:38.900
within that story.
link |
00:47:40.860
But do you have agency?
link |
00:47:43.980
When you play a video game, you can turn left
link |
00:47:46.020
and you can turn right in that story.
link |
00:47:49.620
So in that dream world, how much control do you have?
link |
00:47:54.220
Is there such a thing as you in that story?
link |
00:47:57.620
Like, is it right to say the main character,
link |
00:48:00.980
you know, everybody's NPCs,
link |
00:48:02.540
and then there's the main character
link |
00:48:04.380
and you're controlling the main character?
link |
00:48:07.020
Or is that an illusion?
link |
00:48:08.700
Is there a main character that you're controlling?
link |
00:48:10.900
I'm getting to the point of like the free will point.
link |
00:48:14.540
Imagine that you are building a robot that plays soccer.
link |
00:48:17.780
And you've been to MIT computer science,
link |
00:48:19.900
you basically know how to do that, right?
link |
00:48:22.060
And so you would say the robot is an agent
link |
00:48:25.300
that solves a control problem,
link |
00:48:27.780
how to get the ball into the goal.
link |
00:48:29.300
And it needs to perceive the world
link |
00:48:30.740
and the world is disturbing it in trying to do this, right?
link |
00:48:33.260
So it has to control many variables to make that happen
link |
00:48:35.660
and to project itself and the ball into the future
link |
00:48:38.820
and understand its position on the field
link |
00:48:40.700
relative to the ball and so on,
link |
00:48:42.140
and the position of its limbs
link |
00:48:44.620
or in the space around it and so on.
link |
00:48:46.940
So it needs to have an adequate model
link |
00:48:48.460
that abstracts reality in a useful way.
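[Editor's illustration: the control problem described here, sketched as a toy loop. A one-dimensional field and all numbers are invented stand-ins for the robot's abstract model of itself, the ball, and the goal.]

```python
# Toy sketch of a soccer-robot control loop: the agent acts on an abstract
# model (1D positions) to drive the ball toward the goal. Hypothetical
# example, not any real robot's controller.

def step_toward(pos, target, speed=1.0):
    """Move a 1D position a bounded step toward a target."""
    delta = target - pos
    return pos + max(-speed, min(speed, delta))

def control_loop(robot, ball, goal, steps=20):
    """Each tick: compare model state with the goal state, then act."""
    for _ in range(steps):
        if abs(robot - ball) > 0.5:      # not at the ball yet: approach it
            robot = step_toward(robot, ball)
        else:                            # at the ball: push it toward the goal
            ball = step_toward(ball, goal)
            robot = step_toward(robot, ball)
        if abs(ball - goal) < 0.5:       # control objective reached
            break
    return robot, ball

robot, ball = control_loop(robot=0.0, ball=5.0, goal=10.0)
```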
link |
00:48:51.380
And you could say that this robot does have agency
link |
00:48:55.900
over what it's doing in some sense.
link |
00:48:58.420
And the model is going to be a control model.
link |
00:49:01.500
And inside of that control model,
link |
00:49:03.060
you can possibly get to a point
link |
00:49:05.780
where this thing is sufficiently abstract
link |
00:49:07.820
to discover its own agency.
link |
00:49:09.540
Our current robots don't do that.
link |
00:49:10.860
They don't have a unified model of the universe,
link |
00:49:13.140
but there's not a reason why we shouldn't be getting there
link |
00:49:16.140
at some point in the not too distant future.
link |
00:49:18.660
And once that happens,
link |
00:49:20.060
you will notice that the robot tells a story
link |
00:49:23.220
about a robot playing soccer.
link |
00:49:25.980
So the robot will experience itself playing soccer
link |
00:49:29.420
in a simulation of the world that it uses
link |
00:49:32.060
to construct a model of the locations of its legs
link |
00:49:35.340
and limbs in space on the field
link |
00:49:38.180
with relationship to the ball.
link |
00:49:39.380
And it's not going to be at the level of the molecules.
link |
00:49:42.220
It will be an abstraction that is exactly at the level
link |
00:49:45.300
that is most suitable for path planning
link |
00:49:47.420
of the movements of the robot.
link |
00:49:49.940
It's going to be a high level abstraction,
link |
00:49:51.420
but a very useful one that is as predictive
link |
00:49:53.700
as we can make it.
link |
00:49:55.180
And inside of that story,
link |
00:49:56.580
there is a model of the agency of that system.
link |
00:49:58.780
So this model can accurately predict
link |
00:50:03.060
that the contents of the model
link |
00:50:04.740
are going to be driving the behavior of the robot
link |
00:50:07.380
in the immediate future.
link |
00:50:08.900
But there's the hard problem of consciousness,
link |
00:50:12.340
which I would also,
link |
00:50:14.580
there's a subjective experience of free will as well
link |
00:50:18.060
that I'm not sure where the robot gets that,
link |
00:50:20.740
where that little leap is.
link |
00:50:22.660
Because for me right now,
link |
00:50:24.260
everything I imagine with that robot,
link |
00:50:26.260
as it gets more and more and more sophisticated,
link |
00:50:29.020
the agency comes from the programmer of the robot still,
link |
00:50:33.540
of what was programmed in.
link |
00:50:35.820
You could probably do an end to end learning system.
link |
00:50:38.500
You maybe need to give it a few priors.
link |
00:50:40.300
So you nudge the architecture in the right direction
link |
00:50:42.460
that it converges more quickly,
link |
00:50:44.340
but ultimately discovering the suitable hyperparameters
link |
00:50:47.980
of the architecture is also only a search process.
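[Editor's illustration: hyperparameter discovery as plain search. The scoring function and parameter ranges below are invented stand-ins for "train and evaluate the learner".]

```python
# Random search over hyperparameters: sample candidate settings, keep the
# best-scoring one. Hypothetical objective; not any specific system's setup.
import random

def score(lr, width):
    # Stand-in for training and evaluating; peaks at lr=0.1, width=64.
    return -((lr - 0.1) ** 2) - ((width - 64) / 100) ** 2

def random_search(trials=200, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        lr = rng.uniform(0.001, 1.0)
        width = rng.choice([16, 32, 64, 128])
        s = score(lr, width)
        if s > best_score:
            best, best_score = (lr, width), s
    return best

best = random_search()
```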
link |
00:50:50.340
And one such search process was evolution,
link |
00:50:52.740
which has informed our brain architecture
link |
00:50:55.300
so we can converge in a single lifetime
link |
00:50:57.380
on useful interaction with the world
link |
00:50:59.500
and the formation of a self model.
link |
00:51:00.340
The problem is if we define hyperparameters broadly,
link |
00:51:03.500
so it's not just the parameters that control
link |
00:51:06.820
this end to end learning system,
link |
00:51:08.700
but the entirety of the design of the robot.
link |
00:51:11.180
Like there's, you have to remove the human completely
link |
00:51:15.060
from the picture.
link |
00:51:15.900
And then in order to build the robot,
link |
00:51:17.300
you have to create an entire universe.
link |
00:51:20.340
Cause you have to go, you can't just shortcut evolution.
link |
00:51:22.620
You have to go from the very beginning
link |
00:51:24.620
in order for it to have,
link |
00:51:25.860
cause I feel like there's always a human
link |
00:51:28.020
pulling the strings and that makes it seem like
link |
00:51:32.620
the robot is cheating.
link |
00:51:33.900
It's getting a shortcut to consciousness.
link |
00:51:35.940
And you are looking at the current Boston Dynamics robots.
link |
00:51:38.300
It doesn't look as if there is somebody
link |
00:51:40.140
pulling the strings.
link |
00:51:40.980
It doesn't look like cheating anymore.
link |
00:51:42.420
Okay, so let's go there.
link |
00:51:43.420
Cause I got to talk to you about this.
link |
00:51:44.860
So obviously with the case of Boston Dynamics,
link |
00:51:47.740
as you may or may not know,
link |
00:51:49.780
it's always either hard coded or remote controlled.
link |
00:51:54.100
There's no intelligence.
link |
00:51:55.220
I don't know how the current generation
link |
00:51:57.460
of Boston Dynamics robots works,
link |
00:51:59.060
but what I've been told about the previous ones
link |
00:52:02.020
was that it's basically all cybernetic control,
link |
00:52:05.260
which means you still have feedback mechanisms and so on,
link |
00:52:08.620
but it's not deep learning for the most part
link |
00:52:11.340
as it's currently done.
link |
00:52:13.220
It's for the most part,
link |
00:52:14.700
just identifying a control hierarchy
link |
00:52:16.940
that is congruent to the limbs that exist
link |
00:52:19.820
and the parameters that need to be optimized
link |
00:52:21.460
for the movement of these limbs.
link |
00:52:22.580
And then there is a convergence process.
link |
00:52:24.500
So it's basically just regression
link |
00:52:26.220
that you would need to control this.
link |
00:52:27.900
But again, I don't know whether that's true.
link |
00:52:29.420
That's just what I've been told about how they work.
link |
00:52:31.420
We have to separate several levels of discussion here.
link |
00:52:35.020
So the only thing they do is pretty sophisticated control
link |
00:52:39.300
with no machine learning
link |
00:52:40.900
in order to maintain balance or to right itself.
link |
00:52:45.980
It's a control problem in terms of using the actuators
link |
00:52:49.380
to when it's pushed or when it steps on a thing
link |
00:52:52.420
that's uneven, how to always maintain balance.
link |
00:52:55.420
And there's a tricky set of heuristics around that,
link |
00:52:57.940
but that's the only goal.
link |
00:53:00.460
Everything you see Boston Dynamics doing
link |
00:53:02.660
in terms of that to us humans is compelling,
link |
00:53:06.140
which is any kind of higher order movement,
link |
00:53:09.460
like turning, wiggling its butt,
link |
00:53:13.220
like jumping back on its two feet, dancing.
link |
00:53:18.740
Dancing is even worse because dancing is hard coded in.
link |
00:53:22.460
It's choreographed by humans.
link |
00:53:25.300
There's choreography software.
link |
00:53:27.420
So there is no, of all that high level movement,
link |
00:53:30.900
there isn't anything that you can call,
link |
00:53:34.220
certainly can't call AI,
link |
00:53:35.940
but there aren't even, like, basic heuristics.
link |
00:53:39.500
It's all hard coded in.
link |
00:53:41.060
And yet we humans immediately project agency onto them,
link |
00:53:47.660
which is fascinating.
link |
00:53:48.900
So the robot here doesn't necessarily have agency.
link |
00:53:53.140
What it has is cybernetic control.
link |
00:53:55.340
And the cybernetic control means you have a hierarchy
link |
00:53:57.420
of feedback loops that keep the behavior
link |
00:53:59.740
in certain boundaries so the robot doesn't fall over
link |
00:54:02.340
and it's able to perform the movements.
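[Editor's illustration: the kind of feedback loop described here, at its simplest. The gain and disturbances are invented; this is not Boston Dynamics' actual control code.]

```python
# A single cybernetic feedback loop keeping a balance variable inside
# boundaries: measure the error against a set point, apply a proportional
# correction. Hypothetical parameters for illustration.

def balance_controller(tilt, disturbances, set_point=0.0, gain=0.8):
    history = []
    for push in disturbances:
        tilt += push                 # the world perturbs the robot
        error = tilt - set_point     # deviation from balance
        tilt -= gain * error         # corrective actuation
        history.append(tilt)
    return history

trace = balance_controller(0.0, disturbances=[2.0, -1.5, 0.5, 0.0, 0.0])
```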
link |
00:54:04.140
And the choreography cannot really happen
link |
00:54:06.660
with motion capture because the robot would fall over
link |
00:54:09.220
because the physics of the robot,
link |
00:54:10.620
the weight distribution and so on is different
link |
00:54:12.780
from the weight distribution in the human body.
link |
00:54:15.340
So if you were using the directly motion captured movements
link |
00:54:19.580
of a human body to project it into this robot,
link |
00:54:21.740
it wouldn't work.
link |
00:54:22.580
You can do this with a computer animation.
link |
00:54:24.100
It will look a little bit off, but who cares?
link |
00:54:26.140
But if you want to correct for the physics,
link |
00:54:29.100
you need to basically tell the robot
link |
00:54:31.500
where it should move its limbs.
link |
00:54:33.740
And then the control algorithm is going
link |
00:54:35.860
to approximate a solution that makes it possible
link |
00:54:38.980
within the physics of the robot.
link |
00:54:41.020
And you have to find the basic solution
link |
00:54:43.900
for making that happen.
link |
00:54:44.780
And there's probably going to be some regression necessary
link |
00:54:47.580
to get the control architecture to make these movements.
link |
00:54:51.220
But those two layers are separate.
link |
00:54:52.660
So the thing, the higher level instruction
link |
00:54:56.180
of how you should move and where you should move
link |
00:54:59.060
is a higher level.
link |
00:54:59.900
Yeah, so I expect that the control level
link |
00:55:01.700
of these robots at some level is dumb.
link |
00:55:03.620
This is just the physical control movement,
link |
00:55:06.180
the motor architecture.
link |
00:55:07.860
But it's a relatively smart motor architecture.
link |
00:55:10.340
It's just that there is no high level deliberation
link |
00:55:12.500
about what decisions to make necessarily, right?
link |
00:55:14.620
But see, it doesn't feel like free will or consciousness.
link |
00:55:17.900
No, no, that was not where I was trying to get to.
link |
00:55:20.580
I think that in our own body, we have that too.
link |
00:55:24.540
So we have a certain thing that is basically
link |
00:55:26.900
just a cybernetic control architecture
link |
00:55:29.540
that is moving our limbs.
link |
00:55:31.300
And deep learning can help in discovering
link |
00:55:34.300
such an architecture if you don't have it
link |
00:55:35.940
in the first place.
link |
00:55:37.220
If you already know your hardware,
link |
00:55:38.620
you can maybe handcraft it.
link |
00:55:40.700
But if you don't know your hardware,
link |
00:55:41.900
you can search for such an architecture.
link |
00:55:43.740
And this work already existed in the 80s and 90s.
link |
00:55:46.980
People were starting to search for control architectures
link |
00:55:49.820
by motor babbling and so on,
link |
00:55:51.140
and just use reinforcement learning architectures
link |
00:55:53.900
to discover such a thing.
link |
00:55:55.580
And now imagine that you have
link |
00:55:57.740
the cybernetic control architecture already inside of you.
link |
00:56:01.540
And you extend this a little bit.
link |
00:56:03.700
So you are seeking out food, for instance,
link |
00:56:06.460
or rest, and so on.
link |
00:56:08.300
And you get to have a baby at some point.
link |
00:56:11.820
And now you add more and more control layers to this.
link |
00:56:15.740
And the system is reverse engineering
link |
00:56:17.780
its own control architecture
link |
00:56:19.620
and builds a high level model to synchronize
link |
00:56:22.460
the pursuit of very different conflicting goals.
link |
00:56:26.340
And this is how I think you get to purposes.
link |
00:56:28.180
Purposes are models of your goals.
link |
00:56:30.060
The goals may be intrinsic
link |
00:56:31.540
as the result of the different set point violations
link |
00:56:33.820
that you have,
link |
00:56:34.660
hunger and thirst for very different things,
link |
00:56:37.140
and rest and pain avoidance and so on.
link |
00:56:39.380
And you put all these things together
link |
00:56:41.100
and eventually you need to come up with a strategy
link |
00:56:44.180
to synchronize them all.
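[Editor's illustration: one way to picture synchronizing conflicting drives is a simple arbiter over set-point violations. The drives, set points, and weights below are invented.]

```python
# Each drive reports an urgency proportional to how far its variable is
# from its set point; the arbiter picks the behavior serving the most
# urgent drive. Hypothetical numbers for illustration.

def urgency(current, set_point, weight=1.0):
    return weight * abs(current - set_point)

def arbitrate(state):
    drives = {
        "eat":   urgency(state["blood_sugar"], set_point=1.0, weight=2.0),
        "rest":  urgency(state["fatigue"],     set_point=0.0),
        "drink": urgency(state["hydration"],   set_point=1.0),
    }
    return max(drives, key=drives.get)

choice = arbitrate({"blood_sugar": 0.4, "fatigue": 0.3, "hydration": 0.9})
```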
link |
00:56:46.020
And you don't need just to do this alone by yourself
link |
00:56:49.340
because we are state building organisms.
link |
00:56:51.340
We cannot function in isolation
link |
00:56:53.700
the way that homo sapiens is set up.
link |
00:56:55.820
So our own behavior only makes sense
link |
00:56:58.100
when you zoom out very far into a society
link |
00:57:00.980
or even into ecosystemic intelligence on the planet
link |
00:57:04.900
and our place in it.
link |
00:57:06.500
So the individual behavior only makes sense
link |
00:57:08.460
in these larger contexts.
link |
00:57:09.980
And we have a number of priors built into us.
link |
00:57:11.820
So we are behaving as if we were acting
link |
00:57:14.660
on these high level goals pretty much right from the start.
link |
00:57:17.900
And eventually in the course of our life,
link |
00:57:19.820
we can reverse engineer the goals that we're acting on,
link |
00:57:22.700
what actually are our higher level purposes.
link |
00:57:25.820
And the more we understand that,
link |
00:57:27.100
the more our behavior makes sense.
link |
00:57:28.660
But this is all at this point,
link |
00:57:30.380
complex stories within stories
link |
00:57:32.420
that are driving our behavior.
link |
00:57:34.580
Yeah, I just don't know how big of a leap it is
link |
00:57:38.500
to start creating a system
link |
00:57:40.940
that's able to tell stories within stories.
link |
00:57:44.340
Like how big of a leap that is
link |
00:57:45.580
from where currently Boston Dynamics is
link |
00:57:48.260
or any robot that's operating in the physical space.
link |
00:57:53.820
And that leap might be big
link |
00:57:56.220
if it requires solving the hard problem of consciousness,
link |
00:57:59.380
which is telling a hell of a good story.
link |
00:58:01.620
I suspect that consciousness itself is relatively simple.
link |
00:58:05.220
What's hard is perception
link |
00:58:07.300
and the interface between perception and reasoning.
link |
00:58:11.100
That's for instance, the idea of the consciousness prior
link |
00:58:14.700
that would be built into such a system by Yoshua Bengio.
link |
00:58:18.740
And what he describes, and I think that's accurate,
link |
00:58:22.260
is that our own model of the world
link |
00:58:27.260
can be described through something like an energy function.
link |
00:58:29.820
The energy function is modeling the contradictions
link |
00:58:32.700
that exist within the model at any given point.
link |
00:58:34.820
And you try to minimize these contradictions,
link |
00:58:36.620
the tensions in the model.
link |
00:58:38.340
And to do this, you need to sometimes test things.
link |
00:58:41.380
You need to conditionally disambiguate figure and ground.
link |
00:58:43.740
You need to distinguish whether this is true
link |
00:58:46.500
or that is true, and so on.
link |
00:58:47.940
Eventually you get to an interpretation,
link |
00:58:49.500
but you will need to manually depress a few points
link |
00:58:52.300
in your model to let it snap into a state that makes sense.
link |
00:58:55.580
And this function that tries to get the biggest dip
link |
00:58:57.740
in the energy function in your model,
link |
00:58:59.620
according to Yoshua Bengio, is related to consciousness.
link |
00:59:02.340
It's a low dimensional discrete function
link |
00:59:04.620
that tries to maximize this dip in the energy function.
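[Editor's illustration: a rough reading of this idea in code. This is the editor's construction, not Bengio's actual consciousness-prior formulation: out of many model variables, a low-dimensional discrete choice picks the one whose re-evaluation yields the biggest drop in the energy.]

```python
# Energy = amount of contradiction in the model; the "dip" selector finds
# the single binary variable whose flip lowers the energy the most.

def energy(state):
    # Hypothetical energy: disagreement between neighboring variables.
    return sum(abs(a - b) for a, b in zip(state, state[1:]))

def biggest_dip(state):
    """Return the index whose flip lowers the energy the most."""
    base = energy(state)
    dips = []
    for i in range(len(state)):
        trial = state[:i] + [1 - state[i]] + state[i + 1:]
        dips.append(base - energy(trial))
    return max(range(len(state)), key=lambda i: dips[i])

idx = biggest_dip([0, 1, 0, 0, 0])  # flipping index 1 removes both tensions
```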
link |
00:59:09.580
Yeah, I think I would need to dig into details
link |
00:59:13.340
because I think the way he uses the word consciousness
link |
00:59:15.580
is more akin to like self awareness,
link |
00:59:17.780
like modeling yourself within the world,
link |
00:59:20.860
as opposed to the subjective experience, the hard problem.
link |
00:59:23.660
No, it's not even that the self is in the world.
link |
00:59:26.580
The self is the agent and you don't need to be aware
link |
00:59:28.820
of yourself in order to be conscious.
link |
00:59:31.100
The self is just a particular content that you can have,
link |
00:59:34.380
but you don't have to have.
link |
00:59:35.980
But you can be conscious in, for instance, a dream at night
link |
00:59:39.700
or during a meditation state where you don't have a self.
link |
00:59:42.940
Right.
link |
00:59:43.780
Where you're just aware of the fact that you are aware.
link |
00:59:45.620
And what we mean by consciousness in the colloquial sense
link |
00:59:49.900
is largely this reflexive self awareness,
link |
00:59:53.820
that we become aware of the fact
link |
00:59:55.220
that we're paying attention,
link |
00:59:57.300
that we are the thing that pays attention.
link |
00:59:59.220
We are the thing that pays attention, right.
link |
01:00:02.020
I don't see where the awareness that we're aware,
link |
01:00:07.740
the hard problem doesn't feel like it's solved.
link |
01:00:10.620
I mean, it's called a hard problem for a reason,
link |
01:00:14.820
because it seems like there needs to be a major leap.
link |
01:00:19.380
Yeah, I think the major leap is to understand
link |
01:00:21.660
how it is possible that a machine can dream,
link |
01:00:25.300
that a physical system is able to create a representation
link |
01:00:29.540
that the physical system is acting on,
link |
01:00:31.260
and that it has spun forth, and so on.
link |
01:00:33.980
But once you accept the fact that you are not in physics,
link |
01:00:36.700
but that you exist inside of the story,
link |
01:00:39.220
I think the mystery disappears.
link |
01:00:40.620
Everything is possible in the story.
link |
01:00:41.940
You exist inside the story.
link |
01:00:43.340
Okay, so the machine.
link |
01:00:44.180
Your consciousness is being written into the story.
link |
01:00:45.780
The fact that you experience things
link |
01:00:47.340
is written into the story.
link |
01:00:48.860
You ask yourself, is this real what I'm seeing?
link |
01:00:51.300
And your brain writes into the story, yes, it's real.
link |
01:00:53.860
So what about the perception of consciousness?
link |
01:00:56.340
So to me, you look conscious.
link |
01:00:59.540
So the illusion of consciousness,
link |
01:01:02.500
the demonstration of consciousness.
link |
01:01:04.340
I ask for the legged robot.
link |
01:01:07.700
How do we make this legged robot conscious?
link |
01:01:10.580
So there's two things,
link |
01:01:12.820
and maybe you can tell me if they're neighboring ideas.
link |
01:01:16.340
One is actually make it conscious,
link |
01:01:18.860
and the other is make it appear conscious to others.
link |
01:01:22.620
Are those related?
link |
01:01:25.660
Let's ask it from the other direction.
link |
01:01:27.380
What would it take to make you not conscious?
link |
01:01:31.140
So when you are thinking about how you perceive the world,
link |
01:01:35.180
can you decide to switch from looking at qualia
link |
01:01:39.820
to looking at representational states?
link |
01:01:43.060
And it turns out you can.
link |
01:01:44.900
There is a particular way in which you can look at the world
link |
01:01:48.340
and recognize its machine nature, including your own.
link |
01:01:51.420
And in that state,
link |
01:01:52.420
you don't have that conscious experience
link |
01:01:54.260
in this way anymore.
link |
01:01:55.740
It becomes apparent as a representation.
link |
01:01:59.660
Everything becomes opaque.
link |
01:02:01.580
And I think this thing that you recognize,
link |
01:02:04.020
everything is a representation.
link |
01:02:05.380
This is typically what we mean by enlightenment states.
link |
01:02:09.100
And it can happen on the motivational level,
link |
01:02:11.700
but you can also do this on the experiential level,
link |
01:02:14.820
on the perceptual level.
link |
01:02:16.220
See, but then I can come back to a conscious state.
link |
01:02:20.420
Okay, I particularly,
link |
01:02:23.780
I'm referring to the social aspect
link |
01:02:26.940
that the demonstration of consciousness
link |
01:02:30.100
is a really nice thing at a party
link |
01:02:32.140
when you're trying to meet a new person.
link |
01:02:34.460
It's a nice thing to know that they're conscious
link |
01:02:38.300
and they can,
link |
01:02:41.020
I don't know how fundamental consciousness
link |
01:02:42.700
is in human interaction,
link |
01:02:43.900
but it seems like to be at least an important part.
link |
01:02:48.020
And I ask that in the same kind of way for robots.
link |
01:02:53.620
In order to create a rich, compelling
link |
01:02:56.340
human robot interaction,
link |
01:02:58.380
it feels like there needs to be elements of consciousness
link |
01:03:00.740
within that interaction.
link |
01:03:02.660
My cat is obviously conscious.
link |
01:03:04.900
And so my cat can do this party trick.
link |
01:03:07.380
She also knows that I am conscious,
link |
01:03:09.220
and is able to have feedback about the fact
link |
01:03:11.380
that we are both acting on models of our own awareness.
link |
01:03:14.860
The question is how hard is it for the robot,
link |
01:03:19.660
artificially created robot to achieve cat level
link |
01:03:22.060
at party tricks?
link |
01:03:24.380
Yes, so the issue for me is currently not so much
link |
01:03:27.300
on how to build a system that creates a story
link |
01:03:30.300
about a robot that lives in the world,
link |
01:03:32.860
but to make an adequate representation of the world.
link |
01:03:36.540
And the model that you and me have is a unified one.
link |
01:03:40.260
It's one where you basically make sense of everything
link |
01:03:44.060
that you can perceive.
link |
01:03:44.980
Every feature in the world that enters your perception
link |
01:03:47.940
can be relationally mapped to a unified model of everything.
link |
01:03:51.780
And we don't have an AI that is able to construct
link |
01:03:54.060
such a unified model yet.
link |
01:03:56.220
So you need that unified model to do the party trick?
link |
01:03:58.820
Yes, I think that it doesn't make sense
link |
01:04:01.780
if this thing is conscious,
link |
01:04:03.060
but not in the same universe as you,
link |
01:04:04.660
because you could not relate to each other.
link |
01:04:06.780
So what's the process, would you say,
link |
01:04:08.980
of engineering consciousness in the machine?
link |
01:04:12.060
Like what are the ideas here?
link |
01:04:14.580
So you probably want to have some kind of perceptual system.
link |
01:04:19.060
This perceptual system is a processing agent
link |
01:04:21.300
that is able to track sensory data
link |
01:04:23.860
and predict the next frame in the sensory data
link |
01:04:26.740
from the previous frames of the sensory data
link |
01:04:29.780
and the current state of the system.
link |
01:04:31.740
So the current state of the system is, in perception,
link |
01:04:34.580
instrumental to predicting what happens next.
link |
01:04:37.580
And this means you build lots and lots of functions
link |
01:04:39.780
that take all the blips that you feel on your skin
link |
01:04:42.100
and that you see on your retina, or that you hear,
link |
01:04:45.540
and puts them into a set of relationships
link |
01:04:48.100
that allows you to predict what kind of sensory data,
link |
01:04:51.180
what kind of sensor of blips, vector of blips,
link |
01:04:53.780
you're going to perceive in the next frame.
link |
01:04:56.060
This is tuned and it's constantly tuned
link |
01:04:59.180
until it gets as accurate as it can.
link |
01:05:01.940
You build a very accurate prediction mechanism
link |
01:05:05.100
that is step one of the perception.
link |
01:05:08.060
So first you predict, then you perceive
link |
01:05:09.900
and see the error in your prediction.
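[Editor's illustration: the predict-then-perceive loop, minimally. Scalar "frames" stand in for the vectors of sensory blips described here; numbers are invented.]

```python
# A running predictor guesses the next sensory frame, perceives the error,
# and tunes itself to shrink that error over time.

def perceive(frames, lr=0.5):
    prediction, errors = 0.0, []
    for frame in frames:
        error = frame - prediction     # what the prediction got wrong
        errors.append(abs(error))
        prediction += lr * error       # tune the model toward the data
    return errors

errors = perceive([1.0, 1.0, 1.0, 1.0, 1.0, 1.0])  # a constant stimulus
```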
link |
01:05:11.740
And you have to do two things to make that happen.
link |
01:05:13.820
One is you have to build a network of relationships
link |
01:05:16.900
that are constraints,
link |
01:05:18.460
that take all the variance in the world
link |
01:05:21.020
and put each of the variances into a variable
link |
01:05:24.420
that is connected with relationships to other variables.
link |
01:05:27.940
And these relationships are computable functions
link |
01:05:30.060
that constrain each other.
link |
01:05:31.140
So when you see a nose
link |
01:05:32.260
that points in a certain direction in space,
link |
01:05:34.900
you have a constraint that says
link |
01:05:36.100
there should be a face nearby that has the same direction.
link |
01:05:39.260
And if that is not the case,
link |
01:05:40.380
you have some kind of contradiction
link |
01:05:41.700
that you need to resolve
link |
01:05:42.540
because it's probably not a nose what you're looking at.
link |
01:05:44.620
It just looks like one.
link |
01:05:45.940
So you have to reinterpret the data
link |
01:05:48.580
until you get to a point where your model converges.
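[Editor's illustration: the nose-and-face example as a tiny assimilation step. The constraint and the data are invented.]

```python
# A percept "nose" demands a compatible "face" nearby; if the constraint
# is violated, the percept is reinterpreted so the model converges.

def consistent(interpretation):
    # Constraint: a nose pointing some direction implies a face nearby
    # with the same orientation.
    if interpretation["nose"] is None:
        return True
    return interpretation["face"] == interpretation["nose"]

def assimilate(percept, face_orientation):
    interpretation = {"nose": percept, "face": face_orientation}
    if not consistent(interpretation):
        interpretation["nose"] = None   # probably not a nose after all
    return interpretation

result = assimilate(percept="left", face_orientation="right")
```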
link |
01:05:52.460
And this process of making the sensory data
link |
01:05:54.940
fit into your model structure
link |
01:05:56.700
is what Piaget calls the assimilation.
link |
01:06:01.140
And accommodation is the change of the models
link |
01:06:04.060
where you change your model in such a way
link |
01:06:05.700
that you can assimilate everything.
link |
01:06:08.140
So you're talking about building
link |
01:06:09.860
a hell of an awesome perception system
link |
01:06:12.380
that's able to do prediction and perception
link |
01:06:14.700
and correct and keep improving.
link |
01:06:15.980
No, wait, that's...
link |
01:06:17.740
Wait, there's more.
link |
01:06:18.660
Yes, there's more.
link |
01:06:19.580
So the first thing that we wanted to do
link |
01:06:21.500
is we want to minimize the contradictions in the model.
link |
01:06:24.660
And of course, it's very easy to make a model
link |
01:06:26.700
in which you minimize the contradictions
link |
01:06:28.220
just by allowing that it can be
link |
01:06:29.700
in many, many possible states, right?
link |
01:06:31.500
So if you increase degrees of freedom,
link |
01:06:33.980
you will have fewer contradictions.
link |
01:06:35.860
But you also want to reduce the degrees of freedom
link |
01:06:37.820
because degrees of freedom mean uncertainty.
link |
01:06:40.260
You want your model to reduce uncertainty
link |
01:06:42.420
as much as possible,
link |
01:06:44.380
but reducing uncertainty is expensive.
link |
01:06:46.540
So you have to have a trade off
link |
01:06:47.780
between minimizing contradictions
link |
01:06:50.340
and reducing uncertainty.
link |
01:06:52.380
And you have only a finite amount of compute
link |
01:06:54.620
and experimental time and effort
link |
01:06:57.020
available to reduce uncertainty in the world.
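[Editor's illustration: the trade off under a finite budget, pictured as greedy selection over candidate observations. All names and numbers are invented.]

```python
# Each candidate observation offers some expected value (reduction in
# contradiction and uncertainty) at a compute cost; a finite budget forces
# spending on the most valuable observations first.

def plan_observations(candidates, budget):
    """Greedily pick the best value-per-cost observations within budget."""
    chosen = []
    for name, value, cost in sorted(
        candidates, key=lambda c: c[1] / c[2], reverse=True
    ):
        if cost <= budget:
            chosen.append(name)
            budget -= cost
    return chosen

candidates = [
    ("look_at_ball", 9.0, 3.0),   # (name, expected value, compute cost)
    ("scan_horizon", 2.0, 4.0),
    ("check_feet",   4.0, 1.0),
]
plan = plan_observations(candidates, budget=4.0)
```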
link |
01:06:59.260
So you need to assign value to what you observe.
link |
01:07:02.740
So you need some kind of motivational system
link |
01:07:05.060
that is estimating what you should be looking at
link |
01:07:07.660
and what you should be thinking about it,
link |
01:07:09.180
how you should be applying your resources
link |
01:07:10.900
to model what that is, right?
link |
01:07:12.900
So you need to have something like convergence links
link |
01:07:15.940
that tell you how to get from the present state
link |
01:07:17.540
of the model to the next one.
link |
01:07:19.020
You need to have these compatibility links
link |
01:07:20.620
that tell you which constraints exist
link |
01:07:23.500
and which constraint violations exist.
link |
01:07:25.500
And you need to have some kind of motivational system
link |
01:07:28.900
that tells you what to pay attention to.
link |
01:07:30.700
So now we have a second agent next to the perceptual agent.
link |
01:07:32.980
We have a motivational agent.
link |
01:07:34.860
This is a cybernetic system
link |
01:07:36.260
that is modeling what the system needs,
link |
01:07:38.740
what's important for the system,
link |
01:07:40.460
and that interacts with the perceptual system
link |
01:07:42.100
to maximize the expected reward.
link |
01:07:44.540
And you're saying the motivational system
link |
01:07:46.020
is some kind of like, what is it?
link |
01:07:49.580
A high level narrative over some lower level.
link |
01:07:52.500
No, it's just your brainstem stuff,
link |
01:07:53.980
the limbic system stuff that tells you,
link |
01:07:55.660
okay, now you should get something to eat
link |
01:07:57.660
because I've just measured your blood sugar.
link |
01:07:59.380
So you mean like motivational system,
link |
01:08:00.940
like the lower level stuff, like hungry.
link |
01:08:03.060
Yes, there's basically physiological needs
link |
01:08:05.700
and some cognitive needs and some social needs
link |
01:08:07.500
and they all interact.
link |
01:08:08.420
And they're all implemented at different parts
link |
01:08:10.220
in your nervous system as the motivational system.
link |
01:08:12.660
But they're basically cybernetic feedback loops.
link |
01:08:14.700
It's not that complicated.
link |
01:08:16.420
It's just a lot of code.
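A minimal sketch of one such loop, with invented numbers (the setpoint and band are placeholders, not physiology): a controller measures a variable and emits an urge signal whenever the value drifts out of its comfort range.

```python
# Minimal cybernetic feedback loop (illustrative; setpoint and band
# are invented placeholders, not physiological values).

def hunger_urge(blood_sugar, setpoint=90.0, band=10.0):
    """Return an urge in [0, 1]; zero while the value sits in the comfort band."""
    deficit = (setpoint - band) - blood_sugar
    if deficit <= 0:
        return 0.0                       # regulated: no signal needed
    return min(1.0, deficit / setpoint)  # below range: urge grows with deficit

print(hunger_urge(95.0))  # in range -> 0.0
print(hunger_urge(60.0))  # below range -> positive urge
```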
link |
01:08:18.260
And so you now have a motivational agent
link |
01:08:21.420
that makes your robot go for the ball
link |
01:08:23.100
or that makes your worm go to eat food and so on.
link |
01:08:27.580
And you have the perceptual system
link |
01:08:29.140
that lets it predict the environment
link |
01:08:30.580
so it's able to solve that control problem to some degree.
link |
01:08:33.620
And now what we learned is that it's very hard
link |
01:08:35.780
to build a machine learning system
link |
01:08:37.220
that looks at all the data simultaneously
link |
01:08:39.340
to see what kind of relationships
link |
01:08:41.300
could exist between them.
link |
01:08:43.260
So you need to selectively model the world.
link |
01:08:45.580
You need to figure out: where can I make the biggest difference
link |
01:06:48.300
if I put the following things together?
link |
01:08:50.980
Sometimes you find a gradient for that.
link |
01:08:53.020
When you have a gradient,
link |
01:08:54.180
you don't need to remember where you came from.
link |
01:08:56.460
You just follow the gradient
link |
01:08:57.540
until it doesn't get any better.
link |
01:08:59.340
But if you have a world where the problems are discontinuous
link |
01:09:02.140
and the search spaces are discontinuous,
link |
01:09:04.260
you need to retain memory of what you explored.
link |
01:09:07.300
You need to construct a plan of what to explore next.
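The contrast can be sketched in a few lines. This is my own toy construction (the landscapes and values are invented): a hill climber needs no memory, while search over a discontinuous space must retain the set of explored states and a frontier of what to try next.

```python
# Toy contrast (my construction, values invented): gradient following
# keeps no memory; search in a discontinuous space retains what it explored.

def follow_gradient(f, x, step=1):
    """Hill-climb: move to a better neighbor until none exists. No memory."""
    while True:
        better = [n for n in (x - step, x + step) if f(n) > f(x)]
        if not better:
            return x
        x = better[0]

def search_with_memory(f, start, neighbors):
    """Best-first search: keep an explored set and a frontier (a plan)."""
    explored, frontier, best = set(), [start], start
    while frontier:
        frontier.sort(key=f, reverse=True)  # expand most promising first
        x = frontier.pop(0)
        explored.add(x)
        if f(x) > f(best):
            best = x
        frontier += [n for n in neighbors(x) if n not in explored and n not in frontier]
    return best

# Smooth landscape: the gradient alone finds the peak at x = 3.
print(follow_gradient(lambda x: -(x - 3) ** 2, 0))

# Discontinuous landscape: the best node (6) hides behind a dip (5),
# so a memoryless greedy climb would stall at 2; the frontier finds it.
graph = {0: [1, 5], 1: [2], 2: [], 5: [6], 6: []}
values = {0: 0, 1: 1, 2: 2, 5: -1, 6: 10}
print(search_with_memory(values.get, 0, lambda x: graph[x]))
```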
link |
01:09:10.540
And this thing means that you have next
link |
01:09:13.300
to this perceptual construction system
link |
01:09:15.340
and the motivational cybernetics,
link |
01:09:17.620
an agent that is paying attention
link |
01:09:20.220
to what it should select at any given moment
link |
01:09:22.700
to maximize reward.
link |
01:09:24.300
And this scanning system, this attention agent,
link |
01:09:27.460
is required for consciousness
link |
01:09:28.900
and consciousness is its control model.
link |
01:09:32.580
So it's the index memories that this thing retains
link |
01:09:36.140
when it manipulates the perceptual representations
link |
01:09:39.180
to maximize the value and minimize the conflicts
link |
01:09:43.020
and to increase coherence.
link |
01:09:44.820
So the purpose of consciousness is to create coherence
link |
01:09:47.740
in your perceptual representations,
link |
01:09:49.500
remove conflicts, predict the future,
link |
01:09:52.140
construct counterfactual representations
link |
01:09:54.100
so you can coordinate your actions and so on.
link |
01:09:57.460
And in order to do this, it needs to form memories.
link |
01:10:00.220
These memories are partial binding states
link |
01:10:02.340
of the working memory contents
link |
01:10:04.100
that are being revisited later on to backtrack,
link |
01:10:07.100
to undo certain states, to look for alternatives.
link |
01:10:10.140
And these index memories that you can recall,
link |
01:10:13.020
that is what you perceive as your stream of consciousness.
link |
01:10:15.940
And being able to recall these memories,
link |
01:10:17.860
this is what makes you conscious.
link |
01:10:19.420
If you could not remember what you paid attention to,
link |
01:10:21.660
you wouldn't be conscious.
link |
01:10:26.180
So consciousness is the index in the memory database.
link |
01:10:29.140
Okay.
link |
01:10:31.380
But let me sneak up to the questions of consciousness
link |
01:10:35.500
a little further.
link |
01:10:37.180
So we usually relate suffering to consciousness.
link |
01:10:42.660
So the capacity to suffer.
link |
01:10:46.260
I think to me, that's a really strong sign of consciousness
link |
01:10:49.700
is a thing that can suffer.
link |
01:10:52.460
How is that useful?
link |
01:10:55.100
Suffering.
link |
01:10:57.140
And like in your model where you just described,
link |
01:10:59.540
which is indexing of memories and what is the coherence
link |
01:11:03.740
with the perception, with this predictive thing
link |
01:11:07.220
that's going on in the perception,
link |
01:11:09.260
how does suffering relate to any of that?
link |
01:11:13.060
The higher level suffering that humans do.
link |
01:11:16.580
Basically pain is a reinforcement signal.
link |
01:11:20.020
Pain is a signal that one part of your brain
link |
01:11:23.380
sends to another part of your brain,
link |
01:11:25.140
or in an abstract sense, part of your mind
link |
01:11:27.940
sends to another part of the mind to regulate its behavior,
link |
01:11:30.860
to tell it the behavior that you're currently exhibiting
link |
01:11:33.540
should be improved.
link |
01:11:34.940
And this is the signal that I tell you to move away
link |
01:11:39.020
from what you're currently doing
link |
01:11:40.180
and push in a different direction.
link |
01:11:42.300
So pain gives a part of you an impulse
link |
01:11:46.060
to do something differently.
link |
01:11:47.940
But sometimes this doesn't work
link |
01:11:49.940
because the training part of your brain
link |
01:11:52.140
is talking to the wrong region,
link |
01:11:54.220
or because it has the wrong model
link |
01:11:55.900
of the relationships in the world.
link |
01:11:57.180
Maybe you're mismodeling yourself
link |
01:11:58.580
or you're mismodeling the relationship of yourself
link |
01:12:00.860
to the world,
link |
01:12:01.700
or you're mismodeling the dynamics of the world.
link |
01:12:03.500
So you're trying to improve something
link |
01:12:04.940
that cannot be improved by generating more pain.
link |
01:12:07.900
But the system doesn't have any alternative.
link |
01:12:10.380
So it doesn't get better.
link |
01:12:12.340
What do you do if something doesn't get better
link |
01:12:14.220
and you want it to get better?
link |
01:12:15.540
You increase the strengths of the signal.
link |
01:12:17.940
And then the signal becomes chronic
link |
01:12:19.580
when it becomes permanent without a change in sight.
link |
01:12:22.220
This is what we call suffering.
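This escalation can be caricatured in a few lines. Purely illustrative: the numbers and the doubling rule are invented, not a model of real pain pathways. When the regulated error responds, the signal stays modest; when it cannot respond, the signal ratchets up and stays up.

```python
# Caricature of the escalation (invented numbers and update rule,
# not a model of real pain pathways).

def run_regulator(improvable, steps=5):
    """Return the pain-signal strength over time."""
    signal, error, history = 1.0, 10.0, []
    for _ in range(steps):
        if improvable:
            error = max(0.0, error - signal)  # the signal works: error shrinks
        else:
            signal *= 2                       # no change -> crank the signal up
        history.append(signal)
    return history

print(run_regulator(True))   # signal stays low once regulation works
print(run_regulator(False))  # signal escalates and stays: chronic 'suffering'
```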
link |
01:12:24.300
And the purpose of consciousness
link |
01:12:26.420
is to deal with contradictions,
link |
01:12:28.180
with things that cannot be resolved.
link |
01:12:30.300
The purpose of consciousness,
link |
01:12:31.740
I think, is similar to that of a conductor in an orchestra.
link |
01:12:35.060
When everything works well,
link |
01:12:36.420
the orchestra doesn't need much of a conductor
link |
01:12:38.580
as long as it's coherent.
link |
01:12:40.260
But when there is a lack of coherence
link |
01:12:42.020
or something is consistently producing
link |
01:12:44.340
disharmony and mismatches,
link |
01:12:46.220
then the conductor becomes alert and interacts with it.
link |
01:12:48.980
So suffering attracts the activity of our consciousness.
link |
01:12:52.660
And the purpose of that is ideally
link |
01:12:54.740
that we bring new layers online,
link |
01:12:56.660
new layers of modeling that are able to create
link |
01:13:00.460
a model of the dysregulation so we can deal with it.
link |
01:13:04.500
And this means that we typically get
link |
01:13:06.860
higher level consciousness, so to speak, right?
link |
01:13:08.820
We get some consciousness above our pay grade maybe
link |
01:13:11.420
if we have some suffering early in our life.
link |
01:13:13.260
Most of the interesting people
link |
01:13:14.820
had trauma early on in their childhood.
link |
01:13:17.060
And trauma means that you are suffering an injury
link |
01:13:20.940
for which the system is not prepared,
link |
01:13:23.060
which it cannot deal with,
link |
01:13:24.380
which it cannot insulate itself from.
link |
01:13:26.260
So something breaks.
link |
01:13:27.940
And this means that the behavior of the system
link |
01:13:29.860
is permanently disturbed in a way
link |
01:13:34.020
that some mismatch exists now in the regulation
link |
01:13:37.500
that just by following your impulses,
link |
01:13:39.100
by following the pain in the direction where it hurts,
link |
01:13:41.860
the situation doesn't improve but gets worse.
link |
01:13:44.380
And so what needs to happen is that you grow up.
link |
01:13:47.940
And the part that has grown up
link |
01:13:49.540
is able to deal with the part
link |
01:13:51.180
that is stuck in this earlier phase.
link |
01:13:53.340
Yeah, so it leads to growth,
link |
01:13:54.580
so you're adding extra layers to your cognition.
link |
01:13:58.060
And let me ask you then,
link |
01:14:00.380
because I gotta stick on suffering,
link |
01:14:02.380
the ethics of the whole thing.
link |
01:14:05.420
So not our consciousness, but the consciousness of others.
link |
01:14:08.900
You've tweeted, one of my biggest fears
link |
01:14:13.380
is that insects could be conscious.
link |
01:14:16.260
The amount of suffering on earth would be unthinkable.
link |
01:14:20.300
So when we think of other conscious beings,
link |
01:14:24.380
is suffering a property of consciousness
link |
01:14:30.300
that we're most concerned about?
link |
01:14:32.660
So I'm still thinking about robots,
link |
01:14:40.140
how to make sense of other nonhuman things
link |
01:14:44.540
that appear to have the depth of experience
link |
01:14:48.380
that humans have.
link |
01:14:50.700
And to me, that means consciousness
link |
01:14:54.020
and the darkest side of that, which is suffering,
link |
01:14:57.460
the capacity to suffer.
link |
01:15:00.380
And so I started thinking,
link |
01:15:02.420
how much responsibility do we have
link |
01:15:04.100
for those other conscious beings?
link |
01:15:06.620
That's where the definition of consciousness
link |
01:15:10.980
becomes most urgent.
link |
01:15:13.100
Like having to come up with a definition of consciousness
link |
01:15:15.140
becomes most urgent,
link |
01:15:16.900
is: who should we and should we not be torturing?
link |
01:15:24.740
There's no general answer to this.
link |
01:15:26.300
Was Genghis Khan doing anything wrong?
link |
01:15:29.100
It depends right on how you look at it.
link |
01:15:31.900
Well, he drew a line somewhere
link |
01:15:36.300
where this is us and that's them.
link |
01:15:38.820
It's the circle of empathy.
link |
01:15:40.820
It's like these,
link |
01:15:42.860
you don't have to use the word consciousness,
link |
01:15:44.820
but these are the things that matter to me
link |
01:15:48.980
if they suffer or not.
link |
01:15:50.100
And these are the things that don't matter to him.
link |
01:15:52.340
Yeah, but when one of his commanders failed him,
link |
01:15:54.580
he broke his spine and let him die in a horrible way.
link |
01:15:59.140
And so in some sense,
link |
01:16:01.420
I think he was indifferent to suffering,
link |
01:16:03.860
or rather, he was not indifferent in the sense
link |
01:16:05.820
that he did see it as useful when he inflicted suffering,
link |
01:16:10.380
but he did not see it as something that had to be avoided.
link |
01:16:14.100
That was not the goal.
link |
01:16:15.460
The question was, how can I use suffering
link |
01:16:18.860
and the infliction of suffering to reach my goals
link |
01:16:21.260
from his perspective?
link |
01:16:23.900
I see.
link |
01:16:24.740
So like different societies throughout history
link |
01:16:26.700
put different value on the...
link |
01:16:29.940
Different individuals, different psyches.
link |
01:16:31.580
But also even the objective of avoiding suffering,
link |
01:16:35.100
like some societies probably,
link |
01:16:37.540
I mean, this is where like religious belief really helps
link |
01:16:40.740
that afterlife, that it doesn't matter
link |
01:16:44.700
that you suffer or die,
link |
01:16:45.980
what matters is you suffer honorably, right?
link |
01:16:49.300
So that you enter the afterlife as a hero.
link |
01:16:52.260
That seems superstitious to me.
link |
01:16:53.860
Basically, beliefs that assert things
link |
01:16:57.580
for which no evidence exists
link |
01:17:00.020
are incompatible with sound epistemology.
link |
01:17:02.180
And I don't think that religion has to be superstitious,
link |
01:17:04.620
otherwise it would have to be condemned in all cases.
link |
01:17:06.860
You're somebody who's saying we live in a dream world,
link |
01:17:09.140
we have zero evidence for anything.
link |
01:17:11.340
So...
link |
01:17:12.180
That's not the case.
link |
01:17:13.500
There are limits to what languages can be constructed.
link |
01:17:16.060
Mathematics brings solid evidence for its own structure.
link |
01:17:19.500
And once we have some idea of what languages exist
link |
01:17:23.260
and how a system can learn
link |
01:17:24.460
and what learning itself is in the first place,
link |
01:17:26.580
we can begin to realize that our intuitions
link |
01:17:31.900
that we are able to learn about the regularities
link |
01:17:34.620
of the world and minimize surprise
link |
01:17:36.300
and understand the nature of our own agency
link |
01:17:38.900
to some degree of abstraction.
link |
01:17:40.660
That's not an illusion.
link |
01:17:42.140
So it's a useful approximation.
link |
01:17:44.140
Just because we live in a dream world
link |
01:17:46.860
doesn't mean mathematics can't give us a consistent glimpse
link |
01:17:51.780
of physical, of objective reality.
link |
01:17:54.940
We can basically distinguish useful encodings
link |
01:17:57.340
from useless encodings.
link |
01:17:58.980
And when we apply our truth seeking to the world,
link |
01:18:03.460
we know we usually cannot find out
link |
01:18:05.460
whether a certain thing is true.
link |
01:18:07.460
What we typically do is we take the state vector
link |
01:18:10.060
of the universe and separate it into objects
link |
01:18:12.100
that interact with each other through interfaces.
link |
01:18:14.420
And this distinction that we are making
link |
01:18:16.180
is not completely arbitrary.
link |
01:18:17.420
It's done to optimize the compression
link |
01:18:21.140
that we can apply to our models of the universe.
link |
01:18:23.380
So we can predict what's happening
link |
01:18:25.660
with our limited resources.
link |
01:18:27.300
In this sense, it's not arbitrary.
link |
01:18:29.220
But the separation of the world into objects
link |
01:18:32.020
that are somehow discrete and interacting with each other
link |
01:18:34.940
is not the true reality, right?
link |
01:18:36.900
The boundaries between the objects
link |
01:18:38.420
are projected into the world, not arbitrarily projected.
link |
01:18:41.660
But still, it's only an approximation
link |
01:18:44.020
of what's actually the case.
link |
01:18:46.460
And we sometimes notice that we run into contradictions
link |
01:18:48.980
when we try to understand high level things
link |
01:18:50.980
like economic aspects of the world
link |
01:18:53.100
and so on, or political aspects, or psychological aspects
link |
01:18:56.980
where we make simplifications.
link |
01:18:58.300
And the objects that we are using to separate the world
link |
01:19:00.780
are just one of many possible projections
link |
01:19:03.100
of what's going on.
link |
01:19:04.820
So it's not, in this postmodernist sense,
link |
01:19:07.180
completely arbitrary, where you're free to pick
link |
01:19:09.260
what you want or dismiss what you don't like
link |
01:19:11.100
because it's all stories.
link |
01:19:12.260
No, that's not true.
link |
01:19:13.660
You have to show for every model
link |
01:19:15.380
how well it predicts the world.
link |
01:19:17.340
So the confidence that you should have
link |
01:19:19.220
in the entities of your models
link |
01:19:21.020
should correspond to the evidence that you have.
link |
01:19:24.460
Can I ask you on a small tangent
link |
01:19:27.660
to talk about your favorite set of ideas and people,
link |
01:19:32.660
which is postmodernism.
link |
01:19:35.180
What?
link |
01:19:37.900
What is postmodernism?
link |
01:19:39.980
How would you define it?
link |
01:19:40.980
And why to you is it not a useful framework of thought?
link |
01:19:48.860
Postmodernism is something that I'm really not an expert on.
link |
01:19:52.340
And postmodernism is a set of philosophical ideas
link |
01:19:57.340
that is difficult to lump together,
link |
01:19:58.980
that is characterized by some useful thinkers,
link |
01:20:01.980
some of them poststructuralists and so on.
link |
01:20:04.180
And I'm mostly not interested in it
link |
01:20:05.740
because I think that it's not leading me anywhere
link |
01:20:08.660
that I find particularly useful.
link |
01:20:11.340
It's mostly, I think, born out of the insight
link |
01:20:13.820
that the ontologies that we impose on the world
link |
01:20:17.380
are not literally true.
link |
01:20:18.780
And that we can often get to a different interpretation
link |
01:20:20.980
of the world by using a different ontology,
link |
01:20:22.780
that is, a different separation of the world
link |
01:20:25.060
into interacting objects.
link |
01:20:26.540
But the idea that this makes the world a set of stories
link |
01:20:30.860
that are arbitrary, I think, is wrong.
link |
01:20:33.180
And the people that are engaging in this type of philosophy
link |
01:20:37.540
are working in an area that I largely don't find productive.
link |
01:20:40.900
There's nothing useful coming out of this.
link |
01:20:43.060
So this idea that truth is relative
link |
01:20:45.060
is not something that has, in some sense,
link |
01:20:46.980
informed physics or theory of relativity.
link |
01:20:49.620
And there is no feedback between those.
link |
01:20:51.540
There is no meaningful influence
link |
01:20:54.020
of this type of philosophy on the sciences
link |
01:20:56.860
or on engineering or in politics.
link |
01:20:59.340
But there is a very strong influence on ideology
link |
01:21:04.820
because it basically has become an ideology
link |
01:21:07.620
that is justifying itself by the notion
link |
01:21:11.100
that truth is a relative concept.
link |
01:21:13.420
And it's not being used in such a way
link |
01:21:15.620
that the philosophers or sociologists
link |
01:21:18.540
that take up these ideas say,
link |
01:21:20.340
oh, I should doubt my own ideas because maybe my separation of the world
link |
01:21:24.140
into objects is not completely valid.
link |
01:21:25.740
And I should maybe use a different one
link |
01:21:27.580
and be open to a pluralism of ideas.
link |
01:21:30.460
But it mostly exists to dismiss the ideas of other people.
link |
01:21:34.300
It becomes, yeah, it becomes a political weapon of sorts
link |
01:21:37.540
to achieve power.
link |
01:21:39.220
Basically, there's nothing wrong, I think,
link |
01:21:42.580
with developing a philosophy around this.
link |
01:21:46.060
But to develop a philosophy around this,
link |
01:21:49.180
to develop norms around the idea
link |
01:21:51.820
that truth is something that is completely negotiable,
link |
01:21:54.940
is incompatible with the scientific project.
link |
01:21:57.860
And I think if the academia has no defense
link |
01:22:02.140
against the ideological parts of the postmodernist movement,
link |
01:22:06.740
it's doomed.
link |
01:22:07.860
Right, you have to acknowledge the ideological part
link |
01:22:11.740
of any movement, actually, including postmodernism.
link |
01:22:15.260
Well, the question is what an ideology is.
link |
01:22:17.500
And to me, an ideology is basically a viral memeplex
link |
01:22:21.060
that is changing your mind in such a way that reality gets warped.
link |
01:22:25.980
It gets warped in such a way that you're being cut off
link |
01:22:28.180
from the rest of human thought space.
link |
01:22:29.540
And you cannot consider things outside of the range of ideas
link |
01:22:33.420
of your own ideology as possibly true.
link |
01:22:35.780
Right, so, I mean, there's certain properties to an ideology
link |
01:22:38.380
that make it harmful.
link |
01:22:39.340
One of them is that dogmatism of just certainty,
link |
01:22:44.060
dogged certainty in that you're right,
link |
01:22:46.660
you have the truth, and nobody else does.
link |
01:22:48.780
Yeah, but what is creating the certainty?
link |
01:22:50.220
It's very interesting to look at the type of model
link |
01:22:53.060
that is being produced.
link |
01:22:54.100
Is it basically just a strong prior, and you tell people,
link |
01:22:57.500
oh, this idea that you consider to be very true,
link |
01:22:59.980
the evidence for this is actually just much weaker
link |
01:23:02.220
than you thought, and look, here are some studies.
link |
01:23:04.380
No, this is not how it works.
link |
01:23:06.100
It's usually normative, which means some thoughts
link |
01:23:09.260
are unthinkable because they would change your identity
link |
01:23:13.780
into something that is no longer acceptable.
link |
01:23:17.100
And this cuts you off from considering an alternative.
link |
01:23:20.100
And many de facto religions use this trick
link |
01:23:23.220
to lock people into a certain mode of thought,
link |
01:23:25.700
and this removes agency over your own thoughts.
link |
01:23:27.700
And it's very ugly to me.
link |
01:23:28.660
It's basically not just a process of domestication,
link |
01:23:32.580
but it's actually an intellectual castration
link |
01:23:35.180
that happens.
link |
01:23:36.220
It's an inability to think creatively
link |
01:23:39.140
and to bring forth new thoughts.
link |
01:23:40.820
I can ask you about substances, chemical substances
link |
01:23:48.300
that affect the video game, the dream world.
link |
01:23:53.140
So psychedelics that increasingly have been getting
link |
01:23:57.140
a lot of research done on them.
link |
01:23:58.820
So in general, psychedelics, psilocybin, MDMA,
link |
01:24:02.620
but also a really interesting one, the big one, which is DMT.
link |
01:24:06.300
What and where are the places that these substances
link |
01:24:10.820
take the mind that is operating in the dream world?
link |
01:24:16.620
Do you have an interesting sense how this throws a wrinkle
link |
01:24:20.340
into the prediction model?
link |
01:24:22.260
Is it just some weird little quirk
link |
01:24:24.500
or is there some fundamental expansion
link |
01:24:27.820
of the mind going on?
link |
01:24:31.700
I suspect that a way to look at psychedelics
link |
01:24:34.140
is that they induce particular types
link |
01:24:36.420
of lucid dreaming states.
link |
01:24:38.540
So it's a state in which certain connections
link |
01:24:41.620
are being severed in your mind.
link |
01:24:43.820
They're no longer active.
link |
01:24:45.340
Your mind basically gets free to move in a certain direction
link |
01:24:48.860
because some inhibition, some particular inhibition
link |
01:24:51.060
doesn't work anymore.
link |
01:24:52.740
And as a result, you might stop having a self
link |
01:24:55.340
or you might stop perceiving the world as three dimensional.
link |
01:25:00.340
And you can explore that state.
link |
01:25:04.500
And I suppose that for every state
link |
01:25:06.780
that can be induced with psychedelics,
link |
01:25:08.300
there are people that are naturally in that state.
link |
01:25:10.980
So sometimes psychedelics shift you
link |
01:25:13.340
through a range of possible mental states.
link |
01:25:15.860
And they can also shift you out of the range
link |
01:25:17.820
of permissible mental states
link |
01:25:19.100
that is, the range where you can make predictive models of reality.
link |
01:25:22.620
And what I observe in people that use psychedelics a lot
link |
01:25:26.980
is that they tend to be overfitting.
link |
01:25:29.580
Overfitting means that you are using more bits
link |
01:25:34.540
for modeling the dynamics of a function than you should.
link |
01:25:38.060
And so you can fit your curve
link |
01:25:40.220
to extremely detailed things in the past,
link |
01:25:42.780
but this model is no longer predictive for the future.
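Overfitting in exactly this sense is the textbook polynomial example (a standard machine learning illustration, nothing specific to psychedelics; the data here are invented): an exact high-degree fit matches every past point but extrapolates wildly, while a simple model keeps predicting.

```python
# Textbook overfitting demo (standard illustration; data invented).
# A degree-(n-1) polynomial uses 'more bits than it should': it fits
# every noisy training point exactly but fails on the future.

def lagrange_predict(xs, ys, x):
    """Evaluate the unique interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Underlying process is roughly y = x; the observations carry small noise.
xs = [0, 1, 2, 3, 4, 5]
ys = [0.0, 1.1, 1.9, 3.2, 3.8, 5.1]

overfit = lagrange_predict(xs, ys, 8)  # exact fit, extrapolated to x = 8
simple = 8.0                           # the boring y = x model
print(overfit, simple)  # the exact fit lands far from the true ~8
```

The exact interpolant reproduces every noisy training point perfectly, yet its prediction at x = 8 is off by an enormous margin, while the plain y = x model stays close.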
link |
01:25:45.860
What is it about psychedelics that forces that?
link |
01:25:49.660
I thought it would be the opposite.
link |
01:25:51.060
I thought that it's a good mechanism
link |
01:25:54.860
for generalization, for regularization.
link |
01:25:59.300
So it feels like psychedelics expansion of the mind,
link |
01:26:03.220
like taking you outside of,
link |
01:26:04.820
like forcing your model to be non-predictive
link |
01:26:08.820
is a good thing.
link |
01:26:11.180
Meaning like, it's almost like, okay,
link |
01:26:14.340
what I would say psychedelics are akin to
link |
01:26:16.700
is traveling to a totally different environment.
link |
01:26:19.820
Like going, if you've never been to like India
link |
01:26:21.980
or something like that from the United States,
link |
01:26:24.220
very different set of people, different culture,
link |
01:26:26.180
different food, different roads and values
link |
01:26:30.340
and all those kinds of things.
link |
01:26:31.420
Yeah, so psychedelics can, for instance,
link |
01:26:33.540
teleport people into a universe that is hyperbolic,
link |
01:26:37.820
which means that if you imagine a room that you're in,
link |
01:26:41.300
you can turn around 360 degrees
link |
01:26:43.580
and you didn't go full circle.
link |
01:26:44.660
You need to go 720 degrees to go full circle.
link |
01:26:47.180
Exactly.
link |
01:26:48.020
So the things that people learn in that state
link |
01:26:50.820
cannot be easily transferred
link |
01:26:52.180
in this universe that we are in.
link |
01:26:54.260
It could be that if they're able to abstract
link |
01:26:56.420
and understand what happened to them,
link |
01:26:58.260
that they understand that some part
link |
01:27:00.300
of their spatial cognition has been desynchronized
link |
01:27:03.500
and has found a different synchronization.
link |
01:27:05.660
And this different synchronization
link |
01:27:06.900
happens to be a hyperbolic one, right?
link |
01:27:08.620
So you learn something interesting about your brain.
link |
01:27:10.980
It's difficult to understand what exactly happened,
link |
01:27:13.140
but we get a pretty good idea
link |
01:27:14.580
once we understand how the brain is representing geometry.
link |
01:27:17.700
Yeah, but doesn't it give you a fresh perspective
link |
01:27:20.180
on the physical reality?
link |
01:27:26.060
Who's making that sound?
link |
01:27:27.780
Is it inside my head or is it external?
link |
01:27:30.980
Well, there is no sound outside of your mind,
link |
01:27:33.180
but it's making sense of phenomenal physics.
link |
01:27:39.660
Yeah, in the physical reality, there's sound waves
link |
01:27:44.780
traveling through air.
link |
01:27:45.860
Okay.
link |
01:27:47.020
That's our model of what happened.
link |
01:27:48.580
That's our model of what happened, right.
link |
01:27:53.380
Don't psychedelics give you a fresh perspective
link |
01:27:57.220
on this physical reality?
link |
01:27:59.100
Like, not this physical reality, but this more...
link |
01:28:05.980
What do you call the dream world that's mapped directly to...
link |
01:28:09.780
The purpose of dreaming at night, I think,
link |
01:28:11.580
is data augmentation.
link |
01:28:13.660
Exactly.
link |
01:28:14.900
So that's very different.
link |
01:28:16.300
That's very similar to psychedelics.
link |
01:28:18.780
It changes parameters of the things that you have learned.
link |
01:28:21.660
And, for instance, when you are young,
link |
01:28:24.140
you have seen things from certain perspectives,
link |
01:28:26.060
but not from others.
link |
01:28:27.300
So your brain is generating new perspectives of objects
link |
01:28:30.180
that you already know,
link |
01:28:31.540
which means you can learn to recognize them later
link |
01:28:34.100
from different perspectives.
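That is data augmentation in the standard machine learning sense: synthesize new views of known data so that recognition survives a change of perspective. A minimal sketch, using my own toy example of 2D rotations:

```python
import math

# Minimal data-augmentation sketch (toy example): generate rotated
# views of one known 'object' (a 2D point set), so a recognizer
# trained on them can handle perspectives it never directly saw.

def rotate(points, angle):
    """Rotate a list of 2D points by angle (radians) around the origin."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

original_view = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
augmented = [rotate(original_view, math.radians(a)) for a in (0, 90, 180, 270)]
print(len(augmented))  # four perspectives of the same object
```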
link |
01:28:35.180
And I suspect that's the reason that many of us
link |
01:28:37.660
remember to have flying dreams as children,
link |
01:28:39.700
because it's just different perspectives of the world
link |
01:28:41.700
that you already know,
link |
01:28:43.020
and then it starts to generate these different
link |
01:28:46.540
perspective changes,
link |
01:28:47.860
and then it fluidly turns this into a flying dream
link |
01:28:50.540
to make sense of what's happening, right?
link |
01:28:52.260
So you fill in the gaps,
link |
01:28:53.620
and suddenly you see yourself flying.
link |
01:28:55.860
And similar things can happen with semantic relationships.
link |
01:28:58.820
So it's not just spatial relationships,
link |
01:29:00.580
but it can also be the relationships between ideas
link |
01:29:03.420
that are being changed.
link |
01:29:05.180
And it seems that the mechanisms that make that happen
link |
01:29:08.140
during dreaming are interacting
link |
01:29:12.060
with these same receptors
link |
01:29:14.300
that are being stimulated by psychedelics.
link |
01:29:17.220
So I suspect that there is a thing
link |
01:29:19.780
that I haven't read really about.
link |
01:29:22.020
The way in which dreams are induced in the brain
link |
01:29:24.380
is not just that the activity of the brain gets tuned down
link |
01:29:28.500
because your eyes are closed
link |
01:29:30.620
and you no longer get enough data from your eyes,
link |
01:29:33.980
but there is a particular type of neurotransmitter
link |
01:29:37.180
that is saturating your brain during these phases,
link |
01:29:40.140
during the REM phases, and you produce
link |
01:29:42.980
controlled hallucinations.
link |
01:29:44.740
And psychedelics are linking into these mechanisms,
link |
01:29:48.700
I suspect.
link |
01:29:49.860
So isn't that another trickier form of data augmentation?
link |
01:29:54.060
Yes, but it's also data augmentation
link |
01:29:57.740
that can happen outside of the specification
link |
01:29:59.860
that your brain is tuned to.
link |
01:30:00.940
So basically people are overclocking their brains
link |
01:30:03.420
and that produces states
link |
01:30:05.780
that are subjectively extremely interesting.
link |
01:30:09.260
Yeah, I just.
link |
01:30:10.540
But from the outside, very suspicious.
link |
01:30:12.860
So I think I'm over applying the metaphor
link |
01:30:15.660
of a neural network in my own mind,
link |
01:30:17.860
which I just think that doesn't lead to overfitting, right?
link |
01:30:22.460
But you were just sort of anecdotally saying
link |
01:30:26.380
my experiences with people that have done psychedelics
link |
01:30:28.660
are that kind of quality.
link |
01:30:30.460
I think it typically happens.
link |
01:30:31.580
So if you look at people like Timothy Leary,
link |
01:30:34.420
and he has written beautiful manifestos
link |
01:30:36.700
about the effect of LSD on people.
link |
01:30:40.220
He genuinely believed, he writes in these manifestos,
link |
01:30:42.820
that in the future, science and art
link |
01:30:44.860
will only be done on psychedelics
link |
01:30:46.300
because it's so much more efficient and so much better.
link |
01:30:49.020
And he gave LSD to children in this community
link |
01:30:52.660
of a few thousand people that he had near San Francisco.
link |
01:30:55.780
And basically he was losing touch with reality.
link |
01:31:00.540
He did not understand the effects
link |
01:31:02.220
that the things that he was doing
link |
01:31:04.260
would have on the reception of psychedelics
link |
01:31:06.620
by society because he was unable to think critically
link |
01:31:09.900
about what happened.
link |
01:31:10.740
What happened was that he got in a euphoric state,
link |
01:31:13.500
that euphoric state happened because he was overfitting.
link |
01:31:16.620
He was taking this sense of euphoria
link |
01:31:19.460
and translating it into a model
link |
01:31:21.500
of actual success in the world, right?
link |
01:31:23.660
He was feeling better.
link |
01:31:25.500
Limitations had disappeared,
link |
01:31:26.940
that he experienced to be existing,
link |
01:31:29.580
but he didn't get superpowers.
link |
01:31:30.740
I understand what you mean by overfitting now.
link |
01:31:33.860
There's a lot of interpretation to the term
link |
01:31:36.020
overfitting in this case, but I got you.
link |
01:31:38.660
So he was getting positive rewards
link |
01:31:42.740
from a lot of actions that he shouldn't have been doing.
link |
01:31:44.220
Yeah, but not just this.
link |
01:31:45.060
So if you take, for instance, John Lilly,
link |
01:31:46.620
who was studying dolphin languages and aliens and so on,
link |
01:31:52.140
a lot of people that use psychedelics became very loopy.
link |
01:31:55.900
And the typical thing that you notice
link |
01:31:58.700
when people are on psychedelics is that they are in a state
link |
01:32:00.940
where they feel that everything can be explained now.
link |
01:32:03.660
Everything is clear, everything is obvious.
link |
01:32:06.620
And sometimes they have indeed discovered
link |
01:32:09.660
a useful connection, but not always.
link |
01:32:12.060
Very often these connections are overinterpretations.
link |
01:32:15.380
I wonder, you know, there's a question
link |
01:32:17.740
of correlation versus causation.
link |
01:32:21.060
And also I wonder if it's the psychedelics
link |
01:32:23.340
or if it's more the social, like being the outsider
link |
01:32:28.580
and having a strong community of outsiders
link |
01:32:31.140
and having a leadership position in an outsider cult
link |
01:32:34.300
like community that could have a much stronger effect
link |
01:32:37.420
of overfitting than do psychedelics themselves,
link |
01:32:39.940
the actual substances, because it's a counterculture thing.
link |
01:32:43.340
So it could be that as opposed to the actual substance.
link |
01:32:46.540
If you're a boring person who wears a suit and tie
link |
01:32:49.700
and works at a bank and takes psychedelics,
link |
01:32:53.220
that could be a very different effect
link |
01:32:55.140
of psychedelics on your mind.
link |
01:32:57.820
I'm just sort of raising the point
link |
01:32:59.660
that the people you referenced are already weirdos.
link |
01:33:02.860
I'm not sure exactly.
link |
01:33:04.180
No, not necessarily.
link |
01:33:05.220
A lot of the people that tell me
link |
01:33:07.540
that they use psychedelics in a useful way
link |
01:33:10.980
started out as squares and were liberating themselves
link |
01:33:14.580
because they were stuck.
link |
01:33:16.060
They were basically stuck in a local optimum
link |
01:33:17.980
of their own self model, of their relationship to the world.
link |
01:33:20.980
And suddenly they had data augmentation.
link |
01:33:23.180
They basically saw and experienced a space of possibilities.
link |
01:33:26.740
They experienced what it would be like to be another person.
link |
01:33:29.540
And they took important lessons
link |
01:33:32.260
from that experience back home.
link |
01:33:36.660
Yeah, I mean, I love the metaphor of data augmentation
link |
01:33:40.660
because that's been the primary driver
link |
01:33:44.900
of self supervised learning in the computer vision domain
link |
01:33:48.900
is data augmentation.
link |
01:33:50.100
So it's funny to think of data augmentation,
link |
01:33:53.100
like chemically induced data augmentation in the human mind.
link |
01:33:58.100
There's also a very interesting effect that I noticed.
link |
01:34:02.100
I know several people who swear to me
link |
01:34:06.100
that LSD has cured their migraines.
link |
01:34:09.780
So severe cluster headaches or migraines
link |
01:34:13.060
that didn't respond to standard medication
link |
01:34:15.900
that disappeared after a single dose.
link |
01:34:18.220
And I don't recommend anybody doing this,
link |
01:34:20.820
especially not in the US where it's illegal.
link |
01:34:23.260
And there are no studies on this for that reason.
link |
01:34:26.340
But it seems that anecdotally
link |
01:34:28.900
that it basically can reset the serotonergic system.
link |
01:34:33.420
So it's basically pushing them
link |
01:34:36.380
outside of their normal boundaries.
link |
01:34:38.220
And as a result, it needs to find a new equilibrium.
link |
01:34:41.020
And in some people that equilibrium is better,
link |
01:34:43.300
but it also follows that in other people it might be worse.
link |
01:34:46.260
So if you have a brain that is already teetering
link |
01:34:49.500
on the boundary to psychosis,
link |
01:34:51.980
it can be permanently pushed over that boundary.
link |
01:34:54.740
Well, that's why you have to do good science,
link |
01:34:56.540
which they're starting to do on all these different
link |
01:34:58.340
substances of how well it actually works
link |
01:34:59.940
for the different conditions like MDMA seems to help
link |
01:35:02.660
with PTSD, same with psilocybin.
link |
01:35:05.580
You need to do good science,
link |
01:35:08.340
meaning large studies with a large N.
link |
01:35:10.860
Yeah, so based on the existing studies of MDMA,
link |
01:35:14.060
it seems that if you look at Rick Doblin's work
link |
01:35:17.460
and what he has published about this and talks about,
link |
01:35:20.780
MDMA seems to be a psychologically relatively safe drug.
link |
01:35:24.180
But it's physiologically not very safe.
link |
01:35:26.740
That is, there is neurotoxicity
link |
01:35:30.060
if you would use a too large dose.
link |
01:35:31.820
And if you combine this with alcohol,
link |
01:35:34.420
which a lot of kids do in party settings during raves
link |
01:35:37.540
and so on, it's very hepatotoxic.
link |
01:35:40.260
So basically you can kill your liver.
link |
01:35:42.220
And this means that it's probably something that is best
link |
01:35:45.340
and most productively used in a clinical setting
link |
01:35:48.340
by people who really know what they're doing.
link |
01:35:50.020
And I suspect that's also true for the other psychedelics
link |
01:35:53.580
that is while the other psychedelics are probably not
link |
01:35:56.620
as toxic as say alcohol,
link |
01:35:59.460
the effects on the psyche can be much more profound
link |
01:36:02.220
and lasting.
link |
01:36:03.460
Yeah, well, as far as I know psilocybin,
link |
01:36:05.940
so mushrooms, magic mushrooms,
link |
01:36:09.020
as far as I know in terms of the studies they're running,
link |
01:36:11.780
I think have no, like they're allowed to do
link |
01:36:15.060
what they're calling heroic doses.
link |
01:36:17.060
So that one does not have a toxicity.
link |
01:36:18.980
So they could do like huge doses in a clinical setting
link |
01:36:21.700
when they're doing study on psilocybin,
link |
01:36:23.620
which is kind of fun.
link |
01:36:25.140
Yeah, it seems that most of the psychedelics
link |
01:36:27.100
work in extremely small doses,
link |
01:36:29.300
which means that the effect on the rest of the body
link |
01:36:32.180
is relatively low.
link |
01:36:33.660
And MDMA is probably the exception.
link |
01:36:36.180
Maybe ketamine can be dangerous in larger doses
link |
01:36:38.300
because it can depress breathing and so on.
link |
01:36:41.260
But the LSD and psilocybin work in very, very small doses,
link |
01:36:45.940
at least the active part of them,
link |
01:36:47.820
of psilocybin; LSD is entirely the active part.
link |
01:36:50.580
But the effect that it can have
link |
01:36:54.100
on your mental wiring can be very dangerous, I think.
link |
01:36:57.940
Let's talk about AI a little bit.
link |
01:37:00.540
What are your thoughts about GPT3 and language models
link |
01:37:05.300
trained with self supervised learning?
link |
01:37:09.980
It came out quite a bit ago,
link |
01:37:11.420
but I wanted to get your thoughts on it.
link |
01:37:13.180
Yeah.
link |
01:37:14.580
In the nineties, I was in New Zealand
link |
01:37:16.900
and I had an amazing professor, Ian Witten,
link |
01:37:21.140
who realized I was bored in class and put me in his lab.
link |
01:37:25.220
And he gave me the task to discover grammatical structure
link |
01:37:28.780
in an unknown language.
link |
01:37:31.420
And the unknown language that I picked was English
link |
01:37:33.740
because it was the easiest one
link |
01:37:35.100
to find a corpus for, or to construct one.
link |
01:37:37.980
And he gave me the largest computer at the whole university.
link |
01:37:41.980
It had two gigabytes of RAM, which was amazing.
link |
01:37:44.140
And I wrote everything in C
link |
01:37:45.380
with some in memory compression to do statistics
link |
01:37:47.740
over the language.
link |
01:37:49.340
And I first would create a dictionary of all the words,
link |
01:37:53.900
which basically tokenizes everything and compresses things
link |
01:37:57.300
so that I don't need to store the whole word,
link |
01:37:58.820
but just a code for every word.
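The dictionary step described here can be sketched in a few lines of Python (a hypothetical reconstruction for illustration, not the original C code):

```python
# Build a dictionary mapping each word to a small integer code,
# so the corpus can be stored as codes instead of raw strings.
def build_codebook(corpus):
    codebook = {}
    encoded = []
    for word in corpus.split():
        if word not in codebook:
            codebook[word] = len(codebook)  # assign the next free code
        encoded.append(codebook[word])
    return codebook, encoded

codebook, encoded = build_codebook("the cat sat on the mat")
# "the" occurs twice in the text but is stored only once in the codebook
```

Repeated words cost only one dictionary entry plus a code per occurrence, which is the in-memory compression he mentions.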
link |
01:38:02.300
And then I was taking this all apart in sentences
link |
01:38:05.900
and I was trying to find all the relationships
link |
01:38:09.140
between all the words in the sentences
link |
01:38:10.860
and do statistics over them.
link |
01:38:12.940
And that proved to be impossible
link |
01:38:15.180
because the complexity is just too large.
link |
01:38:18.020
So if you want to discover the relationship
link |
01:38:20.460
between an article and a noun,
link |
01:38:21.860
and there are three adjectives in between,
link |
01:38:23.860
you cannot do n-gram statistics
link |
01:38:25.420
and look at all the possibilities that can exist,
link |
01:38:28.060
at least not with the resources that we had back then.
link |
01:38:30.740
So I realized I need to make some statistics
link |
01:38:33.300
over what I need to make statistics over.
link |
01:38:35.220
So I wrote something that was pretty much a hack
link |
01:38:38.620
that did this for at least first order relationships.
link |
01:38:42.380
And I came up with some kind of mutual information graph
link |
01:38:45.100
that was indeed discovering something that looks exactly
link |
01:38:48.380
like the grammatical structure of the sentence,
link |
01:38:50.500
just by trying to encode the sentence
link |
01:38:52.660
in such a way that the words would be written
link |
01:38:54.740
in the optimal order inside of the model.
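A first-order version of such a mutual-information measure over adjacent words can be sketched like this (an illustrative Python sketch, not his original method):

```python
import math
from collections import Counter

def pmi_adjacent(words):
    # Pointwise mutual information for each adjacent word pair:
    # pmi(a, b) = log( p(a, b) / (p(a) * p(b)) ).
    # Pairs that co-occur more often than chance get positive scores.
    unigrams = Counter(words)
    bigrams = Counter(zip(words, words[1:]))
    n_uni, n_bi = len(words), len(words) - 1
    return {
        (a, b): math.log((c / n_bi) / ((unigrams[a] / n_uni) * (unigrams[b] / n_uni)))
        for (a, b), c in bigrams.items()
    }

words = "the cat sat on the mat".split()
scores = pmi_adjacent(words)
```

Edges with high scores in such a graph tend to connect words that are grammatically bound to each other, which is the structure he describes recovering.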
link |
01:38:58.100
And what I also found is that if we would be able
link |
01:39:02.140
to increase the resolution of that
link |
01:39:03.820
and not just use this model
link |
01:39:06.620
to reproduce grammatically correct sentences,
link |
01:39:09.060
we would also be able
link |
01:39:09.980
to produce stylistically correct sentences
link |
01:39:12.020
by just having more bits in these relationships.
link |
01:39:14.580
And if we wanted to have meaning,
link |
01:39:16.300
we would have to go much higher order.
link |
01:39:18.740
And I didn't know how to make higher order models back then
link |
01:39:21.460
without spending way more years in research
link |
01:39:23.860
on how to make the statistics
link |
01:39:25.580
over what we need to make statistics over.
link |
01:39:28.660
And this thing that we cannot look at the relationships
link |
01:39:31.540
between all the bits in your input is being solved
link |
01:39:34.020
in different domains in different ways.
link |
01:39:35.780
So in computer graphics, computer vision,
link |
01:39:39.380
standard methods for many years now
link |
01:39:41.380
is convolutional neural networks.
link |
01:39:43.620
Convolutional neural networks are hierarchies of filters
link |
01:39:46.620
that exploit the fact that neighboring pixels
link |
01:39:48.980
in images are usually semantically related
link |
01:39:51.100
and distant pixels in images
link |
01:39:53.020
are usually not semantically related.
link |
01:39:55.500
So you can just by grouping the pixels
link |
01:39:57.700
that are next to each other,
link |
01:39:59.100
hierarchically together reconstruct the shape of objects.
link |
01:40:02.780
And this is an important prior
link |
01:40:04.620
that we built into these models
link |
01:40:06.140
so they can converge quickly.
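The locality prior he describes is visible in a minimal convolution sketch: every output value depends only on a small neighborhood of the input (a toy NumPy version, not a full CNN):

```python
import numpy as np

def conv2d(image, kernel):
    # Each output pixel is computed only from a small neighborhood,
    # encoding the prior that nearby pixels are semantically related.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

edge_kernel = np.array([[1.0, -1.0]])      # responds to horizontal contrast
image = np.array([[0.0, 0.0, 1.0, 1.0]])   # flat region, then a step
out = conv2d(image, edge_kernel)
# the filter responds only at the boundary between the two regions
```

Because the kernel never looks at distant pixels, the network cannot waste capacity on relationships that the prior says are unlikely to matter.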
link |
01:40:08.460
But this doesn't work in language
link |
01:40:09.820
for the reason that adjacent words are often
link |
01:40:12.940
but not always related and distant words
link |
01:40:14.980
are sometimes related while the words in between are not.
link |
01:40:19.380
So how can you learn the topology of language?
link |
01:40:22.660
And I think for this reason that this difficulty existed,
link |
01:40:26.460
the transformer was invented
link |
01:40:28.700
in natural language processing, not in vision.
link |
01:40:32.780
And what the transformer is doing,
link |
01:40:34.900
it's a hierarchy of layers where every layer learns
link |
01:40:38.420
what to pay attention to in the given context
link |
01:40:40.900
in the previous layer.
link |
01:40:43.220
So what to make the statistics over.
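The core of that learned "what to pay attention to" is scaled dot-product attention, which can be sketched in NumPy (a minimal single-head version with hand-picked toy matrices):

```python
import numpy as np

def attention(Q, K, V):
    # Each position computes a weighted average over all positions;
    # the softmax weights are the learned "what to pay attention to
    # in the given context".
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# two tokens: the first token's query matches the second token's key
Q = np.array([[0.0, 1.0], [1.0, 0.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[1.0], [2.0]])
out, weights = attention(Q, K, V)
```

Unlike a convolution, nothing here restricts attention to neighbors; any position can attend to any other, which is why this works for language where related words can be far apart.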
link |
01:40:46.020
And the context is significantly larger
link |
01:40:49.980
than the adjacent word.
link |
01:40:51.540
Yes, so the context that GPT3 has been using,
link |
01:40:55.980
the transformer itself is from 2017
link |
01:40:58.500
and it wasn't using that large a context.
link |
01:41:02.180
OpenAI has basically scaled up this idea
link |
01:41:05.060
as far as they could at the time.
link |
01:41:06.940
And the context is about 2048 symbols,
link |
01:41:11.300
tokens in the language.
link |
01:41:12.860
These symbols are not characters,
link |
01:41:15.060
but they take the words and project them
link |
01:41:17.020
into a vector space where words
link |
01:41:20.100
that are statistically co occurring a lot
link |
01:41:22.020
are neighbors already.
link |
01:41:23.220
So it's already a simplification
link |
01:41:24.780
of the problem a little bit.
link |
01:41:26.580
And so every word is basically a set of coordinates
link |
01:41:29.260
in a high dimensional space.
link |
01:41:31.060
And then they use some kind of trick
link |
01:41:33.100
to also encode the order of the words in a sentence
link |
01:41:36.340
or in the not just sentence,
link |
01:41:37.820
but 2048 tokens is about a couple of pages of text
link |
01:41:41.780
or two and a half pages of text.
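One common version of that order-encoding trick is the sinusoidal positional encoding from the original transformer paper (the GPT models actually learn their position vectors instead, so this is an illustrative stand-in):

```python
import numpy as np

def positional_encoding(n_positions, d_model):
    # Each position gets a unique vector of sines and cosines at
    # different frequencies; adding it to the word embeddings lets
    # the model tell apart otherwise identical tokens by position.
    pos = np.arange(n_positions)[:, None]
    dim = np.arange(0, d_model, 2)[None, :]
    angle = pos / (10000 ** (dim / d_model))
    pe = np.zeros((n_positions, d_model))
    pe[:, 0::2] = np.sin(angle)  # even dimensions
    pe[:, 1::2] = np.cos(angle)  # odd dimensions
    return pe

pe = positional_encoding(2048, 64)  # one vector per context position
```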
link |
01:41:43.620
And so they managed to do pretty exhaustive statistics
link |
01:41:46.860
over the potential relationships
link |
01:41:49.140
between two pages of text, which is tremendous.
link |
01:41:51.740
I was just using a single sentence back then.
link |
01:41:55.020
And I was only looking for first order relationships.
link |
01:41:58.780
And they were really looking
link |
01:42:00.380
for much, much higher level relationships.
link |
01:42:02.740
And what they discover after they fed this
link |
01:42:05.300
with an enormous amount of training,
link |
01:42:06.580
data, which is pretty much the written internet
link |
01:42:08.980
or a subset of it that had some quality,
link |
01:42:12.140
but a substantial portion of Common Crawl,
link |
01:42:15.180
that they're not only able to reproduce style,
link |
01:42:18.180
but they're also able to reproduce
link |
01:42:19.860
some pretty detailed semantics,
link |
01:42:21.660
like being able to add three digit numbers
link |
01:42:24.700
and multiply two digit numbers
link |
01:42:26.220
or to translate between programming languages
link |
01:42:28.820
and things like that.
link |
01:42:30.220
So the results that GPT3 got, I think were amazing.
link |
01:42:34.060
By the way, I actually didn't check carefully.
link |
01:42:38.620
It's funny you just mentioned
link |
01:42:40.540
how you coupled semantics to the multiplication.
link |
01:42:42.940
Is it able to do some basic math on two digit numbers?
link |
01:42:46.700
Yes.
link |
01:42:47.940
Okay, interesting.
link |
01:42:48.820
I thought there's a lot of failure cases.
link |
01:42:53.100
Yeah, it basically fails if you take larger digit numbers.
link |
01:42:56.140
So with four digit numbers and so on, it makes carrying mistakes
link |
01:42:59.780
and so on.
link |
01:43:00.620
And if you take larger numbers,
link |
01:43:02.580
you don't get useful results at all.
link |
01:43:04.980
And this could be an issue of the training set
link |
01:43:09.260
where there are not many examples
link |
01:43:10.940
of successful long form addition
link |
01:43:13.300
in standard human written text.
link |
01:43:15.300
And humans aren't very good
link |
01:43:16.780
at doing three digit numbers either.
link |
01:43:19.460
Yeah, you're not writing a lot about it.
link |
01:43:22.340
And the other thing is that the loss function
link |
01:43:24.740
that is being used is only minimizing surprise.
link |
01:43:27.020
So it's predicting what comes next in the typical text.
link |
01:43:29.580
It's not trying to go for causal closure first
link |
01:43:32.260
as we do.
link |
01:43:33.100
Yeah.
link |
01:43:35.100
But the fact that that kind of prediction works
link |
01:43:39.620
to generate text that's semantically rich
link |
01:43:42.740
and consistent is interesting.
link |
01:43:45.020
Yeah.
link |
01:43:45.860
So yeah, so it's amazing that it's able
link |
01:43:47.220
to generate semantically consistent text.
link |
01:43:50.780
It's not consistent.
link |
01:43:51.940
So the problem is that it loses coherence at some point,
link |
01:43:54.660
but it's also, I think, not correct to say
link |
01:43:57.140
that GPT3 is unable to deal with semantics at all
link |
01:44:01.340
because you ask it to perform certain transformations
link |
01:44:04.100
in text and it performs these transformation in text.
link |
01:44:07.220
And the kind of additions that it's able
link |
01:44:09.220
to perform are transformations in text, right?
link |
01:44:12.540
And there are proper semantics involved.
link |
01:44:15.340
You can also do more.
link |
01:44:16.420
There was a paper that was generating lots
link |
01:44:19.900
and lots of mathematically correct text
link |
01:44:24.180
and was feeding this into a transformer.
link |
01:44:26.340
And as a result, it was able to learn
link |
01:44:29.340
how to do differentiation and integration in ways
link |
01:44:32.460
that according to the authors, Mathematica could not.
link |
01:44:37.340
To which some of the people in Mathematica responded
link |
01:44:39.860
that they were not using Mathematica in the right way
link |
01:44:42.700
and so on.
link |
01:44:43.540
I have not really followed the resolution of this conflict.
link |
01:44:46.380
This part, as a small tangent,
link |
01:44:48.700
I really don't like in machine learning papers,
link |
01:44:51.500
which they often do anecdotal evidence.
link |
01:44:56.620
They'll find like one example
link |
01:44:58.300
in some kind of specific use of Mathematica
link |
01:45:00.460
and demonstrate, look, here's,
link |
01:45:01.940
they'll show successes and failures,
link |
01:45:04.140
but they won't have a very clear representation
link |
01:45:07.660
of how many cases this actually represents.
link |
01:45:09.380
Yes, but I think as a first paper,
link |
01:45:11.260
this is a pretty good start.
link |
01:45:12.660
And so the take home message, I think,
link |
01:45:15.460
is that the authors could get better results
link |
01:45:19.420
from this in their experiments
link |
01:45:21.540
than they could get from the way in
link |
01:45:23.460
which they were using computer algebra systems,
link |
01:45:25.940
which means that was not nothing.
link |
01:45:29.100
And it's able to perform substantially better
link |
01:45:32.340
than GPT3 can, based on a much larger amount
link |
01:45:35.660
of training data using the same underlying algorithm.
link |
01:45:38.980
Well, let me ask, again,
link |
01:45:41.300
so I'm using your tweets as if this is like Plato, right?
link |
01:45:47.060
As if this is well thought out novels that you've written.
link |
01:45:51.780
You tweeted, GPT4 is listening to us now.
link |
01:45:58.660
This is one way of asking,
link |
01:46:00.300
what are the limitations of GPT3 when it scales?
link |
01:46:04.220
So what do you think will be the capabilities
link |
01:46:06.500
of GPT4, GPT5, and so on?
link |
01:46:10.260
What are the limits of this approach?
link |
01:46:11.780
So obviously when we are writing things right now,
link |
01:46:15.100
everything that we are writing now
link |
01:46:16.460
is going to be training data
link |
01:46:18.020
for the next generation of machine learning models.
link |
01:46:20.100
So yes, of course, GPT4 is listening to us.
link |
01:46:23.100
And I think the tweet is already a little bit older
link |
01:46:25.620
and we now have Wu Dao
link |
01:46:27.500
and we have a number of other systems
link |
01:46:30.140
that basically are placeholders for GPT4.
link |
01:46:33.620
I don't know what OpenAI's plans are in this regard.
link |
01:46:35.980
I read that tweet in several ways.
link |
01:46:39.060
So one is obviously everything you put on the internet
link |
01:46:42.700
is used as training data.
link |
01:46:44.660
But the second way I read it is,
link |
01:46:49.580
we talked about agency.
link |
01:46:51.620
I read it as almost like GPT4 is intelligent enough
link |
01:46:55.460
to be choosing to listen.
link |
01:46:58.260
So not only like did a programmer tell it
link |
01:47:00.460
to collect this data and use it for training,
link |
01:47:03.700
I almost saw the humorous angle,
link |
01:47:06.220
which is like it has achieved AGI kind of thing.
link |
01:47:09.100
Well, the thing is, could we already be living in GPT5?
link |
01:47:13.300
So GPT4 is listening and GPT5 actually constructing
link |
01:47:18.300
the entirety of the reality where we...
link |
01:47:20.420
Of course, in some sense,
link |
01:47:22.340
what everybody is trying to do right now in AI
link |
01:47:24.500
is to extend the transformer to be able to deal with video.
link |
01:47:28.980
And there are very promising extensions, right?
link |
01:47:31.860
There's a work by Google that is called Perceiver
link |
01:47:36.060
and that is overcoming some of the limitations
link |
01:47:39.300
of the transformer by letting it learn the topology
link |
01:47:41.980
of the different modalities separately.
link |
01:47:44.900
And by training it to find better input features.
link |
01:47:50.060
So basically feature abstractions that are being used
link |
01:47:52.540
by this successor to GPT3 are chosen in such a way
link |
01:47:58.140
that it's able to deal with video input.
link |
01:48:00.780
And there is more to be done.
link |
01:48:02.220
So one of the limitations of GPT3 is that it's amnesiac.
link |
01:48:07.820
So it forgets everything beyond the two pages
link |
01:48:09.980
that it currently reads also during generation,
link |
01:48:12.340
not just during learning.
link |
01:48:14.420
Do you think that's fixable
link |
01:48:16.580
within the space of deep learning?
link |
01:48:18.700
Can you just make a bigger, bigger, bigger input?
link |
01:48:21.340
No, I don't think that our own working memory
link |
01:48:24.500
is infinitely large.
link |
01:48:25.620
It's probably also just a few thousand bits.
link |
01:48:28.020
But what you can do is you can structure
link |
01:48:31.060
this working memory.
link |
01:48:31.900
So instead of just force feeding this thing,
link |
01:48:34.980
a certain thing that it has to focus on,
link |
01:48:37.060
and it's not allowed to focus on anything else
link |
01:48:39.460
as its network,
link |
01:48:41.100
you allow it to construct its own working memory as we do.
link |
01:48:44.860
When we are reading a book,
link |
01:48:46.780
it's not that we are focusing our attention
link |
01:48:48.700
in such a way that we can only remember the current page.
link |
01:48:52.380
We will also try to remember other pages
link |
01:48:54.660
and try to undo what we learned from them
link |
01:48:56.860
or modify what we learned from them.
link |
01:48:58.660
We might get up and take another book from the shelf.
link |
01:49:01.020
We might go out and ask somebody,
link |
01:49:03.060
we can edit our working memory in any way that is useful
link |
01:49:06.860
to put a context together that allows us
link |
01:49:09.260
to draw the right inferences and to learn the right things.
link |
01:49:13.100
So this ability to perform experiments on the world
link |
01:49:16.380
based on an attempt to become fully coherent
link |
01:49:20.420
and to achieve causal closure,
link |
01:49:22.260
to achieve a certain aesthetic of your modeling,
link |
01:49:24.860
that is something that eventually needs to be done.
link |
01:49:28.300
And at the moment we are skirting this in some sense
link |
01:49:31.140
by building systems that are larger and faster
link |
01:49:33.420
so they can use dramatically larger resources
link |
01:49:36.100
than human beings can, and much more training data
link |
01:49:38.700
to get to models that in some sense
link |
01:49:40.420
are already way superhuman
link |
01:49:42.380
and in other ways are laughingly incoherent.
link |
01:49:45.500
So do you think sort of making the systems like,
link |
01:49:50.060
what would you say, multi resolutional?
link |
01:49:51.900
So like some of the language models
link |
01:49:56.980
are focused on two pages,
link |
01:49:59.820
some are focused on two books,
link |
01:50:03.380
some are focused on two years of reading,
link |
01:50:06.580
some are focused on a lifetime,
link |
01:50:08.100
so it's like stacks of GPT3s all the way down.
link |
01:50:11.900
You want to have gaps in between them.
link |
01:50:13.700
So it's not necessarily two years with no gaps.
link |
01:50:17.060
It's things out of two years or out of 20 years
link |
01:50:19.980
or 2,000 years or 2 billion years
link |
01:50:22.220
where you are just selecting those bits
link |
01:50:24.580
that are predicted to be the most useful ones
link |
01:50:27.540
to understand what you're currently doing.
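One way current systems approximate that selection is retrieval: embed every stored chunk and pull in only the few most relevant to the present context. This is a hypothetical sketch of the idea, not how GPT3 itself works:

```python
import numpy as np

def retrieve(query, memory, k=2):
    # Score each stored memory vector by cosine similarity to the
    # current context vector and keep only the top-k most relevant.
    q = query / np.linalg.norm(query)
    m = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    scores = m @ q
    return np.argsort(-scores)[:k]

# three stored memory chunks as toy 2-d embeddings
memory = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
query = np.array([1.0, 0.0])   # the current context
top = retrieve(query, memory)  # indices of the most relevant chunks
```

The hard part he points to is that the scoring model itself has to predict usefulness, which is a far richer problem than cosine similarity.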
link |
01:50:29.700
And this prediction itself requires a very complicated model
link |
01:50:32.780
and that's the actual model that you need to be making.
link |
01:50:34.780
It's not just that you are trying to understand
link |
01:50:36.940
the relationships between things,
link |
01:50:38.340
but what you need to discover
link |
01:50:40.740
relationships over.
link |
01:50:42.540
I wonder what that thing looks like,
link |
01:50:45.500
what the architecture for the thing
link |
01:50:47.820
that's able to have that kind of model.
link |
01:50:50.100
I think it needs more degrees of freedom
link |
01:50:52.460
than the current models have.
link |
01:50:54.300
So it starts out with the fact that you possibly
link |
01:50:57.420
don't just want to have a feed forward model,
link |
01:50:59.700
but you want it to be fully recurrent.
link |
01:51:02.700
And to make it fully recurrent,
link |
01:51:04.540
you probably need to loop it back into itself
link |
01:51:06.660
and allow it to skip connections.
link |
01:51:08.220
Once you do this,
link |
01:51:09.980
when you're predicting the next frame
link |
01:51:12.020
and your internal next frame in every moment,
link |
01:51:15.180
and you are able to skip connection,
link |
01:51:17.500
it means that signals can travel from the output
link |
01:51:21.300
of the network into the middle of the network
link |
01:51:24.300
faster than the inputs do.
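That feedback idea can be made concrete with a toy two-layer recurrent step, where the previous output state is fed back into the middle layer (a sketch of the architecture being described, with made-up one-dimensional weights):

```python
import numpy as np

def step(x, h_out, W_in, W_mid, W_skip):
    # One tick of a recurrent net: the previous OUTPUT state feeds
    # back into the middle layer (the skip connection), so output
    # information reaches the middle faster than new inputs do.
    h_mid = np.tanh(W_in @ x + W_skip @ h_out)
    h_out_new = np.tanh(W_mid @ h_mid)
    return h_mid, h_out_new

W = np.array([[1.0]])  # toy 1-d weights for all three connections
h_mid, h_out = step(np.array([0.5]), np.array([0.0]), W, W, W)
```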
link |
01:51:25.900
Do you think it can still be differentiable?
link |
01:51:28.780
Do you think it still can be a neural network?
link |
01:51:30.660
Sometimes it can and sometimes it cannot.
link |
01:51:32.980
So it can still be a neural network,
link |
01:51:35.500
but not a fully differentiable one.
link |
01:51:37.180
And when you want to deal with non differentiable ones,
link |
01:51:40.900
you need to have an attention system
link |
01:51:42.780
that is discrete and two dimensional
link |
01:51:44.780
and can perform grammatical operations.
link |
01:51:46.660
You need to be able to perform program synthesis.
link |
01:51:49.300
You need to be able to backtrack
link |
01:51:51.220
in these operations that you perform on this thing.
link |
01:51:54.020
And this thing needs a model of what it's currently doing.
link |
01:51:56.460
And I think this is exactly the purpose
link |
01:51:58.420
of our own consciousness.
link |
01:52:01.220
Yeah, the program synthesis things are tricky with neural networks.
link |
01:52:05.380
So let me ask you, it's not quite program synthesis,
link |
01:52:09.020
but the application of these language models
link |
01:52:12.060
to generation, to program synthesis,
link |
01:52:15.060
but generation of programs.
link |
01:52:16.580
So if you look at GitHub Copilot,
link |
01:52:19.180
which is based on OpenAI's Codex,
link |
01:52:21.220
I don't know if you got a chance to look at it,
link |
01:52:22.780
but it's the system that's able to generate code
link |
01:52:26.180
once you prompt it with, what is it?
link |
01:52:30.100
Like the header of a function with some comments.
link |
01:52:32.700
And it seems to do an incredibly good job
link |
01:52:36.060
or not a perfect job, which is very important,
link |
01:52:39.220
but an incredibly good job of generating functions.
link |
01:52:42.780
What do you make of that?
link |
01:52:44.220
Are you, is this exciting
link |
01:52:45.500
or is this just a party trick, a demo?
link |
01:52:48.980
Or is this revolutionary?
link |
01:52:51.860
I haven't worked with it yet.
link |
01:52:52.980
So it's difficult for me to judge it,
link |
01:52:55.140
but I would not be surprised
link |
01:52:57.100
if it turns out to be a revolutionary.
link |
01:52:59.540
And that's because the majority of programming tasks
link |
01:53:01.740
that are being done in the industry right now
link |
01:53:04.260
are not creative.
link |
01:53:05.940
People are writing code that other people have written,
link |
01:53:08.500
or they're putting things together from code fragments
link |
01:53:10.620
that others have had.
link |
01:53:11.540
And a lot of the work that programmers do in practice
link |
01:53:14.300
is to figure out how to overcome the gaps
link |
01:53:17.780
in their current knowledge
link |
01:53:18.900
and the things that people have already done.
link |
01:53:20.940
How to copy and paste from Stack Overflow, that's right.
link |
01:53:24.060
And so of course we can automate that.
link |
01:53:26.380
Yeah, to make it much faster to copy and paste
link |
01:53:29.900
from Stack Overflow.
link |
01:53:30.860
Yes, but it's not just copying and pasting.
link |
01:53:32.820
It's also basically learning which parts you need to modify
link |
01:53:36.580
to make them fit together.
link |
01:53:38.140
Yeah, like literally sometimes as simple
link |
01:53:41.180
as just changing the variable names.
link |
01:53:43.380
So it fits into the rest of your code.
link |
01:53:45.060
Yes, but this requires that you understand the semantics
link |
01:53:47.460
of what you're doing to some degree.
link |
01:53:49.660
And you can automate some of those things.
link |
01:53:51.580
The thing that makes people nervous of course
link |
01:53:53.500
is that a little bit wrong in a program
link |
01:53:57.780
can have a dramatic effect on the actual final operation
link |
01:54:02.700
of that program.
link |
01:54:03.540
So that's one little error,
link |
01:54:05.380
which in the space of language doesn't really matter,
link |
01:54:08.780
but in the space of programs can matter a lot.
link |
01:54:11.980
Yes, but this is already what is happening
link |
01:54:14.100
when humans program code.
link |
01:54:15.820
Yeah, this is.
link |
01:54:16.820
So we have a technology to deal with this.
link |
01:54:20.220
Somehow it becomes scarier when you know
link |
01:54:23.500
that a program generated code
link |
01:54:25.100
that's running a nuclear power plant.
link |
01:54:27.940
It becomes scarier.
link |
01:54:29.100
You know, humans have errors too.
link |
01:54:31.380
Exactly.
link |
01:54:32.220
But it's scarier when a program is doing it
link |
01:54:35.100
because why, why?
link |
01:54:38.700
I mean, there's a fear that a program,
link |
01:54:43.660
like a program may not be as good as humans
link |
01:54:48.020
to know when stuff is important to not mess up.
link |
01:54:51.340
Like there's a misalignment of priorities of values
link |
01:55:00.260
that's potential.
link |
01:55:01.300
Maybe that's the source of the worry.
link |
01:55:03.500
I mean, okay, if I give you code generated
link |
01:55:06.380
by GitHub Copilot and code generated by a human
link |
01:55:12.500
and say here, use one of these,
link |
01:55:16.020
how do you select, today and in the next 10 years,
link |
01:55:20.380
which code do you use?
link |
01:55:21.860
Wouldn't you still be comfortable with the human?
link |
01:55:25.380
At the moment when you go to Stanford to get an MRI,
link |
01:55:29.580
they will write a bill to the insurance over $20,000.
link |
01:55:34.580
And of this, maybe half of that gets paid by the insurance
link |
01:55:38.300
and a quarter gets paid by you.
link |
01:55:40.580
And the MRI costs them $600 to make, probably less.
link |
01:55:44.940
And what are the values of the person
link |
01:55:47.700
that writes the software and deploys this process?
link |
01:55:51.820
It's very difficult for me to say whether I trust people.
link |
01:55:56.260
I think that what happens there is a mixture
link |
01:55:58.700
of proper Anglo Saxon Protestant values
link |
01:56:01.980
where somebody is trying to serve an abstract greater whole,
link |
01:56:04.860
and organized crime.
link |
01:56:06.300
Well, that's a very harsh,
link |
01:56:10.980
I think that's a harsh view of humanity.
link |
01:56:15.500
There's a lot of bad people, whether incompetent
link |
01:56:18.780
or just malevolent in this world, yes.
link |
01:56:21.700
But it feels like the more malevolent,
link |
01:56:25.820
so the more damage you do to the world,
link |
01:56:29.580
the more resistance you have in your own human heart.
link |
01:56:34.420
Yeah, but don't explain with malevolence or stupidity
link |
01:56:37.140
what can be explained by just people
link |
01:56:38.780
acting on their incentives.
link |
01:56:41.540
Right, so what happens in Stanford
link |
01:56:42.940
is not that somebody is evil.
link |
01:56:45.100
It's just that they do what they're being paid for.
link |
01:56:48.740
No, it's not evil.
link |
01:56:50.500
That's, I tend to, no, I see that as malevolence.
link |
01:56:53.740
I see as I, even like being a good German,
link |
01:56:58.780
as I told you offline, is some,
link |
01:57:01.540
it's not absolute malevolence,
link |
01:57:05.060
but it's a small amount, it's cowardice.
link |
01:57:07.460
I mean, when you see there's something wrong with the world,
link |
01:57:10.580
it's either incompetence and you're not able to see it,
link |
01:57:15.100
or it's cowardice that you're not able to stand up,
link |
01:57:17.780
not necessarily in a big way, but in a small way.
link |
01:57:21.620
So I do think that is a bit of malevolence.
link |
01:57:25.940
I'm not sure the example you're describing
link |
01:57:27.660
is a good example of that.
link |
01:57:28.500
So the question is, what is it that you are aiming for?
link |
01:57:31.220
And if you don't believe in the future,
link |
01:57:34.900
if you, for instance, think that the dollar is going to crash,
link |
01:57:37.460
why would you try to save dollars?
link |
01:57:39.540
If you don't think that humanity will be around
link |
01:57:42.580
in a hundred years from now,
link |
01:57:43.740
because global warming will wipe out civilization,
link |
01:57:47.500
why would you need to act as if it were?
link |
01:57:50.220
Right, so the question is,
link |
01:57:51.660
is there an overarching aesthetics
link |
01:57:53.980
that is projecting you and the world into the future,
link |
01:57:56.900
which I think is the basic idea of religion,
link |
01:57:59.020
that you understand the interactions
link |
01:58:01.220
that we have with each other
link |
01:58:02.340
as some kind of civilization level agent
link |
01:58:04.780
that is projecting itself into the future.
link |
01:58:07.180
If you don't have that shared purpose,
link |
01:58:10.420
what is there to be ethical for?
link |
01:58:12.940
So I think when we talk about ethics and AI,
link |
01:58:16.380
we need to go beyond the insane bias discussions and so on,
link |
01:58:20.020
where people are just measuring the distance
link |
01:58:22.060
between a statistic and their preferred current world model.
link |
01:58:27.940
The optimism, wait, wait, wait,
link |
01:58:29.340
I was a little confused by the previous thing,
link |
01:58:31.180
just to clarify.
link |
01:58:32.260
There is a kind of underlying morality
link |
01:58:39.820
to having an optimism that human civilization
link |
01:58:43.620
will persist for longer than a hundred years.
link |
01:58:45.580
Like I think a lot of people believe
link |
01:58:50.060
that it's a good thing for us to keep living.
link |
01:58:53.220
Yeah, of course.
link |
01:58:54.060
And thriving.
link |
01:58:54.900
This morality itself is not an end in itself.
link |
01:58:56.900
It's instrumental to people living in a hundred years
link |
01:58:59.660
from now or 500 years from now.
link |
01:59:03.100
So it's only justifiable if you actually think
link |
01:59:06.540
that it will lead to people or increase the probability
link |
01:59:09.780
of people being around in that timeframe.
link |
01:59:12.500
And a lot of people don't actually believe that,
link |
01:59:14.980
at least not actively.
link |
01:59:16.860
But believe what exactly?
link |
01:59:17.980
So I was...
link |
01:59:19.180
Most people don't believe
link |
01:59:20.620
that they can afford to act on such a model.
link |
01:59:23.500
Basically what happens in the US
link |
01:59:25.340
is I think that the healthcare system
link |
01:59:26.940
is for a lot of people no longer sustainable,
link |
01:59:28.940
which means that if they need the help
link |
01:59:30.580
of the healthcare system,
link |
01:59:31.540
they're often not able to afford it.
link |
01:59:33.460
And when they cannot help it,
link |
01:59:35.060
they are often going bankrupt.
link |
01:59:37.300
I think the leading cause of personal bankruptcy
link |
01:59:40.220
in the US is the healthcare system.
link |
01:59:42.740
And that would not be necessary.
link |
01:59:44.820
It's not because people are consuming
link |
01:59:46.660
more and more medical services
link |
01:59:48.780
and are achieving a much, much longer life as a result.
link |
01:59:51.460
That's not actually the story that is happening
link |
01:59:53.620
because you can compare it to other countries.
link |
01:59:55.300
And life expectancy in the US is currently not increasing
link |
01:59:58.420
and it's not as high as in all the other
link |
02:00:00.420
industrialized countries.
link |
02:00:01.700
So some industrialized countries are doing better
link |
02:00:03.820
with a much cheaper healthcare system.
link |
02:00:06.260
And what you can see is for instance,
link |
02:00:08.580
administrative bloat.
link |
02:00:09.980
The healthcare system has maybe to some degree
link |
02:00:13.660
deliberately been set up as a job placement program
link |
02:00:17.220
to allow people to continue living
link |
02:00:19.740
in middle class existence,
link |
02:00:20.940
despite not having a useful use case in productivity.
link |
02:00:25.940
So they are being paid to push paper around.
link |
02:00:28.700
And the number of administrators in the healthcare system
link |
02:00:31.540
has been increasing much faster
link |
02:00:33.180
than the number of practitioners.
link |
02:00:35.020
And this is something that you have to pay for.
link |
02:00:37.060
And also the revenues that are being generated
link |
02:00:40.460
in the healthcare system are relatively large
link |
02:00:42.220
and somebody has to pay for them.
link |
02:00:43.700
And the result why they are so large
link |
02:00:45.900
is because market mechanisms are not working.
link |
02:00:48.460
The FDA is largely not protecting people
link |
02:00:51.860
from malpractice of healthcare providers.
link |
02:00:55.380
The FDA is protecting healthcare providers
link |
02:00:58.660
from competition.
link |
02:00:59.940
Right, okay.
link |
02:01:00.780
So this is a thing that has to do with values.
link |
02:01:03.380
And this is not because people are malicious on all levels.
link |
02:01:06.500
It's because they are not incentivized
link |
02:01:08.460
to act on a greater whole on this idea
link |
02:01:11.380
that you treat somebody who comes to you as a patient,
link |
02:01:14.340
like you would treat a family member.
link |
02:01:15.660
Yeah, but we're trying, I mean,
link |
02:01:18.020
you're highlighting a lot of the flaws
link |
02:01:20.020
of the different institutions,
link |
02:01:21.220
the systems we're operating under,
link |
02:01:23.100
but I think there's a continued throughout history
link |
02:01:25.940
mechanism design of trying to design incentives
link |
02:01:29.380
in such a way that these systems behave
link |
02:01:31.420
better and better and better.
link |
02:01:32.780
I mean, it's a very difficult thing
link |
02:01:34.220
to operate a society of hundreds of millions of people
link |
02:01:38.140
effectively.
link |
02:01:39.220
Yes, so do we live in a society that is ever correcting?
link |
02:01:42.820
Is this, do we observe that our models
link |
02:01:46.740
of what we are doing are predictive of the future
link |
02:01:49.420
and when they are not, do we improve them?
link |
02:01:51.540
Are our laws adjudicated with clauses
link |
02:01:54.780
that you put into every law,
link |
02:01:56.020
what is meant to be achieved by that law
link |
02:01:57.900
and the law will be automatically repealed
link |
02:02:00.060
if it's not achieving that, right?
link |
02:02:01.340
If you are optimizing your own laws,
link |
02:02:03.220
if you're writing your own source code,
link |
02:02:05.140
you probably make an estimate of what is this thing
link |
02:02:08.180
that's currently wrong in my life?
link |
02:02:09.420
What is it that I should change about my own policies?
link |
02:02:12.180
What is the expected outcome?
link |
02:02:14.100
And if that outcome doesn't manifest,
link |
02:02:16.580
I will change the policy back, right?
link |
02:02:18.460
Or I would change it to something different.
link |
02:02:20.260
Are we doing this on a societal level?
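The policy-revision loop just described, change a policy, predict an outcome, and roll the change back if the prediction doesn't manifest, can be sketched as follows (purely illustrative; all names here are invented, not anything from the conversation):

```python
def revise_policy(current_policy, candidate_policy, expected_outcome, run_trial):
    """Try a candidate policy; keep it only if the predicted outcome manifests."""
    observed = run_trial(candidate_policy)
    if observed == expected_outcome:
        return candidate_policy  # the model was predictive: adopt the change
    return current_policy        # prediction failed: roll the change back

# Toy example: a personal "sleep policy" evaluated by alertness.
def alertness(policy):
    return "alert" if policy == "sleep 8 hours" else "tired"

policy = revise_policy("sleep 4 hours", "sleep 8 hours", "alert", alertness)
print(policy)  # sleep 8 hours
```

The question being raised is whether laws and institutions are run with this kind of explicit expected-outcome check and automatic rollback, rather than persisting regardless of results.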
link |
02:02:22.220
I think so.
link |
02:02:23.060
I think it's easy to sort of highlight the,
link |
02:02:25.580
I think we're doing it in the way that,
link |
02:02:29.020
like I operate my current life.
link |
02:02:30.380
I didn't sleep much last night.
link |
02:02:32.580
You would say that Lex,
link |
02:02:34.740
the way you need to operate your life
link |
02:02:35.980
is you need to always get sleep.
link |
02:02:37.340
The fact that you didn't sleep last night
link |
02:02:39.060
is totally the wrong way to operate in your life.
link |
02:02:43.060
Like you should have gotten all your shit done in time
link |
02:02:46.460
and gotten to sleep because sleep is very important
link |
02:02:48.940
for health and you're highlighting,
link |
02:02:50.540
look, this person is not sleeping.
link |
02:02:52.500
Look, the medical, the healthcare system is operating poorly.
link |
02:02:56.380
But the point is we just,
link |
02:02:59.140
it seems like this is the way,
link |
02:03:00.460
especially in the capitalist society, we operate.
link |
02:03:02.700
We keep running into trouble and last minute,
link |
02:03:05.740
we try to get our way out through innovation
link |
02:03:09.260
and it seems to work.
link |
02:03:10.740
You have a lot of people that ultimately are trying
link |
02:03:13.380
to build a better world and get urgency about them
link |
02:03:18.380
when the problem becomes more and more imminent.
link |
02:03:22.900
And that's the way this operates.
link |
02:03:24.380
But if you look at the long arc of history,
link |
02:03:29.820
it seems like that operating on deadlines
link |
02:03:34.180
produces progress and builds better and better systems.
link |
02:03:36.980
You probably agree with me that the US
link |
02:03:39.060
should have engaged in mask production in January 2020
link |
02:03:44.580
and that we should have shut down the airports early on
link |
02:03:47.900
and that we should have made it mandatory
link |
02:03:50.940
that the people that work in nursing homes
link |
02:03:53.860
are living on campus rather than living at home
link |
02:03:57.940
and then coming in and infecting people in the nursing homes
link |
02:04:01.460
that had no immune response to COVID.
link |
02:04:03.900
And that is something that was, I think, visible back then.
link |
02:04:08.180
The correct decisions haven't been made.
link |
02:04:10.540
We would have the same situation again.
link |
02:04:12.580
How do we know that these wrong decisions
link |
02:04:14.340
are not being made again?
link |
02:04:15.780
Have the people that made the decisions
link |
02:04:17.620
to not protect the nursing homes been punished?
link |
02:04:20.580
Have the people that made the wrong decisions
link |
02:04:23.180
with respect to testing that prevented the development
link |
02:04:26.780
of testing by startup companies and the importing
link |
02:04:29.500
of tests from countries that already had them,
link |
02:04:32.140
have these people been held responsible?
link |
02:04:34.460
First of all, so what do you wanna put
link |
02:04:37.380
before the firing squad?
link |
02:04:38.780
I think they are being held responsible.
link |
02:04:39.860
No, just make sure that this doesn't happen again.
link |
02:04:41.820
No, but it's not that, yes, they're being held responsible
link |
02:04:46.220
by many voices, by people being frustrated.
link |
02:04:48.820
There's new leaders being born now
link |
02:04:50.740
that we're going to see rise to the top in 10 years.
link |
02:04:54.220
This moves slower than, there's obviously
link |
02:04:57.380
a lot of older incompetence and bureaucracy
link |
02:05:01.220
and these systems move slowly.
link |
02:05:03.660
They move like science, one death at a time.
link |
02:05:06.860
So yes, I think the pain that's been felt
link |
02:05:11.340
in the previous year is reverberating throughout the world.
link |
02:05:15.540
Maybe I'm getting old, I suspect that every generation
link |
02:05:18.340
in the US after the war has lost the plot even more.
link |
02:05:21.500
I don't see this development.
link |
02:05:23.140
The war, World War II?
link |
02:05:24.700
Yes, so basically there was a time when we were modernist
link |
02:05:29.140
and in this modernist time, the US felt actively threatened
link |
02:05:33.620
by the things that happened in the world.
link |
02:05:35.740
The US was worried about possibility of failure
link |
02:05:39.660
and this imminence of possible failure led to decisions.
link |
02:05:44.580
There was a time when the government would listen
link |
02:05:47.340
to physicists about how to do things
link |
02:05:50.540
and the physicists were actually concerned
link |
02:05:52.100
about what the government should be doing.
link |
02:05:53.580
So they would be writing letters to the government
link |
02:05:56.100
and so for instance, the decision for the Manhattan Project
link |
02:05:58.860
was something that was driven in a conversation
link |
02:06:01.740
between physicists and the government.
link |
02:06:04.020
I don't think such a discussion would take place today.
link |
02:06:06.900
I disagree, I think if the virus was much deadlier,
link |
02:06:10.500
we would see a very different response.
link |
02:06:12.620
I think the virus was not sufficiently deadly
link |
02:06:14.900
and instead because it wasn't very deadly,
link |
02:06:17.420
what happened is the current system
link |
02:06:20.420
started to politicize it.
link |
02:06:21.980
The mask, this is what I realized with masks early on,
link |
02:06:25.300
they very quickly became not a solution
link |
02:06:29.540
but they became a thing that politicians used
link |
02:06:32.620
to divide the country.
link |
02:06:33.900
So the same things happened with vaccines, same thing.
link |
02:06:36.940
So like nobody's really,
link |
02:06:38.740
people weren't talking about solutions to this problem
link |
02:06:41.100
because I don't think the problem was bad enough.
link |
02:06:43.060
When you talk about the war,
link |
02:06:45.020
I think our lives are too comfortable.
link |
02:06:48.740
I think in the developed world, things are too good
link |
02:06:52.220
and we have not faced severe dangers.
link |
02:06:54.900
When the danger, the severe dangers,
link |
02:06:57.500
existential threats are faced, that's when we step up
link |
02:07:00.740
on a small scale and a large scale.
link |
02:07:02.980
Now, I don't, that's sort of my argument here
link |
02:07:07.980
but I did think the virus is, I was hoping
link |
02:07:11.700
that it was actually sufficiently dangerous
link |
02:07:16.060
for us to step up because especially in the early days,
link |
02:07:18.660
it was unclear, it still is unclear because of mutations,
link |
02:07:23.260
how bad it might be, right?
link |
02:07:25.820
And so I thought we would step up and even,
link |
02:07:30.700
so the masks point is a tricky one because to me,
link |
02:07:35.540
the manufacture of masks isn't even the problem.
link |
02:07:38.780
I'm still to this day and I was involved
link |
02:07:41.020
with a bunch of this work, have not seen good science done
link |
02:07:44.300
on whether masks work or not.
link |
02:07:46.860
Like there still has not been a large scale study.
link |
02:07:49.420
To me, that should be, there should be large scale studies
link |
02:07:51.820
and every possible solution, like aggressive
link |
02:07:55.140
in the same way that the vaccine development
link |
02:07:56.780
was aggressive.
link |
02:07:57.740
There should be masks, which tests,
link |
02:07:59.860
what kind of tests work really well, what kind of,
link |
02:08:03.740
like even the question of how the virus spreads.
link |
02:08:06.860
There should be aggressive studies on that to understand.
link |
02:08:09.820
I'm still, as far as I know, there's still a lot
link |
02:08:12.860
of uncertainty about that.
link |
02:08:14.180
Nobody wants to see this as an engineering problem
link |
02:08:17.100
that needs to be solved.
link |
02:08:18.540
That's what I was surprised about.
link |
02:08:21.940
So I find that our views are largely convergent
link |
02:08:24.580
but not completely.
link |
02:08:25.460
So I agree with the thing that because our society
link |
02:08:29.340
in some sense perceives itself as too big to fail.
link |
02:08:32.580
Right.
link |
02:08:33.420
The virus did not alert people to the fact
link |
02:08:35.940
that we are facing possible failure
link |
02:08:38.820
that basically put us into the postmodernist mode.
link |
02:08:41.540
And I don't mean in a philosophical sense
link |
02:08:43.260
but in a societal sense.
link |
02:08:45.220
The difference between the postmodern society
link |
02:08:47.940
and the modern society is that the modernist society
link |
02:08:50.540
has to deal with the ground truth
link |
02:08:52.340
and the postmodernist society has to deal with appearances.
link |
02:08:55.540
Politics becomes a performance
link |
02:08:57.820
and the performance is done for an audience
link |
02:08:59.780
and the organized audience is the media.
link |
02:09:02.260
And the media evaluates itself via other media, right?
link |
02:09:05.380
So you have an audience of critics that evaluate themselves.
link |
02:09:09.100
And I don't think it's so much the failure
link |
02:09:10.660
of the politicians because to get in power
link |
02:09:12.820
and to stay in power, you need to be able
link |
02:09:15.700
to deal with the published opinion.
link |
02:09:17.580
Well, I think it goes in cycles
link |
02:09:19.220
because what's going to happen is all
link |
02:09:22.300
of the small business owners, all the people
link |
02:09:24.820
who truly are suffering and will suffer more
link |
02:09:27.900
because the effects of the closure of the economy
link |
02:09:31.940
and the lack of solutions to the virus,
link |
02:09:34.220
they're going to rise up.
link |
02:09:36.300
And hopefully, I mean, this is where charismatic leaders
link |
02:09:40.180
can get the world in trouble
link |
02:09:42.620
but hopefully we'll elect great leaders
link |
02:09:47.900
that will break through this postmodernist idea
link |
02:09:51.100
of the media and the perception
link |
02:09:55.420
and the drama on Twitter and all that kind of stuff.
link |
02:09:57.660
But you know, this can go either way.
link |
02:09:59.340
Yeah.
link |
02:10:00.300
When the Weimar Republic was unable to deal
link |
02:10:03.620
with the economic crisis that Germany was facing,
link |
02:10:07.540
there was an option to go back.
link |
02:10:10.340
But there were people who thought,
link |
02:10:11.620
let's get back to a constitutional monarchy
link |
02:10:14.420
and let's get this to work because democracy doesn't work.
link |
02:10:18.780
And eventually, there was no way back.
link |
02:10:21.740
People decided there was no way back.
link |
02:10:23.300
They needed to go forward.
link |
02:10:24.420
And the only options for going forward
link |
02:10:26.700
were to become Stalinist communist,
link |
02:10:29.460
basically an option to completely expropriate
link |
02:10:34.260
the factories and so on and nationalize them
link |
02:10:36.820
and to reorganize Germany in communist terms
link |
02:10:40.540
and ally itself with Stalin, or fascism.
link |
02:10:44.660
And both options were obviously very bad.
link |
02:10:47.980
And the one that the Germans picked
link |
02:10:49.980
led to a catastrophe that devastated Europe.
link |
02:10:54.300
And I'm not sure if the US has an immune response
link |
02:10:57.180
against that.
link |
02:10:58.020
I think that the far right is currently very weak in the US,
link |
02:11:01.340
but this can easily change.
link |
02:11:05.740
Do you think from a historical perspective,
link |
02:11:08.780
Hitler could have been stopped
link |
02:11:10.820
from within Germany or from outside?
link |
02:11:14.020
Or this, well, depends on who you wanna focus,
link |
02:11:17.780
whether you wanna focus on Stalin or Hitler,
link |
02:11:20.220
but it feels like Hitler was the one
link |
02:11:22.380
as a political movement that could have been stopped.
link |
02:11:25.180
I think that the point was that a lot of people
link |
02:11:28.700
wanted Hitler, so he got support from a lot of quarters.
link |
02:11:32.340
There were a number of industrialists who supported him
link |
02:11:35.100
because they thought that the democracy
link |
02:11:36.780
is obviously not working and unstable
link |
02:11:38.460
and you need a strong man.
link |
02:11:40.620
And he was willing to play that part.
link |
02:11:43.220
There were also people in the US who thought
link |
02:11:45.460
that Hitler would stop Stalin
link |
02:11:47.860
and would act as a bulwark against Bolshevism,
link |
02:11:52.300
which he probably would have done, right?
link |
02:11:54.140
But at which cost?
link |
02:11:56.220
And then many of the things that he was going to do,
link |
02:11:59.740
like the Holocaust, was something where people thought
link |
02:12:03.780
this is rhetoric, he's not actually going to do this.
link |
02:12:07.140
Especially many of the Jews themselves, who were humanists.
link |
02:12:10.020
And for them, this was outside of the scope
link |
02:12:12.300
that was thinkable.
link |
02:12:13.260
Right.
link |
02:12:14.260
I mean, I wonder if Hitler is uniquely,
link |
02:12:20.140
I wanna carefully use this term, but uniquely evil.
link |
02:12:23.500
So if Hitler was never born,
link |
02:12:26.420
if somebody else would come in this place.
link |
02:12:29.100
So like, just thinking about the progress of history,
link |
02:12:33.780
how important are those singular figures
link |
02:12:36.700
that lead to mass destruction and cruelty?
link |
02:12:40.980
Because my sense is Hitler was unique.
link |
02:12:47.540
It wasn't just about the environment
link |
02:12:49.420
and the context that gave rise to him,
link |
02:12:51.620
like another person would not come in his place
link |
02:12:54.740
to do things as destructive as he did.
link |
02:12:58.220
There was a combination of charisma, of madness,
link |
02:13:02.860
of psychopathy, of just ego, all those things,
link |
02:13:07.180
which are very unlikely to come together
link |
02:13:09.540
in one person in the right time.
link |
02:13:12.540
It also depends on the context of the country
link |
02:13:14.660
that you're operating in.
link |
02:13:16.500
If you tell the Germans that they have a historical destiny
link |
02:13:22.220
in this romantic country,
link |
02:13:23.820
the effect is probably different
link |
02:13:25.500
than it is in other countries.
link |
02:13:27.180
But Stalin has killed a few more people than Hitler did.
link |
02:13:33.620
And if you look at the probability
link |
02:13:35.820
that you survived under Stalin,
link |
02:13:39.220
Hitler killed people if he thought
link |
02:13:43.140
they were not worth living,
link |
02:13:45.140
or if they were harmful to his racist project.
link |
02:13:49.260
He basically felt that the Jews would be too cosmopolitan
link |
02:13:52.580
and would not be willing to participate
link |
02:13:55.140
in the racist redefinition of society
link |
02:13:57.500
and the value of society,
link |
02:13:58.780
and of the state in the way
link |
02:14:01.420
that he wanted to have it.
link |
02:14:03.260
So he saw them as harmful danger,
link |
02:14:06.980
especially since they played such an important role
link |
02:14:09.460
in the economy and culture of Germany.
link |
02:14:13.300
And so basically he had some radical
link |
02:14:18.020
but rational reason to murder them.
link |
02:14:20.780
And Stalin just killed everyone.
link |
02:14:23.420
Basically the Stalinist purges were such a random thing
link |
02:14:26.140
where he said that there's a certain possibility
link |
02:14:31.580
that this particular part of the population
link |
02:14:34.660
has a number of German collaborators or something,
link |
02:14:36.740
and we just kill them all, right?
link |
02:14:38.820
Or if you look at what Mao did,
link |
02:14:40.660
the number of people that were killed
link |
02:14:42.980
in absolute numbers were much higher under Mao
link |
02:14:45.620
than they were under Stalin.
link |
02:14:47.660
So it's super hard to say.
link |
02:14:49.820
The other thing is that you look at Genghis Khan and so on,
link |
02:14:53.540
how many people he killed.
link |
02:14:56.100
When you see there are a number of things
link |
02:14:58.900
that happen in human history
link |
02:14:59.900
that actually really put a substantial dent
link |
02:15:02.500
in the existing population, or Napoleon.
link |
02:15:05.900
And it's very difficult to eventually measure it
link |
02:15:09.500
because what's happening is basically evolution
link |
02:15:12.020
on a human scale where one monkey figures out
link |
02:15:17.220
a way to become viral and is using this viral technology
link |
02:15:22.380
to change the patterns of society
link |
02:15:24.500
at the very, very large scale.
link |
02:15:26.500
And what we find so abhorrent about these changes
link |
02:15:29.860
is the complexity that is being destroyed by this.
link |
02:15:32.340
That's basically like a big fire that burns out
link |
02:15:34.940
a lot of the existing culture and structure
link |
02:15:36.780
that existed before.
link |
02:15:38.060
Yeah, and it all just starts with one monkey.
link |
02:15:42.580
One charismatic ape.
link |
02:15:44.460
And there's a bunch of them throughout history.
link |
02:15:46.060
Yeah, but it's in a given environment.
link |
02:15:47.940
It's basically similar to wildfires in California, right?
link |
02:15:51.100
The temperature is rising.
link |
02:15:53.260
There is less rain falling.
link |
02:15:55.540
And then suddenly a single spark can have an effect
link |
02:15:57.900
that in other times would be contained.
link |
02:16:00.780
Okay, speaking of which, I love how we went
link |
02:16:04.620
to Hitler and Stalin from 20, 30 minutes ago,
link |
02:16:09.020
GPT3 generating programs.
link |
02:16:13.620
The argument was about morality of AI versus human.
link |
02:16:23.580
And specifically in the context of writing programs,
link |
02:16:26.220
specifically in the context of programs
link |
02:16:28.540
that can be destructive.
link |
02:16:29.940
So running nuclear power plants
link |
02:16:31.860
or autonomous weapons systems, for example.
link |
02:16:35.100
And I think your inclination was to say that
link |
02:16:39.580
it's not so obvious that AI would be less moral than humans
link |
02:16:43.460
or less effective at making a world
link |
02:16:46.740
that would make humans happy.
link |
02:16:48.660
So I'm not talking about self directed systems
link |
02:16:52.660
that are making their own goals at a global scale.
link |
02:16:57.100
If you just talk about the deployment
link |
02:16:59.140
of technological systems that are able to see order
link |
02:17:03.380
and patterns and use this as control models
link |
02:17:05.580
to act on the goals that we give them,
link |
02:17:08.420
then if we have the correct incentives
link |
02:17:11.140
to set the correct incentives for these systems,
link |
02:17:13.180
I'm quite optimistic.
link |
02:17:16.060
So humans versus AI, let me give you an example.
link |
02:17:20.660
Autonomous weapon system.
link |
02:17:23.900
Let's say there's a city somewhere in the Middle East
link |
02:17:26.900
that has a number of terrorists.
link |
02:17:30.380
And the question is,
link |
02:17:32.180
what's currently done with drone technologies,
link |
02:17:35.020
you have information about the location
link |
02:17:37.100
of a particular terrorist and you have a targeted attack,
link |
02:17:40.620
you have a bombing of that particular building.
link |
02:17:43.980
And that's all directed by humans
link |
02:17:45.900
at the high level strategy
link |
02:17:47.980
and also at the deployment of individual bombs and missiles
link |
02:17:50.580
like the actual, everything is done by humans
link |
02:17:53.940
except the final targeting.
link |
02:17:56.700
And it's a similar thing with, like, controlling the flight.
link |
02:18:01.820
Okay, what if you give AI control and saying,
link |
02:18:07.860
write a program that says,
link |
02:18:10.300
here's the best information I have available
link |
02:18:12.180
about the location of these five terrorists,
link |
02:18:14.780
here's the city, make sure all the bombing you do
link |
02:18:17.820
is constrained to the city, make sure it's precision based,
link |
02:18:21.820
but you take care of it.
link |
02:18:22.860
So you do one level of abstraction out
link |
02:18:25.660
and saying, take care of the terrorists in the city.
link |
02:18:29.580
Which are you more comfortable with,
link |
02:18:31.420
the humans or the JavaScript GPT3 generated code
link |
02:18:35.700
that's doing the deployment?
link |
02:18:38.220
I mean, this is the kind of question I'm asking,
link |
02:18:42.340
is the kind of bugs that we see in human nature,
link |
02:18:47.100
are they better or worse than the kind of bugs we see in AI?
link |
02:18:51.220
There are different bugs.
link |
02:18:52.460
There is an issue that if people are creating
link |
02:18:55.980
an imperfect automation of a process
link |
02:18:59.900
that normally requires a moral judgment,
link |
02:19:02.860
and this moral judgment is the reason
link |
02:19:05.940
why it cannot be automated often,
link |
02:19:07.460
it's not because the computation is too expensive,
link |
02:19:12.180
but because the model that you give the AI
link |
02:19:14.300
is not an adequate model of the dynamics of the world,
link |
02:19:17.500
because the AI does not understand the context
link |
02:19:19.780
that it's operating in the right way.
link |
02:19:21.940
And this is something that already happens with Excel.
link |
02:19:24.940
You don't need to have an AI system to do this.
link |
02:19:27.860
You have an automated process in place
link |
02:19:30.340
where humans decide using automated criteria
link |
02:19:33.180
whom to kill when and whom to target when,
link |
02:19:36.020
which already happens.
link |
02:19:38.220
And you have no way to get off the kill list
link |
02:19:40.300
once that happens, once you have been targeted
link |
02:19:42.860
according to some automatic criterion
link |
02:19:44.860
by people in a bureaucracy, that is the issue.
link |
02:19:48.980
The issue is not the AI, it's the automation.
link |
02:19:52.260
So right, there's something about automation,
link |
02:19:56.380
but there's something about the fact that
link |
02:19:58.820
there's a certain level of abstraction
link |
02:20:00.660
where you give control to AI to do the automation.
link |
02:20:04.340
There's a scale that can be achieved
link |
02:20:07.100
that feels like a scale of bugs and mistakes
link |
02:20:10.780
and a scale of destruction
link |
02:20:14.580
of the kind that humans cannot achieve.
link |
02:20:16.860
So AI is much more able to destroy
link |
02:20:19.620
an entire country accidentally versus humans.
link |
02:20:22.580
It feels like the more civilians die
link |
02:20:27.300
or suffer as the consequence of your decisions,
link |
02:20:30.900
the more weight there is on the human mind
link |
02:20:34.260
to make that decision.
link |
02:20:36.380
And so like, it becomes more and more unlikely
link |
02:20:39.060
to make that decision for humans.
link |
02:20:41.460
For AI, it feels like it's harder to encode
link |
02:20:44.900
that kind of weight.
link |
02:20:47.020
In a way, the AI that we're currently building
link |
02:20:49.620
is automating statistics, right?
link |
02:20:51.900
Intelligence is the ability to make models
link |
02:20:53.860
so you can act on them,
link |
02:20:55.220
and AI is the tool to make better models.
link |
02:20:58.340
So in principle, if you're using AI wisely,
link |
02:21:01.540
you're able to prevent more harm.
link |
02:21:04.220
And I think that the main issue is not on the side of the AI,
link |
02:21:07.860
it's on the side of the human command hierarchy
link |
02:21:09.940
that is using technology irresponsibly.
link |
02:21:12.300
So the question is how hard is it to encode,
link |
02:21:15.740
to properly encode the right incentives into the AI?
link |
02:21:19.060
So for instance, there's this idea
link |
02:21:21.420
of what happens if we let our airplanes being flown
link |
02:21:24.460
with AI systems and the neural network is a black box
link |
02:21:27.620
and so on.
link |
02:21:28.460
And it turns out our neural networks
link |
02:21:30.220
are actually not black boxes anymore.
link |
02:21:32.300
They are function approximators using linear algebra,
link |
02:21:36.620
and they are performing things that we can understand.
link |
02:21:40.020
But we can also, instead of letting the neural network
link |
02:21:42.820
fly the airplane, use the neural network
link |
02:21:44.940
to generate a provably correct program.
link |
02:21:47.420
There's a degree of accuracy of the proof
link |
02:21:49.900
that a human could not achieve.
link |
02:21:51.860
And so we can use our AI by combining
link |
02:21:54.100
different technologies to build systems
link |
02:21:56.460
that are much more reliable than the systems
link |
02:21:58.340
that a human being could create.
link |
02:22:00.420
And so in this sense, I would say that
link |
02:22:03.900
if you use an early stage of technology to save labor
link |
02:22:08.340
and don't employ competent people,
link |
02:22:11.340
but just to hack something together because you can,
link |
02:22:14.180
that is very dangerous.
link |
02:22:15.260
And if people are acting under these incentives
link |
02:22:17.220
that they get away with delivering shoddy work
link |
02:22:20.380
more cheaply using AI with less human oversight than before,
link |
02:22:23.980
that's very dangerous.
link |
02:22:25.140
The thing is though, AI is still going to be unreliable,
link |
02:22:28.980
perhaps less so than humans,
link |
02:22:30.420
but it'll be unreliable in novel ways.
link |
02:22:33.820
And...
link |
02:22:35.340
Yeah, but this is an empirical question.
link |
02:22:37.180
And it's something that we can figure out and work with.
link |
02:22:39.860
So the issue is, do we trust the systems,
link |
02:22:43.100
the social systems that we have in place
link |
02:22:45.340
and the social systems that we can build and maintain
link |
02:22:48.020
that they're able to use AI responsibly?
link |
02:22:50.420
If they can, then AI is good news.
link |
02:22:52.980
If they cannot,
link |
02:22:54.100
then it's going to make the existing problems worse.
link |
02:22:57.220
Well, and also who creates the AI, who controls it,
link |
02:23:00.100
who makes money from it because it's ultimately humans.
link |
02:23:03.140
And then you start talking about
link |
02:23:05.060
how much you trust the humans.
link |
02:23:06.940
So the question is, what does who mean?
link |
02:23:08.740
I don't think that we have identity per se.
link |
02:23:11.140
I think that the story of a human being is somewhat random.
link |
02:23:15.500
What happens is more or less that everybody is acting
link |
02:23:18.420
on their local incentives,
link |
02:23:19.780
what they perceive to be their incentives.
link |
02:23:21.980
And the question is, what are the incentives
link |
02:23:24.620
that the one that is pressing the button is operating under?
link |
02:23:28.500
Yeah.
link |
02:23:30.140
It's nice for those incentives to be transparent.
link |
02:23:32.620
So, for example, I'll give you an example.
link |
02:23:36.060
There seems to be a significant distrust
link |
02:23:38.700
of a tech, like entrepreneurs in the tech space
link |
02:23:44.380
or people that run, for example, social media companies
link |
02:23:47.380
like Mark Zuckerberg.
link |
02:23:49.980
There's not a complete transparency of incentives
link |
02:23:53.060
under which that particular human being operates.
link |
02:23:58.940
We can listen to the words he says
link |
02:24:00.700
or what the marketing team says for a company,
link |
02:24:02.980
but we don't know.
link |
02:24:04.220
And that becomes a problem when the algorithms
link |
02:24:08.260
and the systems created by him and other people
link |
02:24:12.780
in that company start having more and more impact
link |
02:24:15.780
on society.
link |
02:24:17.180
And if the definition
link |
02:24:21.940
and the explainability of the incentives
link |
02:24:26.020
were somehow decentralized such that nobody can manipulate them,
link |
02:24:30.860
no propaganda type manipulation of like
link |
02:24:35.580
how these systems actually operate could be done,
link |
02:24:38.020
then yes, I think AI could achieve much fairer,
link |
02:24:45.340
much more effective sort of like solutions
link |
02:24:50.580
to difficult ethical problems.
link |
02:24:53.260
But when there's like humans in the loop,
link |
02:24:55.780
manipulating the dissemination, the communication
link |
02:25:00.580
of how the system actually works,
link |
02:25:02.420
that feels like you can run into a lot of trouble.
link |
02:25:05.300
And that's why there's currently a lot of distrust
link |
02:25:07.740
for people at the heads of companies
link |
02:25:10.180
that have increasingly powerful AI systems.
link |
02:25:13.900
I suspect what happened traditionally in the US
link |
02:25:16.860
was that since our decision making
link |
02:25:18.700
is much more decentralized than in an authoritarian state,
link |
02:25:22.980
people are making decisions autonomously
link |
02:25:24.780
at many, many levels in a society.
link |
02:25:26.980
What happened was that we created coherence
link |
02:25:30.260
and cohesion in society by controlling what people thought
link |
02:25:33.940
and what information they had.
link |
02:25:35.740
The media synchronized public opinion
link |
02:25:38.740
and social media have disrupted this.
link |
02:25:40.340
It's not, I think, so much Russian influence or something,
link |
02:25:43.780
it's everybody's influence.
link |
02:25:45.460
It's that a random person can come up
link |
02:25:47.860
with a conspiracy theory and disrupt what people think.
link |
02:25:52.460
And if that conspiracy theory is more compelling
link |
02:25:55.460
or more attractive than the standardized
link |
02:25:58.180
public conspiracy theory that we give people as a default,
link |
02:26:01.860
then it might get more traction, right?
link |
02:26:03.460
You suddenly have the situation that a single individual
link |
02:26:05.940
somewhere on a farm in Texas has more listeners than CNN.
link |
02:26:11.140
Which particular farmer are you referring to in Texas?
link |
02:26:17.380
Probably no.
link |
02:26:19.180
Yes, I had dinner with him a couple of times, okay.
link |
02:26:21.700
Right, it's an interesting situation
link |
02:26:23.420
because you cannot get to be an anchor in CNN
link |
02:26:25.940
if you don't go through a complicated gatekeeping process.
link |
02:26:30.420
And suddenly you have random people
link |
02:26:32.460
without that gatekeeping process,
link |
02:26:34.900
just optimizing for attention.
link |
02:26:36.980
Not necessarily with a lot of responsibility
link |
02:26:39.540
for the longterm effects of projecting these theories
link |
02:26:42.700
into the public.
link |
02:26:43.900
And now there is a push of making social media
link |
02:26:46.980
more like traditional media,
link |
02:26:48.380
which means that the opinion that is being projected
link |
02:26:51.380
in social media is more limited to an acceptable range.
link |
02:26:54.660
With the goal of getting society into safe waters
link |
02:26:58.380
and increasing the stability and cohesion of society again,
link |
02:27:00.820
which I think is a laudable goal.
link |
02:27:03.140
But of course it also is an opportunity
link |
02:27:05.100
to seize the means of indoctrination.
link |
02:27:08.340
And the incentives that people are under when they do this
link |
02:27:11.420
are in such a way that the AI ethics that we would need
link |
02:27:17.140
becomes very often something like AI politics,
link |
02:27:20.620
which is basically partisan and ideological.
link |
02:27:23.380
And this means that whatever one side says,
link |
02:27:26.180
another side is going to be disagreeing with, right?
link |
02:27:28.380
In the same way as when you turn masks or the vaccine
link |
02:27:31.740
into a political issue,
link |
02:27:33.140
if you say that it is politically virtuous
link |
02:27:35.700
to get vaccinated,
link |
02:27:36.660
it will mean that the people that don't like you
link |
02:27:39.260
will not want to get vaccinated, right?
link |
02:27:41.020
And as soon as you have this partisan discourse,
link |
02:27:43.620
it's going to be very hard to make the right decisions
link |
02:27:47.140
because the incentives get to be the wrong ones.
link |
02:27:48.860
AI ethics needs to be super boring.
link |
02:27:51.180
It needs to be done by people who do statistics
link |
02:27:53.300
all the time and have extremely boring,
link |
02:27:56.540
long winded discussions that most people cannot follow
link |
02:27:59.620
because they are too complicated,
link |
02:28:00.900
but that are dead serious.
link |
02:28:02.540
These people need to be better at statistics
link |
02:28:05.820
than the leading machine learning researchers.
link |
02:28:07.940
And at the moment, the AI ethics debate is the one
link |
02:28:12.060
where you don't have any barrier to entry, right?
link |
02:28:14.460
Everybody who has a strong opinion
link |
02:28:16.820
and is able to signal that opinion in the right way
link |
02:28:18.860
can enter it.
link |
02:28:19.700
And to me, that is a very frustrating thing
link |
02:28:24.340
because the field is so crucially important
link |
02:28:26.260
to our future.
link |
02:28:27.100
It's so crucially important,
link |
02:28:28.260
but the only qualification you currently need
link |
02:28:31.860
is to be outraged by the injustice in the world.
link |
02:28:34.740
It's more complicated, right?
link |
02:28:36.220
Everybody seems to be outraged.
link |
02:28:37.860
But let's just say that the incentives
link |
02:28:40.740
are not always the right ones.
link |
02:28:42.020
So basically, I suspect that a lot of people
link |
02:28:45.500
that enter this debate don't have a vision
link |
02:28:48.140
for what society should be looking like
link |
02:28:50.020
in a way that is nonviolent,
link |
02:28:51.380
where we preserve liberal democracy,
link |
02:28:53.580
where we make sure that we all get along
link |
02:28:56.300
and we are around in a few hundred years from now,
link |
02:29:00.420
preferably with a comfortable
link |
02:29:02.180
technological civilization around us.
link |
02:29:04.820
I generally have a very foggy view of that world,
link |
02:29:10.060
but I tend to try to follow,
link |
02:29:12.060
and I think society should in some degree
link |
02:29:13.900
follow the gradient of love,
link |
02:29:16.340
increasing the amount of love in the world.
link |
02:29:18.940
And whenever I see different policies
link |
02:29:21.100
or algorithms or ideas that are not doing so,
link |
02:29:24.460
obviously, those are the ones that I kind of resist.
link |
02:29:27.900
So the thing that terrifies me about this notion
link |
02:29:30.740
is I think that German fascism was driven by love.
link |
02:29:35.660
It was just a very selective love.
link |
02:29:37.820
It was a love that basically...
link |
02:29:39.140
Now you're just manipulating.
link |
02:29:40.460
I mean, that's, you have to be very careful.
link |
02:29:45.460
You're talking to the wrong person in this way about love.
link |
02:29:50.580
So let's talk about what love is.
link |
02:29:52.540
And I think that love is the discovery of shared purpose.
link |
02:29:55.980
It's the recognition of the sacred in the other.
link |
02:29:59.700
And this enables non transactional interactions.
link |
02:30:02.780
But the size of the other that you include
link |
02:30:07.740
needs to be maximized.
link |
02:30:09.740
So it's basically appreciation,
link |
02:30:14.700
like deep appreciation of the world around you fully,
link |
02:30:23.540
including the people that are very different than you,
link |
02:30:25.940
people that disagree with you completely,
link |
02:30:27.700
including people, including living creatures
link |
02:30:30.180
outside of just people, including ideas.
link |
02:30:33.460
And it's like appreciation of the full mess of it.
link |
02:30:36.580
And also it has to do with like empathy,
link |
02:30:40.380
which is coupled with a lack of confidence
link |
02:30:44.020
and certainty of your own rightness.
link |
02:30:47.140
It's like a radical open mindedness to the way forward.
link |
02:30:51.140
I agree with every part of what you said.
link |
02:30:53.460
And now if you scale it up,
link |
02:30:54.980
what you recognize is that love is, in some sense,
link |
02:30:58.540
the service to next level agency,
link |
02:31:01.380
to the highest level agency that you can recognize.
link |
02:31:04.220
It could be for instance, life on earth or beyond that,
link |
02:31:07.860
where you could say intelligent complexity in the universe
link |
02:31:11.620
that you try to maximize in a certain way.
link |
02:31:14.100
But when you think it through,
link |
02:31:15.860
it basically means a certain aesthetic.
link |
02:31:18.980
And there is not one possible aesthetic,
link |
02:31:20.820
there are many possible aesthetics.
link |
02:31:22.660
And once you project an aesthetic into the future,
link |
02:31:25.420
you can see that there are some which defect from it,
link |
02:31:29.260
which are in conflict with it,
link |
02:31:30.900
that are corrupt, that are evil.
link |
02:31:33.860
You and me would probably agree that Hitler was evil
link |
02:31:37.100
because the aesthetic of the world that he wanted
link |
02:31:39.980
is in conflict with the aesthetic of the world
link |
02:31:41.940
that you and me have in mind.
link |
02:31:44.540
And so the things that he destroyed,
link |
02:31:48.500
we want to keep in the world.
link |
02:31:50.660
There are ways to deal with this,
link |
02:31:55.220
I mean, Hitler is an easier case,
link |
02:31:56.660
but perhaps it wasn't so easy in the 30s, right?
link |
02:31:59.180
To understand who is Hitler and who is not.
link |
02:32:02.380
No, it was just there was no consensus
link |
02:32:04.580
that the aesthetics that he had in mind were unacceptable.
link |
02:32:07.500
Yeah, I mean, it's difficult, love is complicated
link |
02:32:12.900
because you can't just be so open minded
link |
02:32:17.300
that you let evil walk into the door,
link |
02:32:20.660
but you can't be so self assured
link |
02:32:24.420
that you can always identify evil perfectly
link |
02:32:29.580
because that's what leads to Nazi Germany.
link |
02:32:32.620
Having a certainty of what is and wasn't evil,
link |
02:32:34.860
like always drawing lines of good versus evil.
link |
02:32:38.660
There seems to be, there has to be a dance
link |
02:32:42.940
between, like, hard stances, standing up
link |
02:32:49.900
against what is wrong.
link |
02:32:51.340
And at the same time, empathy and open mindedness
link |
02:32:55.420
towards not knowing what is right and wrong
link |
02:32:59.580
and like a dance between those.
link |
02:33:01.420
I found that when I watched the Miyazaki movies
link |
02:33:03.620
that there is nobody who captures my spirituality
link |
02:33:06.060
as well as he does.
link |
02:33:07.940
It's very interesting and just vicious, right?
link |
02:33:10.620
There is something going on in his movies
link |
02:33:13.100
that is very interesting.
link |
02:33:14.140
So for instance, Mononoke is
link |
02:33:17.140
not only an answer to Disney's simplistic notion of Mowgli,
link |
02:33:22.380
the jungle boy who was raised by wolves.
link |
02:33:24.980
And as soon as he sees people, he realizes that he's one of them,
link |
02:33:27.780
and the way in which the moral life and nature
link |
02:33:32.780
is simplified and romanticized and turned into kitsch.
link |
02:33:36.020
It's disgusting in the Disney movie.
link |
02:33:37.700
And he answers this, you see:
link |
02:33:39.820
Mowgli is replaced by Mononoke, this wolf girl
link |
02:33:42.260
who was raised by wolves and was fierce and dangerous
link |
02:33:44.860
and who cannot be socialized because she cannot be tamed.
link |
02:33:48.780
She cannot be part of human society.
link |
02:33:50.460
And you see human society,
link |
02:33:51.900
it's something that is very, very complicated.
link |
02:33:53.700
You see people extracting resources and destroying nature.
link |
02:33:57.780
But the purpose is not to be evil,
link |
02:34:00.740
but to be able to have a life that is free from,
link |
02:34:04.740
for instance, oppression and violence
link |
02:34:07.140
and to curb death and disease.
link |
02:34:10.860
And you basically see this conflict
link |
02:34:13.260
which cannot be resolved in a certain way.
link |
02:34:15.180
You see this moment when nature is turned into a garden
link |
02:34:18.340
and it loses most of what it actually is
link |
02:34:20.980
and humans no longer submitting to life and death
link |
02:34:23.420
and nature and to these questions, there is no easy answer.
link |
02:34:26.820
So it just turns it into something that is being observed
link |
02:34:29.980
as a journey that happens.
link |
02:34:31.180
And that happens with a certain degree of inevitability.
link |
02:34:34.940
And the nice thing about all his movies
link |
02:34:37.100
is there's a certain main character
link |
02:34:38.740
and it's the same in all movies.
link |
02:34:41.260
It's this little girl that is basically Heidi.
link |
02:34:45.740
And I suspect that happened because when he did field work
link |
02:34:50.540
for working on the Heidi movies back then,
link |
02:34:53.020
the Heidi animations, before he did his own movies,
link |
02:34:55.700
he traveled to Switzerland and South Eastern Europe
link |
02:35:00.220
and the Adriatic and so on and got an idea
link |
02:35:03.220
about a certain aesthetic and a certain way of life
link |
02:35:05.340
that informed his future thinking.
link |
02:35:08.140
And Heidi has a very interesting relationship
link |
02:35:11.020
to herself and to the world.
link |
02:35:13.300
There's nothing that she takes for herself.
link |
02:35:15.940
She's in a way fearless because she is committed
link |
02:35:18.780
to a service, to a greater whole.
link |
02:35:20.860
Basically, she is completely committed to serving God.
link |
02:35:24.100
And it's not an institutionalized God.
link |
02:35:26.300
It has nothing to do with the Roman Catholic Church
link |
02:35:28.500
or something like this.
link |
02:35:30.420
But in some sense, Heidi is an embodiment
link |
02:35:32.660
of the spirit of European Protestantism.
link |
02:35:35.780
It's this idea of a being that is completely perfect
link |
02:35:38.780
and pure.
link |
02:35:40.180
And it's not a feminist vision
link |
02:35:42.060
because she is not a girl boss or something like this.
link |
02:35:48.620
She is the justification for the men in the audience
link |
02:35:52.460
to protect her, to build a civilization around her
link |
02:35:54.780
that makes her possible.
link |
02:35:56.580
So she is not just the sacrifice of Jesus
link |
02:35:59.260
who is innocent and therefore nailed to the cross.
link |
02:36:02.740
She is not being sacrificed.
link |
02:36:04.060
She is being protected by everybody around her
link |
02:36:07.020
who recognizes that she is sacred.
link |
02:36:08.620
And there are enough around her to see that.
link |
02:36:12.060
So this is a very interesting perspective.
link |
02:36:14.020
There's a certain notion of innocence.
link |
02:36:16.340
And this notion of innocence is not universal.
link |
02:36:18.500
It's not in all cultures.
link |
02:36:20.140
Hitler wasn't innocent.
link |
02:36:21.500
His idea of Germany was not that there is an innocence
link |
02:36:25.620
that is being protected.
link |
02:36:26.900
There was a predator that was going to triumph.
link |
02:36:29.700
And it's also something that is not at the core
link |
02:36:31.420
of every religion.
link |
02:36:32.260
There are many religions which don't care about innocence.
link |
02:36:34.860
They might care about increasing the status of something.
link |
02:36:41.020
And that's a very interesting notion that is quite unique
link |
02:36:44.980
and I'm not claiming it's the optimal one.
link |
02:36:47.620
It's just a particular kind of aesthetic
link |
02:36:49.940
which I think makes Miyazaki
link |
02:36:51.780
into the most relevant Protestant philosopher today.
link |
02:36:55.500
And you're saying in terms of all the ways
link |
02:36:59.780
that a society can operate perhaps the preservation
link |
02:37:02.020
of innocence might be one of the best.
link |
02:37:07.140
No, it's just my aesthetic.
link |
02:37:09.780
So it's a particular way in which I feel
link |
02:37:13.620
that I relate to the world that is natural
link |
02:37:15.420
to my own socialization.
link |
02:37:16.700
And maybe it's not an accident
link |
02:37:18.300
that I have cultural roots in Europe
link |
02:37:22.380
in a particular world.
link |
02:37:23.380
And so maybe it's a natural convergence point
link |
02:37:26.620
and it's not something that you will find
link |
02:37:28.500
in all other times in history.
link |
02:37:30.980
So I'd like to ask you about Solzhenitsyn
link |
02:37:33.980
and our individual role as ants in this very large society.
link |
02:37:39.460
So he says that some version of the line
link |
02:37:42.060
between good and evil runs to the heart of every man.
link |
02:37:44.700
Do you think all of us are capable of good and evil?
link |
02:37:47.340
Like what's our role in this play
link |
02:37:53.500
in this game we're all playing?
link |
02:37:55.580
Are all of us capable of playing any role?
link |
02:37:59.020
Like, is there an ultimate responsibility
link |
02:38:00.980
to, as you mentioned, maintaining innocence
link |
02:38:04.300
or whatever the highest ideal for a society you want
link |
02:38:09.140
are all of us capable of living up to that?
link |
02:38:11.540
And that's our responsibility
link |
02:38:13.340
or is there significant limitations
link |
02:38:15.900
to what we're able to do in terms of good and evil?
link |
02:38:21.340
So there is a certain way if you are not terrible,
link |
02:38:24.060
if you are committed to some kind of civilizational agency,
link |
02:38:29.460
a next level agent that you are serving,
link |
02:38:31.140
some kind of transcendent principle.
link |
02:38:34.260
In the eyes of that transcendental principle,
link |
02:38:36.300
you are able to discern good from evil.
link |
02:38:38.060
Otherwise you cannot,
link |
02:38:39.020
otherwise you have just individual aesthetics.
link |
02:38:41.660
The cat that is torturing a mouse is not evil
link |
02:38:44.060
because the cat does not envision
link |
02:38:46.340
or no part of the world of the cat is envisioning a world
link |
02:38:50.660
where there is no violence and nobody is suffering.
link |
02:38:53.740
If you have an aesthetic where you want
link |
02:38:55.500
to protect innocence,
link |
02:38:56.940
then torturing somebody needlessly is evil,
link |
02:39:00.900
but only then.
link |
02:39:02.740
No, but within, I guess the question is within the aesthetic,
link |
02:39:05.660
like within your sense of what is good and evil,
link |
02:39:10.260
are we still, it seems like we're still able
link |
02:39:14.460
to commit evil.
link |
02:39:17.140
Yes, so basically if you are committing
link |
02:39:19.340
to this next level agent,
link |
02:39:20.820
you are not necessarily this next level agent, right?
link |
02:39:23.580
You are a part of it.
link |
02:39:24.420
You have a relationship to it,
link |
02:39:26.020
like the cell does to its organism, its hyperorganism.
link |
02:39:29.700
And it only exists to the degree
link |
02:39:31.340
that it's being implemented by you and others.
link |
02:39:34.580
And that means that you're not completely fully serving it.
link |
02:39:38.540
You have freedom in what you decide,
link |
02:39:40.340
whether you are acting on your impulses
link |
02:39:42.100
and local incentives and your feral impulses,
link |
02:39:44.500
so to speak, or whether you're committing to it.
link |
02:39:47.140
And what you perceive then is a tension
link |
02:39:49.980
between what you would be doing with respect
link |
02:39:53.100
to the thing that you recognize as the sacred, if you do,
link |
02:39:57.300
and what you're actually doing.
link |
02:39:58.820
And this is the line between good and evil,
link |
02:40:01.460
right where you see, oh, I'm here acting
link |
02:40:03.100
on my local incentives or impulses,
link |
02:40:05.700
and here I'm acting on what I consider to be sacred.
link |
02:40:08.100
And there's a tension between those.
link |
02:40:09.780
And this is the line between good and evil
link |
02:40:11.940
that might run through your heart.
link |
02:40:14.380
And if you don't have that,
link |
02:40:15.700
if you don't have this relationship
link |
02:40:17.180
to a transcendental agent,
link |
02:40:18.660
you could call this relationship
link |
02:40:19.980
to the next level agent soul, right?
link |
02:40:21.700
It's not a thing.
link |
02:40:22.540
It's not an immortal thing that is intrinsically valuable.
link |
02:40:25.780
It's a certain kind of relationship
link |
02:40:27.460
that you project to understand what's happening.
link |
02:40:29.580
Somebody is serving this transcendental sacredness
link |
02:40:31.900
or they're not.
link |
02:40:33.220
If you don't have a soul, you cannot be evil.
link |
02:40:35.860
You're just a complex natural phenomenon.
link |
02:40:39.620
So if you look at life, like starting today
link |
02:40:42.140
or starting tomorrow, when we leave here today,
link |
02:40:46.020
there's a bunch of trajectories
link |
02:40:48.180
that you can take through life, maybe countless.
link |
02:40:53.780
Do you think some of these trajectories,
link |
02:40:57.300
in your own conception of yourself,
link |
02:40:59.700
some of those trajectories are the ideal life,
link |
02:41:04.220
a life that if you were to be the hero of your life story,
link |
02:41:09.620
you would want to be?
link |
02:41:10.860
Like, is there some Joscha Bach you're striving to be?
link |
02:41:14.500
Like, this is the question I ask myself
link |
02:41:15.980
as an individual trying to make a better world
link |
02:41:20.260
in the best way that I could conceive of.
link |
02:41:22.540
What is my responsibility there?
link |
02:41:24.660
And how much am I responsible for the failure to do so?
link |
02:41:28.260
Because I'm lazy and incompetent too often.
link |
02:41:33.260
In my own perception.
link |
02:41:35.740
In my own worldview, I'm not very important.
link |
02:41:38.260
So I don't have a place for myself as a hero
link |
02:41:41.540
in my own world.
link |
02:41:43.460
I'm trying to do the best that I can,
link |
02:41:45.980
which is often not very good.
link |
02:41:48.060
And so it's not important for me to have status
link |
02:41:52.820
or to be seen in a particular way.
link |
02:41:55.500
It's helpful if others can see me
link |
02:41:57.380
or a few people can see me that can be my friends.
link |
02:41:59.780
No, sorry, I want to clarify,
link |
02:42:01.460
the hero I didn't mean status or perception
link |
02:42:05.220
or like some kind of marketing thing,
link |
02:42:09.660
but more in private, in the quiet of your own mind.
link |
02:42:14.060
Is there the kind of man you want to be
link |
02:42:16.940
and would consider it a failure if you don't become that?
link |
02:42:20.460
That's what I meant by hero.
link |
02:42:21.940
Yeah, not really.
link |
02:42:23.300
I don't perceive myself as having such an identity.
link |
02:42:26.140
And it's also sometimes frustrating,
link |
02:42:32.340
but it's basically a lack of having this notion
link |
02:42:37.940
of a father figure that I need to be emulating.
link |
02:42:44.020
It's interesting.
link |
02:42:44.980
I mean, it's the leaf floating down the river.
link |
02:42:48.660
I worry that...
link |
02:42:50.220
Sometimes it's more like being the river.
link |
02:42:59.020
I'm just a fat frog sitting on a leaf
link |
02:43:02.740
on a dirty, muddy lake.
link |
02:43:06.620
I wish I was waiting for a princess to kiss me.
link |
02:43:13.540
Or the other way, I forgot which way it goes.
link |
02:43:15.780
Somebody kisses somebody.
link |
02:43:17.180
Can I ask you, I don't know if you know
link |
02:43:20.420
who Michael Malice is,
link |
02:43:21.700
but in terms of constructing these systems of incentives,
link |
02:43:27.060
it's interesting to ask.
link |
02:43:29.540
I don't think I've talked to you about this before.
link |
02:43:33.060
Malice espouses anarchism.
link |
02:43:35.700
So he sees all government as fundamentally
link |
02:43:40.660
getting in the way or even being destructive
link |
02:43:42.940
to collaborations between human beings thriving.
link |
02:43:49.660
What do you think?
link |
02:43:50.500
What's the role of government in a society that thrives?
link |
02:43:56.900
Is anarchism at all compelling to you as a system?
link |
02:44:00.580
So like not just small government,
link |
02:44:02.980
but no government at all.
link |
02:44:05.940
Yeah, I don't see how this would work.
link |
02:44:09.860
The government is an agent that imposes an offset
link |
02:44:12.700
on your reward function, on your payout metrics.
link |
02:44:15.580
So your behavior becomes compatible with the common good.
link |
02:44:20.860
So the argument there is that you can have collectives
link |
02:44:25.620
like governing organizations, but not government,
link |
02:44:28.620
like where you're born on a particular patch of land
link |
02:44:32.540
and therefore you must follow this rule or else.
link |
02:44:38.420
You're forced by what they call violence
link |
02:44:41.820
because there's an implied violence here.
link |
02:44:44.900
So the key aspect of government is it protects you
link |
02:44:52.020
from the rest of the world with an army and with police.
link |
02:44:56.700
So it has a monopoly on violence.
link |
02:45:00.020
It's the only one that's able to do violence.
link |
02:45:02.060
So there are many forms of government,
link |
02:45:03.540
not all governments do that.
link |
02:45:05.020
But we find that in successful countries,
link |
02:45:09.660
the government has a monopoly on violence.
link |
02:45:12.740
And that means that you cannot get ahead
link |
02:45:15.700
by starting your own army because the government
link |
02:45:17.740
will come down on you and destroy you
link |
02:45:19.340
if you try to do that.
link |
02:45:20.940
And in countries where you can build your own army
link |
02:45:23.260
and get away with it, some people will do it.
link |
02:45:25.700
And these countries are what we call failed countries
link |
02:45:28.580
in a way.
link |
02:45:30.060
And if you don't want to have violence,
link |
02:45:33.500
the point is not to appeal to the moral intentions of people
link |
02:45:36.860
because some people will use strategies
link |
02:45:39.180
if they get ahead with them that fill a particular kind
link |
02:45:41.820
of ecological niche.
link |
02:45:42.740
So you need to destroy that ecological niche.
link |
02:45:45.260
And if an effective government has a monopoly on violence,
link |
02:45:50.060
it can create a world where nobody is able to use violence
link |
02:45:53.460
and get ahead.
link |
02:45:54.820
So you want to use that monopoly on violence,
link |
02:45:57.060
not to exert violence, but to make violence impossible,
link |
02:46:00.100
to raise the cost of violence.
link |
02:46:02.140
So people need to get ahead with nonviolent means.
link |
02:46:06.100
So the idea is that you might be able to achieve that
link |
02:46:09.300
in an anarchist state with companies.
link |
02:46:12.220
So the forces of capitalism create security companies
link |
02:46:18.260
where the one that's most ethically sound rises to the top.
link |
02:46:21.980
Basically, it would be a much better representative
link |
02:46:24.220
of the people because there is a less sort of stickiness
link |
02:46:29.220
to the big military force sticking around
link |
02:46:33.220
even though it has long outlived its purpose.
link |
02:46:36.420
So you have groups of militants that are hopefully
link |
02:46:40.060
efficiently organized because otherwise they're going
link |
02:46:41.940
to lose against the other groups of militants
link |
02:46:44.580
and they are coordinating themselves with the rest
link |
02:46:47.060
of society until they are having a monopoly on violence.
link |
02:46:51.220
How is that different from a government?
link |
02:46:53.940
So it's basically converging to the same thing.
link |
02:46:56.220
So I was trying to argue with Malice,
link |
02:47:00.020
I feel like it always converges towards government at scale,
link |
02:47:03.060
but I think the idea is you can have a lot of collectives
link |
02:47:06.100
that are, you basically never let anything scale too big.
link |
02:47:11.820
So one of the problems with governments is it gets too big
link |
02:47:15.460
in terms of like the size of the group
link |
02:47:19.820
over which it has control.
link |
02:47:23.980
My sense is that would happen anyway.
link |
02:47:27.060
So a successful company like Amazon or Facebook,
link |
02:47:30.660
I mean, it starts forming a monopoly
link |
02:47:33.060
over entire populations,
link |
02:47:36.060
not over just the hundreds of millions,
link |
02:47:37.900
but billions of people.
link |
02:47:39.340
So I don't know, but there is something
link |
02:47:43.540
about the abuses of power the government can have
link |
02:47:46.060
when it has a monopoly on violence, right?
link |
02:47:49.020
And so that's a tension there, but...
link |
02:47:53.020
So the question is how can you set the incentives
link |
02:47:55.180
for government correctly?
link |
02:47:56.420
And this mostly applies at the highest levels of government
link |
02:47:59.940
and because we haven't found a way to set them correctly,
link |
02:48:02.940
we made the highest levels of government relatively weak.
link |
02:48:06.300
And this is, I think, part of the reason
link |
02:48:08.580
why we had difficulty coordinating the pandemic response
link |
02:48:12.260
and China didn't have that much difficulty.
link |
02:48:14.940
And there is, of course, a much higher risk
link |
02:48:17.500
of the abuse of power that exists in China
link |
02:48:19.980
because the power is largely unchecked.
link |
02:48:22.740
And the question is basically what happens
link |
02:48:25.260
in the next generation, for instance.
link |
02:48:26.540
Imagine that we would agree
link |
02:48:28.380
that the current government of China is largely correct
link |
02:48:30.460
and benevolent, and maybe we don't agree on this,
link |
02:48:33.180
but if we did, how can we make sure
link |
02:48:36.100
that this stays like this?
link |
02:48:37.540
And if you don't have checks and balances,
link |
02:48:40.300
division of power, it's hard to achieve.
link |
02:48:42.980
You don't have a solution for that problem.
link |
02:48:45.300
But the abolishment of government
link |
02:48:47.420
basically would remove the control structure.
link |
02:48:49.540
From a cybernetic perspective,
link |
02:48:51.540
there is an optimal point in the system
link |
02:48:54.740
where the regulation should be happening, right?
link |
02:48:56.460
That you can measure the current incentives
link |
02:48:59.780
and the regulator would be properly incentivized
link |
02:49:01.940
to make the right decisions
link |
02:49:03.740
and change the payout metrics of everything below it
link |
02:49:06.340
in such a way that the local prisoner's dilemmas
link |
02:49:08.620
get resolved, right?
link |
02:49:09.900
You cannot resolve the prisoner's dilemma
link |
02:49:12.060
without some kind of eternal control
link |
02:49:14.900
that emulates an infinite game in a way.
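As a toy illustration of this point (not something from the conversation itself; the payoff numbers and penalty value are made-up assumptions), a regulator that offsets each player's payoff function can turn defection from a dominant strategy into a losing one in a one-shot prisoner's dilemma:

```python
# Toy prisoner's dilemma: payoffs are my reward for (my_move, their_move).
# All numbers are illustrative assumptions.
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(their_move, penalty_for_defecting=0):
    """Pick my highest-payoff move after the regulator's offset is applied."""
    def value(my_move):
        v = PAYOFF[(my_move, their_move)]
        if my_move == "defect":
            # the "government" offsetting the reward function: raising the
            # cost of defection rather than exerting force directly
            v -= penalty_for_defecting
        return v
    return max(["cooperate", "defect"], key=value)

# Unregulated, defection dominates no matter what the other player does:
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# With a large enough offset, cooperation becomes the best response:
assert best_response("cooperate", penalty_for_defecting=3) == "cooperate"
assert best_response("defect", penalty_for_defecting=3) == "cooperate"
```

The offset emulates the payoff structure of an indefinitely repeated game, where defection is punished in future rounds, which is one way to read the "infinite game" remark.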
link |
02:49:19.060
Yeah, I mean, there's a sense in which
link |
02:49:22.380
it seems like the reason government,
link |
02:49:24.940
the parts of government that don't work well currently
link |
02:49:27.780
is because there's not good mechanisms
link |
02:49:34.020
through which to interact,
link |
02:49:35.380
for the citizenry to interact with government
link |
02:49:37.300
basically, it hasn't caught up in terms of technology.
link |
02:49:41.500
And I think once you integrate
link |
02:49:43.860
some of the digital revolution
link |
02:49:46.100
of being able to have a lot of access to data,
link |
02:49:48.420
be able to vote on different ideas at a local level,
link |
02:49:52.060
at all levels, at the optimal level
link |
02:49:54.780
like you're saying, that can resolve the prisoner's dilemmas
link |
02:49:58.580
and to integrate AI to help you automate things
link |
02:50:01.380
that don't require the human ingenuity.
link |
02:50:07.460
I feel like that's where government could operate that well
link |
02:50:10.340
and can also break apart the inefficient bureaucracies
link |
02:50:14.620
if needed.
link |
02:50:15.460
There'll be a strong incentive to be efficient and successful.
link |
02:50:20.620
So throughout human history, we see an evolution
link |
02:50:23.020
and evolutionary competition of modes of government
link |
02:50:25.660
and of individual governments within these modes.
link |
02:50:28.180
And every nation state in some sense
link |
02:50:29.900
is some kind of organism that has found different solutions
link |
02:50:33.180
for the problem of government.
link |
02:50:34.980
And you could look at all these different models
link |
02:50:37.500
and the different scales at which it exists
link |
02:50:39.420
as empirical attempts to validate the idea
link |
02:50:43.020
of how to build a better government.
link |
02:50:45.780
And I suspect that the idea of anarchism
link |
02:50:49.180
similar to the idea of communism
link |
02:50:51.900
is the result of being disenchanted
link |
02:50:54.860
with the ugliness of the real existing solutions
link |
02:50:57.340
and the attempt to get to a utopia.
link |
02:51:00.980
And I suspect that communism originally was not a utopia.
link |
02:51:04.540
I think that in the same way as original Christianity,
link |
02:51:07.580
it had a particular kind of vision.
link |
02:51:10.020
And this vision is a society,
link |
02:51:12.540
a mode of organization within the society
link |
02:51:15.300
in which humans can coexist at scale without coercion.
link |
02:51:20.300
In the same way as we do in a healthy family, right?
link |
02:51:23.660
In a good family,
link |
02:51:24.500
you don't terrorize each other into compliance,
link |
02:51:28.060
but you understand what everybody needs
link |
02:51:30.300
and what everybody is able to contribute
link |
02:51:32.220
and what the intended future of the whole thing is.
link |
02:51:35.260
And everybody coordinates their behavior in the right way
link |
02:51:38.380
and informs each other about how to do this.
link |
02:51:40.820
And all the interactions that happen
link |
02:51:42.540
are instrumental to making that happen, right?
link |
02:51:45.820
Could this happen at scale?
link |
02:51:47.260
And I think this is the idea of communism.
link |
02:51:49.180
Communism is opposed to the idea
link |
02:51:51.420
that we need economic terror
link |
02:51:53.340
or other forms of terror to make that happen.
link |
02:51:55.700
But in practice, what happened
link |
02:51:56.860
is that the proto communist countries,
link |
02:51:59.300
the real existing socialism,
link |
02:52:01.140
replaced a part of the economic terror with moral terror,
link |
02:52:04.620
right?
link |
02:52:05.460
So we were told to do the right thing for moral reasons.
link |
02:52:07.540
And of course it didn't really work
link |
02:52:09.180
and the economy eventually collapsed.
link |
02:52:11.620
And the moral terror had actual real cost, right?
link |
02:52:14.540
People were in prison
link |
02:52:15.820
because they were morally noncompliant.
link |
02:52:17.820
And the other thing is that the idea of communism
link |
02:52:22.900
became a utopia.
link |
02:52:24.060
So it basically was projected into the afterlife.
link |
02:52:26.140
We were told in my childhood
link |
02:52:28.660
that communism was a hypothetical society
link |
02:52:31.220
to which we were in a permanent revolution
link |
02:52:33.380
that justified everything
link |
02:52:34.660
that was presently wrong with society morally.
link |
02:52:37.500
But it was something that our grandchildren
link |
02:52:39.540
probably would not ever see
link |
02:52:41.140
because it was too ideal and too far in the future
link |
02:52:43.860
to make it happen right now.
link |
02:52:44.980
And people were just not there yet morally.
link |
02:52:47.300
And the same thing happened with Christianity, right?
link |
02:52:50.300
This notion of heaven was mythologized
link |
02:52:52.820
and projected into an afterlife.
link |
02:52:54.380
And I think this was just the idea of God's kingdom
link |
02:52:56.900
in this world, in which we instantiate
link |
02:52:59.180
the next level transcendental agent in the perfect form.
link |
02:53:01.980
So everything goes smoothly and without violence
link |
02:53:04.660
and without conflict and without this human messiness
link |
02:53:07.980
and this economic messiness and the terror and coercion
link |
02:53:11.340
that existed in the present societies.
link |
02:53:13.820
And the idea that humans can at some point
link |
02:53:16.980
exist at scale in a harmonious way and noncoercively
link |
02:53:20.180
is untested, right?
link |
02:53:21.700
A lot of people tested it
link |
02:53:23.140
but didn't get it to work so far.
link |
02:53:25.340
And the utopia is a world where you get
link |
02:53:27.740
all the good things without any of the bad things.
link |
02:53:30.900
And you are, I think, very susceptible to believing in utopias
link |
02:53:34.500
when you are very young and don't understand
link |
02:53:36.860
that everything has to happen in causal patterns,
link |
02:53:39.580
that there's always feedback loops
link |
02:53:40.940
that ultimately are closed.
link |
02:53:42.620
There's nothing that just happens
link |
02:53:44.020
because it's good or bad.
link |
02:53:45.460
Good or bad don't exist in isolation.
link |
02:53:47.220
They only exist with respect to larger systems.
link |
02:53:50.660
So can you intuit why utopias fail as systems?
link |
02:53:57.780
So like having a utopia that's out there beyond the horizon
link |
02:54:01.620
is it because then,
link |
02:54:04.980
it's not only because it's impossible to achieve utopias
link |
02:54:08.220
but because certain humans,
link |
02:54:11.940
a small number of humans, start to sort of greedily
link |
02:54:20.220
attain power and money and control and influence
link |
02:54:25.540
as they become,
link |
02:54:28.980
as they see the power in using this idea of a utopia
link |
02:54:34.420
for propaganda.
link |
02:54:35.260
It's a bit like saying, why is my garden not perfect?
link |
02:54:37.260
It's because some evil weeds are overgrowing it
link |
02:54:39.780
and they always do, right?
link |
02:54:41.540
But this is not how it works.
link |
02:54:43.220
A good garden is a system that is in balance
link |
02:54:45.420
and requires minimal interactions by the gardener.
link |
02:54:48.620
And so you need to create a system
link |
02:54:51.860
that is designed to self stabilize.
link |
02:54:54.220
And the design of social systems
link |
02:54:55.860
requires not just the implementation
link |
02:54:57.500
of the desired functionality,
link |
02:54:58.780
but the next level design, also in biological systems.
link |
02:55:01.820
You need to create a system that wants to converge
link |
02:55:04.140
to the intended function.
link |
02:55:06.100
And so instead of just creating an institution like the FDA
link |
02:55:09.380
that is performing a particular kind of role in society,
link |
02:55:13.180
you need to make sure that the FDA is actually driven
link |
02:55:15.780
by a system that wants to do this optimally,
link |
02:55:18.180
that is incentivized to do it optimally
link |
02:55:19.860
and then makes the performance that is actually enacted
link |
02:55:23.340
in every generation instrumental to that thing,
link |
02:55:26.220
that actual goal, right?
link |
02:55:27.660
And that is much harder to design and to achieve.
link |
02:55:30.100
So if you design a system where,
link |
02:55:32.460
and listen communism also was quote unquote incentivized
link |
02:55:36.940
to be a feedback loop system that achieves that utopia.
link |
02:55:43.500
It's just, it wasn't working given human nature.
link |
02:55:45.820
The incentives were not correct given human nature.
link |
02:55:47.980
How do you incentivize people
link |
02:55:50.460
when they are getting coal off the ground
link |
02:55:52.340
to work as hard as possible?
link |
02:55:53.900
Because it's a terrible job
link |
02:55:55.540
and it's very bad for your health.
link |
02:55:57.060
And right, how do you do this?
link |
02:55:59.540
And you can give them prizes and medals and status
link |
02:56:03.620
to some degree, right?
link |
02:56:04.580
There's only so much status to give for that.
link |
02:56:06.900
And most people will not fall for this, right?
link |
02:56:09.340
Or you can pay them and you probably have to pay them
link |
02:56:12.940
in an asymmetric way because if you pay everybody the same
link |
02:56:15.700
and you nationalize the coal mines,
link |
02:56:19.100
eventually people will figure out
link |
02:56:20.620
that they can game the system.
link |
02:56:21.940
Yes, so you're describing capitalism.
link |
02:56:25.820
So capitalism is the present solution to the system.
link |
02:56:28.620
And what we also noticed that I think that Marx was correct
link |
02:56:32.140
in saying that capitalism is prone to crisis,
link |
02:56:35.140
that capitalism is a system that in its dynamics
link |
02:56:38.460
is not convergent, but divergent.
link |
02:56:40.780
It's not a stable system.
link |
02:56:42.860
And that eventually it produces an enormous potential
link |
02:56:47.380
for productivity, but it also is systematically
link |
02:56:50.820
misallocating resources.
link |
02:56:52.140
So a lot of people cannot participate
link |
02:56:54.700
in the production and consumption anymore, right?
link |
02:56:57.300
And this is what we observed.
link |
02:56:58.420
We observed that the middle class in the US is tiny.
link |
02:57:01.460
A lot of people think that they're middle class,
link |
02:57:05.500
but if you are still flying economy,
link |
02:57:07.460
you're not middle class, right?
link |
02:57:11.660
Every class is a magnitude smaller than the previous class.
link |
02:57:14.700
And I think about classes really like airline classes.
link |
02:57:23.060
I like class.
link |
02:57:25.700
A lot of people are economy class, business class,
link |
02:57:28.580
and very few are first class and some are budget.
link |
02:57:30.900
I mean, some, I understand.
link |
02:57:32.860
I think there's, yeah, maybe some people,
link |
02:57:36.940
probably I would push back
link |
02:57:38.140
against that definition of the middle class.
link |
02:57:39.660
It does feel like the middle class is pretty large,
link |
02:57:41.460
but yes, there's a discrepancy in terms of wealth.
link |
02:57:45.740
So if you think about in terms of the productivity
link |
02:57:48.620
that our society could have,
link |
02:57:50.900
there is no reason for anybody to fly economy, right?
link |
02:57:53.980
We would be able to let everybody travel in style.
link |
02:57:57.940
Well, but also some people like to be frugal
link |
02:58:00.220
even when they're billionaires, okay?
link |
02:58:01.620
So like that, let's take that into account.
link |
02:58:04.580
I mean, we probably don't need to be traveling lavishly,
link |
02:58:07.260
but you also don't need to be tortured, right?
link |
02:58:09.780
There is a difference between frugal
link |
02:58:11.820
and subjecting yourself to torture.
link |
02:58:14.140
Listen, I love economy.
link |
02:58:15.220
I don't understand why you're comparing
link |
02:58:16.780
flying economy to torture.
link |
02:58:19.420
I don't, although on the flight here,
link |
02:58:22.500
there's two crying babies next to me.
link |
02:58:24.380
So that, but that has nothing to do with economy.
link |
02:58:26.460
It has to do with crying babies.
link |
02:58:28.540
They're very cute though.
link |
02:58:29.380
So they kind of.
link |
02:58:30.220
Yeah, I have two kids
link |
02:58:31.260
and sometimes I have to go back to visit the grandparents.
link |
02:58:35.020
And that means going from the west coast to Germany
link |
02:58:41.300
and that's a long flight.
link |
02:58:42.700
Is it true that, so when you're a father,
link |
02:58:45.300
you grow immune to the crying and all that kind of stuff,
link |
02:58:48.540
like the, because like me just not having kids,
link |
02:58:52.260
it can be other people's kids can be quite annoying
link |
02:58:54.620
when they're crying and screaming
link |
02:58:55.820
and all that kind of stuff.
link |
02:58:57.220
When you have children and you are wired up
link |
02:58:59.540
in the default natural way,
link |
02:59:01.460
you're lucky in this regard, you fall in love with them.
link |
02:59:04.340
And this falling in love with them means
link |
02:59:06.980
that you basically start to see the world through their eyes
link |
02:59:10.180
and you understand that in a given situation,
link |
02:59:12.500
they cannot do anything but express despair.
link |
02:59:17.740
And so it becomes more differentiated.
link |
02:59:19.700
I noticed that for instance,
link |
02:59:21.020
my son is typically acting on a pure experience
link |
02:59:25.940
of what things are like right now
link |
02:59:28.540
and he has to do this right now.
link |
02:59:30.380
And you have this small child that is,
link |
02:59:33.740
when he was a baby and so on,
link |
02:59:35.020
where he was just immediately expressing what he felt.
link |
02:59:37.580
And if you cannot regulate this from the outside,
link |
02:59:39.940
there's no point to be upset about it, right?
link |
02:59:42.260
It's like dealing with weather or something like this.
link |
02:59:45.060
You all have to get through it
link |
02:59:46.620
and it's not easy for him either.
link |
02:59:48.620
But if you also have a daughter,
link |
02:59:51.820
maybe she is planning for that.
link |
02:59:53.300
Maybe she understands that she's sitting in the car
link |
02:59:57.420
behind you and she's screaming at the top of her lungs
link |
02:59:59.860
and you almost get into an accident
link |
03:00:01.820
and you really don't know what to do.
link |
03:00:03.740
What should I have done to make you stop screaming?
link |
03:00:06.380
You could have given me candy.
link |
03:00:10.020
I think that's like a cat versus dog discussion.
link |
03:00:12.140
I love it.
link |
03:00:13.940
Cause you said like a fundamental aspect of that is love
link |
03:00:19.420
that makes it all worth it.
link |
03:00:21.220
What, in this monkey riding an elephant in a dream world,
link |
03:00:26.740
what role does love play in the human condition?
link |
03:00:31.540
I think that love is the facilitator
link |
03:00:33.540
of non transactional interaction.
link |
03:00:37.140
And you are observing your own purposes.
link |
03:00:40.140
Some of these purposes go beyond your ego.
link |
03:00:42.460
They go beyond the particular organism
link |
03:00:45.140
that you are and your local interests.
link |
03:00:46.780
That's what you mean by non transactional.
link |
03:00:48.540
Yes, so basically when you are acting
link |
03:00:50.180
in a transactional way, it means that you are expecting
link |
03:00:52.860
something in return for you
link |
03:00:55.420
from the one that you're interacting with.
link |
03:00:58.060
You are interacting with a random stranger,
link |
03:00:59.860
you buy something from them on eBay,
link |
03:01:01.340
you expect a fair value for the money that you sent them
link |
03:01:03.900
and vice versa.
link |
03:01:05.420
Because you don't know that person,
link |
03:01:06.580
you don't have any kind of relationship to them.
link |
03:01:09.020
But when you know this person a little bit better
link |
03:01:10.660
and you know the situation that they're in,
link |
03:01:12.740
you understand what they try to achieve in their life
link |
03:01:14.940
and you approve because you realize that they're
link |
03:01:17.700
in some sense serving the same human sacredness as you are.
link |
03:01:22.420
And they need a thing that you have,
link |
03:01:23.820
maybe you give it to them as a present.
link |
03:01:26.700
But, I mean, the feeling itself of joy is a kind of benefit,
link |
03:01:32.180
is a kind of transaction, like...
link |
03:01:34.660
Yes, but the joy is not the point.
link |
03:01:36.500
The joy is the signal that you get.
link |
03:01:38.460
It's the reinforcement signal that your brain sends to you
link |
03:01:40.900
because you are acting on the incentives
link |
03:01:43.740
of the agent that you're a part of.
link |
03:01:45.740
We are meant to be part of something larger.
link |
03:01:48.500
This is the way in which we outcompeted other hominins.
link |
03:01:54.100
Take that Neanderthals.
link |
03:01:56.420
Yeah, right.
link |
03:01:57.420
And also other humans.
link |
03:01:59.620
There was a population bottleneck for human society
link |
03:02:03.100
that leads to an extreme lack of genetic diversity
link |
03:02:06.900
among humans.
link |
03:02:07.740
If you look at Bushmen in the Kalahari,
link |
03:02:11.300
basically, tribes that are not that distant
link |
03:02:13.900
from each other have more genetic diversity
link |
03:02:15.860
than exists between Europeans and Chinese.
link |
03:02:19.740
And that's because basically the out of Africa population
link |
03:02:23.460
at some point had a bottleneck
link |
03:02:25.060
of just a few thousand individuals.
link |
03:02:27.740
And what probably happened is not that at any time
link |
03:02:30.980
the number of people shrank below a few hundred thousand.
link |
03:02:34.740
What probably happened is that there was a small group
link |
03:02:37.580
that had a decisive mutation that produced an advantage.
link |
03:02:40.460
And this group multiplied and killed everybody else.
link |
03:02:44.100
And we are descendants of that group.
link |
03:02:46.140
Yeah, I wonder what the peculiar characteristics
link |
03:02:50.780
of that group were.
link |
03:02:52.140
Yeah.
link |
03:02:53.100
I mean, we can never know.
link |
03:02:53.940
Me too, and a lot of people do.
link |
03:02:55.460
We can only listen to the echoes in ourselves,
link |
03:02:58.220
like the ripples that are still within us.
link |
03:03:01.660
So I suspect what eventually made a big difference
link |
03:03:04.420
was the ability to organize at scale,
link |
03:03:07.580
to program each other.
link |
03:03:09.260
With ideas.
link |
03:03:11.380
That we became programmable,
link |
03:03:12.620
that we were willing to work in lockstep,
link |
03:03:14.500
that we went above the tribal level,
link |
03:03:17.420
that we no longer were groups of a few hundred individuals
link |
03:03:20.700
and acted on direct reputation systems transactionally,
link |
03:03:24.460
but that we basically evolved an adaptation
link |
03:03:27.420
to become state building.
link |
03:03:28.980
Yeah.
link |
03:03:31.740
To form collectives outside of the direct collectives.
link |
03:03:35.700
Yes, and that's basically a part of us became committed
link |
03:03:38.580
to serving something outside of what we know.
link |
03:03:41.940
Yeah, then that's kind of what love is.
link |
03:03:44.140
And it's terrifying because it meant
link |
03:03:45.820
that we eradicated the others.
link |
03:03:48.900
Right, it's a force.
link |
03:03:49.820
It's an adaptive force that gets us ahead in evolution,
link |
03:03:52.940
which means we displace something else
link |
03:03:54.540
that doesn't have that.
link |
03:03:56.780
Oh, so we had to murder a lot of people
link |
03:03:58.740
that weren't about love.
link |
03:04:00.380
So love led to destruction.
link |
03:04:01.660
They didn't have the same strong love as we did.
link |
03:04:04.020
Right, that's why I mentioned this thing with fascism.
link |
03:04:07.420
When you see these speeches, "Do you want total war?"
link |
03:04:12.220
And everybody says, yes, right?
link |
03:04:14.180
This is this big, oh my God, we are part of something
link |
03:04:17.620
that is more important than me
link |
03:04:18.660
that gives meaning to my existence.
link |
03:04:22.980
Fair enough.
link |
03:04:27.020
Do you have advice for young people today
link |
03:04:30.980
in high school, in college,
link |
03:04:33.140
that are thinking about what to do with their career,
link |
03:04:37.260
with their life, so that at the end of the whole thing,
link |
03:04:40.420
they can be proud of what they did?
link |
03:04:43.820
Don't cheat.
link |
03:04:45.860
Have integrity, aim for integrity.
link |
03:04:48.540
So what does integrity look like when you're at the river
link |
03:04:50.860
or the leaf or the fat frog in a lake?
link |
03:04:54.580
It basically means that you try to figure out
link |
03:04:57.700
what the thing is that is the most right.
link |
03:05:02.060
And this doesn't mean that you have to look
link |
03:05:04.620
for what other people tell you what's right,
link |
03:05:07.140
but you have to aim for moral autonomy.
link |
03:05:09.740
So things need to be right independently
link |
03:05:12.220
of what other people say.
link |
03:05:14.100
I always felt that when people told me
link |
03:05:17.620
to listen to what others say, like read the room,
link |
03:05:22.940
build your ideas of what's true
link |
03:05:25.060
based on the high status people of your in group,
link |
03:05:27.020
that does not protect me from fascism.
link |
03:05:29.780
The only way to protect yourself from fascism
link |
03:05:31.940
is to decide it's the world that is being built here,
link |
03:05:35.580
the world that I want to be in.
link |
03:05:37.620
And so in some sense, try to make your behavior sustainable,
link |
03:05:41.740
act in such a way that you would feel comfortable
link |
03:05:44.540
on all sides of the transaction.
link |
03:05:46.420
Realize that everybody is you in a different timeline,
link |
03:05:48.900
but is seeing things differently
link |
03:05:51.140
and has reasons to do so.
link |
03:05:53.940
Yeah, I've come to realize this recently,
link |
03:05:58.100
that there is an inner voice
link |
03:05:59.340
that tells you what's right and wrong.
link |
03:06:02.820
And speaking of reading the room,
link |
03:06:06.180
there are times when what integrity looks like
link |
03:06:08.060
is when a lot of people
link |
03:06:10.460
are doing something wrong.
link |
03:06:12.180
And what integrity looks like
link |
03:06:13.740
is not going on Twitter and tweeting about it,
link |
03:06:16.500
but quietly not participating, not doing it.
link |
03:06:20.260
So it's not signaling or all this kind of stuff,
link |
03:06:24.100
but actually living what you think is right.
link |
03:06:28.020
Like living it, not signaling.
link |
03:06:28.860
There's also sometimes this expectation
link |
03:06:30.980
that others are like us.
link |
03:06:32.260
So imagine the possibility
link |
03:06:34.380
that some of the people around you are space aliens
link |
03:06:37.060
that only look human, right?
link |
03:06:39.500
So they don't have the same priors as you do.
link |
03:06:41.620
They don't have the same impulses
link |
03:06:44.100
about what's right and wrong.
link |
03:06:45.180
There's a large diversity in these basic impulses
link |
03:06:48.900
that people can have in a given situation.
link |
03:06:51.820
And now realize that you are a space alien, right?
link |
03:06:54.660
You are not actually human.
link |
03:06:55.900
You think that you are human,
link |
03:06:57.220
but you don't know what it means,
link |
03:06:58.820
like what it's like to be human.
link |
03:07:00.780
You just make it up as you go along like everybody else.
link |
03:07:04.020
And you have to figure that out,
link |
03:07:05.740
what it means that you are a full human being,
link |
03:07:09.620
what it means to be human in the world
link |
03:07:11.180
and how to connect with others on that.
link |
03:07:13.540
And there is also something: don't be afraid
link |
03:07:17.340
in the sense that if you do this, you're not good enough.
link |
03:07:20.980
Because if you are acting on these incentives of integrity,
link |
03:07:23.580
you become trustworthy.
link |
03:07:25.140
That's the way in which you can recognize each other.
link |
03:07:28.420
There is a particular place where you can meet.
link |
03:07:30.700
You can figure out what that place is,
link |
03:07:33.060
where you will give support to people
link |
03:07:35.420
because you realize that they act with integrity
link |
03:07:38.420
and they will also do that.
link |
03:07:40.300
So in some sense, you are safe if you do that.
link |
03:07:43.860
You're not always protected.
link |
03:07:44.940
There are people who will abuse you
link |
03:07:47.100
and who are bad actors in a way
link |
03:07:49.940
that it's hard to imagine before you meet them.
link |
03:07:52.780
But there are also people who will try to protect you.
link |
03:07:57.780
Yeah, that's such a, thank you for saying that.
link |
03:08:00.820
That's such a hopeful message
link |
03:08:03.820
that no matter what happens to you,
link |
03:08:05.380
there'll be a place, there's people you'll meet
link |
03:08:11.740
that also have what you have
link |
03:08:15.620
and you will find happiness there and safety there.
link |
03:08:20.180
Yeah, but it doesn't need to end well.
link |
03:08:21.700
It can also all go wrong.
link |
03:08:23.500
So there's no guarantees in this life.
link |
03:08:26.380
So you can do everything right and you still can fail
link |
03:08:29.380
and you can see horrible things happening to you
link |
03:08:32.500
that traumatize you and mutilate you
link |
03:08:35.060
and you have to be grateful if it doesn't happen.
link |
03:08:40.300
And ultimately be grateful no matter what happens
link |
03:08:42.940
because even just being alive is pretty damn nice.
link |
03:08:46.860
Yeah, even that, you know.
link |
03:08:49.580
The gratefulness in some sense is also just generated
link |
03:08:52.260
by your brain to keep you going; it's all a trick.
link |
03:08:58.900
Speaking of which, Camus said,
link |
03:09:02.900
I see many people die because they judge
link |
03:09:05.540
that life is not worth living.
link |
03:09:08.020
I see others paradoxically getting killed
link |
03:09:10.820
for the ideas or illusions that give them
link |
03:09:12.860
a reason for living.
link |
03:09:15.020
What is called the reason for living
link |
03:09:16.420
is also an excellent reason for dying.
link |
03:09:19.420
I therefore conclude that the meaning of life
link |
03:09:22.020
is the most urgent of questions.
link |
03:09:24.660
So I have to ask, Joscha Bach, what is the meaning of life?
link |
03:09:31.500
It is an urgent question according to Camus.
link |
03:09:35.260
I don't think that there's a single answer to this.
link |
03:09:37.940
Nothing makes sense unless the mind makes it so.
link |
03:09:41.340
So you basically have to project a purpose.
link |
03:09:44.820
And if you zoom out far enough,
link |
03:09:47.380
there's the heat death of the universe
link |
03:09:49.060
and everything is meaningless,
link |
03:09:50.500
everything is just a blip in between.
link |
03:09:52.100
And the question is, do you find meaning
link |
03:09:54.020
in this blip in between?
link |
03:09:55.820
Do you find meaning in observing squirrels?
link |
03:09:59.780
Do you find meaning in raising children
link |
03:10:01.740
and projecting a multigenerational organism
link |
03:10:04.420
into the future?
link |
03:10:05.660
Do you find meaning in projecting an aesthetic
link |
03:10:08.260
of the world that you like to the future
link |
03:10:10.620
and trying to serve that aesthetic?
link |
03:10:12.340
And if you do, then life has that meaning.
link |
03:10:15.300
And if you don't, then it doesn't.
link |
03:10:17.140
I kind of enjoy the idea that you just create
link |
03:10:21.780
the most vibrant, the most weird,
link |
03:10:25.660
the most unique kind of blip you can,
link |
03:10:28.740
given your environment, given your set of skills,
link |
03:10:32.020
just be the most weird set of,
link |
03:10:38.740
like local pocket of complexity you can be.
link |
03:10:41.740
So that like, when people study the universe,
link |
03:10:44.500
they'll pause and be like, oh, that's weird.
link |
03:10:47.340
It looks like a useful strategy,
link |
03:10:50.580
but of course it's still motivated reasoning.
link |
03:10:52.780
You're obviously acting on your incentives here.
link |
03:10:57.780
It's still a story we tell ourselves within a dream
link |
03:11:00.700
that's hardly in touch with the reality.
link |
03:11:03.860
It's definitely a good strategy if you are a podcaster.
link |
03:11:10.180
And a human, which I'm still trying to figure out if I am.
link |
03:11:13.060
It has a mutual relationship somehow.
link |
03:11:15.020
Somehow.
link |
03:11:16.060
Joscha, you're one of the most incredible people I know.
link |
03:11:20.860
I really love talking to you.
link |
03:11:22.380
I love talking to you again,
link |
03:11:23.500
and it's really an honor that you spend
link |
03:11:26.060
your valuable time with me.
link |
03:11:27.100
I hope we get to talk many times
link |
03:11:28.580
through our short and meaningless lives.
link |
03:11:33.580
Or meaningful.
link |
03:11:34.620
Or meaningful.
link |
03:11:35.900
Thank you, Lex.
link |
03:11:36.740
I enjoyed this conversation very much.
link |
03:11:39.020
Thanks for listening to this conversation with Joscha Bach.
link |
03:11:41.700
A thank you to Coinbase, Codecademy, Linode,
link |
03:11:45.900
NetSuite, and ExpressVPN.
link |
03:11:48.500
Check them out in the description to support this podcast.
link |
03:11:52.020
Now, let me leave you with some words from Carl Jung.
link |
03:11:55.780
People will do anything, no matter how absurd,
link |
03:11:59.020
in order to avoid facing their own souls.
link |
03:12:01.780
One does not become enlightened
link |
03:12:03.580
by imagining figures of light,
link |
03:12:05.780
but by making the darkness conscious.
link |
03:12:09.260
Thank you for listening, and hope to see you next time.