
Max Tegmark: AI and Physics | Lex Fridman Podcast #155



link |
00:00:00.000
The following is a conversation with Max Tegmark,
link |
00:00:02.840
his second time on the podcast.
link |
00:00:04.760
In fact, the previous conversation
link |
00:00:07.120
was episode number one of this very podcast.
link |
00:00:10.960
He is a physicist and artificial intelligence researcher
link |
00:00:14.800
at MIT, cofounder of the Future of Life Institute,
link |
00:00:18.840
and author of Life 3.0,
link |
00:00:21.360
Being Human in the Age of Artificial Intelligence.
link |
00:00:24.560
He's also the head of a bunch of other huge,
link |
00:00:27.120
fascinating projects and has written
link |
00:00:29.240
a lot of different things
link |
00:00:30.560
that you should definitely check out.
link |
00:00:32.080
He has been one of the key humans
link |
00:00:34.480
who has been outspoken about long-term existential risks
link |
00:00:37.400
of AI and also its exciting possibilities
link |
00:00:40.500
and solutions to real world problems.
link |
00:00:42.900
Most recently at the intersection of AI and physics,
link |
00:00:46.440
and also in reengineering the algorithms
link |
00:00:50.000
that divide us by controlling the information we see
link |
00:00:53.200
and thereby creating bubbles and all other kinds
link |
00:00:56.160
of complex social phenomena that we see today.
link |
00:00:59.640
In general, he's one of the most passionate
link |
00:01:01.440
and brilliant people I have the fortune of knowing.
link |
00:01:04.340
I hope to talk to him many more times
link |
00:01:06.180
on this podcast in the future.
link |
00:01:08.280
Quick mention of our sponsors,
link |
00:01:10.000
The Jordan Harbinger Show,
link |
00:01:12.160
Four Sigmatic Mushroom Coffee,
link |
00:01:14.360
BetterHelp Online Therapy, and ExpressVPN.
link |
00:01:18.480
So the choices, wisdom, caffeine, sanity, or privacy.
link |
00:01:23.560
Choose wisely, my friends, and if you wish,
link |
00:01:25.880
click the sponsor links below to get a discount
link |
00:01:28.360
and to support this podcast.
link |
00:01:30.560
As a side note, let me say that many of the researchers
link |
00:01:33.900
in the machine learning
link |
00:01:35.400
and artificial intelligence communities
link |
00:01:37.760
do not spend much time thinking deeply
link |
00:01:40.400
about existential risks of AI.
link |
00:01:42.720
Because our current algorithms are seen as useful but dumb,
link |
00:01:46.160
it's difficult to imagine how they may become destructive
link |
00:01:49.240
to the fabric of human civilization
link |
00:01:51.240
in the foreseeable future.
link |
00:01:53.040
I understand this mindset, but it's very troublesome.
link |
00:01:56.120
To me, this is both a dangerous and uninspiring perspective,
link |
00:02:00.480
reminiscent of a lobster sitting in a pot of lukewarm water
link |
00:02:03.980
that a minute ago was cold.
link |
00:02:06.160
I feel a kinship with this lobster.
link |
00:02:08.640
I believe that already the algorithms
link |
00:02:10.560
that drive our interaction on social media
link |
00:02:12.960
have an intelligence and power
link |
00:02:14.980
that far outstrip the intelligence and power
link |
00:02:17.360
of any one human being.
link |
00:02:19.220
Now really is the time to think about this,
link |
00:02:21.640
to define the trajectory of the interplay
link |
00:02:24.140
of technology and human beings in our society.
link |
00:02:26.940
I think that the future of human civilization
link |
00:02:29.680
very well may be at stake over this very question
link |
00:02:32.820
of the role of artificial intelligence in our society.
link |
00:02:36.240
If you enjoy this thing, subscribe on YouTube,
link |
00:02:38.160
review it on Apple Podcasts, follow on Spotify,
link |
00:02:40.960
support on Patreon, or connect with me on Twitter
link |
00:02:43.840
at Lex Fridman.
link |
00:02:45.260
And now, here's my conversation with Max Tegmark.
link |
00:02:49.880
So people might not know this,
link |
00:02:51.440
but you were actually episode number one of this podcast
link |
00:02:55.280
just a couple of years ago, and now we're back.
link |
00:02:59.280
And it so happens that a lot of exciting things happened
link |
00:03:02.280
in both physics and artificial intelligence,
link |
00:03:05.600
both fields that you're super passionate about.
link |
00:03:08.480
Can we try to catch up to some of the exciting things
link |
00:03:11.640
happening in artificial intelligence,
link |
00:03:14.080
especially in the context of the way it's cracking
link |
00:03:17.340
open the different problems of the sciences?
link |
00:03:20.020
Yeah, I'd love to, especially now as we start 2021 here,
link |
00:03:24.520
it's a really fun time to think about
link |
00:03:26.280
what were the biggest breakthroughs in AI,
link |
00:03:29.560
not the ones necessarily that media wrote about,
link |
00:03:31.800
but that really matter, and what does that mean
link |
00:03:35.200
for our ability to do better science?
link |
00:03:37.440
What does it mean for our ability
link |
00:03:39.920
to help people around the world?
link |
00:03:43.160
And what does it mean for new problems
link |
00:03:46.440
that they could cause if we're not smart enough
link |
00:03:48.440
to avoid them, so what do we learn basically from this?
link |
00:03:51.880
Yes, absolutely.
link |
00:03:52.720
So one of the amazing things you're a part of
link |
00:03:54.960
is the AI Institute for Artificial Intelligence
link |
00:03:57.680
and Fundamental Interactions.
link |
00:04:00.160
What's up with this institute?
link |
00:04:02.280
What are you working on?
link |
00:04:03.680
What are you thinking about?
link |
00:04:05.000
The idea is something I'm very on fire with,
link |
00:04:09.080
which is basically AI meets physics.
link |
00:04:11.920
And it's been almost five years now
link |
00:04:15.360
since I shifted my own MIT research
link |
00:04:18.360
from physics to machine learning.
link |
00:04:20.680
And in the beginning, I noticed that a lot of my colleagues,
link |
00:04:22.880
even though they were polite about it,
link |
00:04:24.280
were like kind of, what is Max doing?
link |
00:04:27.760
What is this weird stuff?
link |
00:04:29.040
He's lost his mind.
link |
00:04:30.880
But then gradually, I, together with some colleagues,
link |
00:04:35.080
were able to persuade more and more of the other professors
link |
00:04:40.080
in our physics department to get interested in this.
link |
00:04:42.520
And now we've got this amazing NSF Center,
link |
00:04:46.320
so 20 million bucks for the next five years, MIT,
link |
00:04:50.000
and a bunch of neighboring universities here also.
link |
00:04:53.200
And I noticed now those colleagues
link |
00:04:55.280
who were looking at me funny have stopped
link |
00:04:57.040
asking what the point is of this,
link |
00:05:00.320
because it's becoming more clear.
link |
00:05:02.400
And I really believe that, of course,
link |
00:05:05.560
AI can help physics a lot to do better physics.
link |
00:05:09.440
But physics can also help AI a lot,
link |
00:05:13.160
both by building better hardware.
link |
00:05:16.440
My colleague, Marin Soljacic, for example,
link |
00:05:18.840
is working on an optical chip for much faster machine
link |
00:05:23.000
learning, where the computation is done
link |
00:05:25.360
not by moving electrons around, but by moving photons around,
link |
00:05:30.240
dramatically less energy use, faster, better.
link |
00:05:34.240
We can also help AI a lot, I think,
link |
00:05:37.560
by having a different set of tools
link |
00:05:42.840
and a different, maybe more audacious attitude.
link |
00:05:46.440
AI has, to a significant extent, been an engineering discipline
link |
00:05:51.560
where you're just trying to make things that work
link |
00:05:54.240
and being more interested in maybe selling them
link |
00:05:56.560
than in figuring out exactly how they work
link |
00:06:00.280
and proving theorems about that they will always work.
link |
00:06:03.680
Contrast that with physics.
link |
00:06:05.240
When Elon Musk sends a rocket to the International Space
link |
00:06:08.920
Station, they didn't just train with machine learning.
link |
00:06:12.080
Oh, let's fire it a little bit more to the left,
link |
00:06:14.080
a bit more to the right.
link |
00:06:14.920
Oh, that also missed.
link |
00:06:15.800
Let's try here.
link |
00:06:16.920
No, we figured out Newton's laws of gravitation and other things
link |
00:06:23.400
and got a really deep fundamental understanding.
link |
00:06:26.800
And that's what gives us such confidence in rockets.
link |
00:06:30.840
And my vision is that in the future,
link |
00:06:36.040
all machine learning systems that actually have impact
link |
00:06:38.840
on people's lives will be understood
link |
00:06:40.960
at a really, really deep level.
link |
00:06:43.120
So we trust them, not because some sales rep told us to,
link |
00:06:46.960
but because they've earned our trust.
link |
00:06:50.360
And really safety critical things
link |
00:06:51.800
even prove that they will always do what we expect them to do.
link |
00:06:55.600
That's very much the physics mindset.
link |
00:06:57.040
So it's interesting, if you look at big breakthroughs
link |
00:07:00.160
that have happened in machine learning this year,
link |
00:07:03.680
from dancing robots, it's pretty fantastic.
link |
00:07:08.200
Not just because it's cool, but if you just
link |
00:07:10.280
think about not that many years ago,
link |
00:07:12.880
this YouTube video at this DARPA challenge with the MIT robot
link |
00:07:16.680
comes out of the car and face plants.
link |
00:07:20.680
How far we've come in just a few years.
link |
00:07:23.840
Similarly, AlphaFold 2, crushing the protein folding
link |
00:07:30.360
problem.
link |
00:07:31.160
We can talk more about implications
link |
00:07:33.080
for medical research and stuff.
link |
00:07:34.400
But hey, that's huge progress.
link |
00:07:39.240
You can look at GPT-3 that can spout off
link |
00:07:44.120
English text, which sometimes really, really blows you away.
link |
00:07:48.840
You can look at DeepMind's MuZero,
link |
00:07:52.920
which doesn't just kick our butt in Go and Chess and Shogi,
link |
00:07:57.920
but also in all these Atari games.
link |
00:07:59.760
And you don't even have to teach it the rules now.
link |
00:08:02.960
What all of those have in common is, besides being powerful,
link |
00:08:06.920
is we don't fully understand how they work.
link |
00:08:10.520
And that's fine if it's just some dancing robots.
link |
00:08:13.160
And the worst thing that can happen is they face plant.
link |
00:08:16.440
Or if they're playing Go, and the worst thing that can happen
link |
00:08:19.120
is that they make a bad move and lose the game.
link |
00:08:22.240
It's less fine if that's what's controlling
link |
00:08:25.400
your self driving car or your nuclear power plant.
link |
00:08:29.120
And we've seen already that even though Hollywood
link |
00:08:33.600
had all these movies where they try
link |
00:08:35.040
to make us worry about the wrong things,
link |
00:08:37.000
like machines turning evil, the actual bad things that
link |
00:08:41.080
have happened with automation have not
link |
00:08:43.480
been machines turning evil.
link |
00:08:45.440
They've been caused by overtrust in things
link |
00:08:48.840
we didn't understand as well as we thought we did.
link |
00:08:51.440
Even very simple automated systems
link |
00:08:54.320
like what Boeing put into the 737 MAX killed a lot of people.
link |
00:09:00.440
Was it that that little simple system was evil?
link |
00:09:02.960
Of course not.
link |
00:09:03.920
But we didn't understand it as well as we should have.
link |
00:09:07.400
And we trusted without understanding.
link |
00:09:10.640
Exactly.
link |
00:09:11.440
That's the overtrust.
link |
00:09:12.440
We didn't even understand that we didn't understand.
link |
00:09:15.720
The humility is really at the core of being a scientist.
link |
00:09:19.880
I think step one, if you want to be a scientist,
link |
00:09:21.880
is don't ever fool yourself into thinking you understand things
link |
00:09:25.000
when you actually don't.
link |
00:09:27.080
That's probably good advice for humans in general.
link |
00:09:29.480
I think humility in general can do us good.
link |
00:09:31.320
But in science, it's so spectacular.
link |
00:09:33.240
Why did we have the wrong theory of gravity
link |
00:09:35.880
ever from Aristotle onward until Galileo's time?
link |
00:09:40.520
Why would we believe something so dumb as that if I throw
link |
00:09:43.680
this water bottle, it's going to go up with constant speed
link |
00:09:47.280
until it realizes that its natural motion is down?
link |
00:09:49.680
It changes its mind.
link |
00:09:51.040
Because people just kind of assumed Aristotle was right.
link |
00:09:55.320
He's an authority.
link |
00:09:56.120
We understand that.
link |
00:09:57.720
Why did we believe things like that the sun is
link |
00:09:59.920
going around the Earth?
link |
00:10:01.880
Why did we believe that time flows
link |
00:10:04.000
at the same rate for everyone until Einstein?
link |
00:10:06.440
Same exact mistake over and over again.
link |
00:10:08.560
We just weren't humble enough to acknowledge that we actually
link |
00:10:12.320
didn't know for sure.
link |
00:10:13.920
We assumed we knew.
link |
00:10:15.720
So we didn't discover the truth because we
link |
00:10:17.760
assumed there was nothing there to be discovered, right?
link |
00:10:20.560
There was something to be discovered about the 737 Max.
link |
00:10:24.400
And if you had been a bit more suspicious
link |
00:10:26.480
and tested it better, we would have found it.
link |
00:10:28.680
And it's the same thing with most harm
link |
00:10:30.600
that's been done by automation so far, I would say.
link |
00:10:33.760
So I don't know if you've heard of a company called
link |
00:10:35.720
Knight Capital?
link |
00:10:38.000
So good.
link |
00:10:38.760
That means you didn't invest in them earlier.
link |
00:10:42.080
They deployed this automated trading system,
link |
00:10:45.560
all nice and shiny.
link |
00:10:47.000
They didn't understand it as well as they thought.
link |
00:10:49.480
And it went about losing $10 million
link |
00:10:51.320
per minute for 44 minutes straight
link |
00:10:55.520
until someone presumably was like, oh, no, shut this off.
link |
00:10:59.520
Was it evil?
link |
00:11:00.480
No.
link |
00:11:01.040
It was, again, misplaced trust, something they didn't fully
link |
00:11:04.400
understand, right?
link |
00:11:05.240
And there have been so many, even when people
link |
00:11:09.040
have been killed by robots, which is quite rare still,
link |
00:11:12.640
but in factory accidents, it's in every single case
link |
00:11:15.680
been not malice, just that the robot didn't understand
link |
00:11:19.080
that a human is different from an auto part or whatever.
link |
00:11:24.400
So this is why I think there's so much opportunity
link |
00:11:28.000
for a physics approach, where you just aim for a higher
link |
00:11:32.040
level of understanding.
link |
00:11:33.600
And if you look at all these systems
link |
00:11:36.200
that we talked about from reinforcement learning
link |
00:11:40.680
systems and dancing robots to all these neural networks
link |
00:11:44.240
that power GPT-3 and Go playing software and stuff,
link |
00:11:49.600
they're all basically black boxes,
link |
00:11:53.480
not so different from if you teach a human something,
link |
00:11:55.920
you have no idea how their brain works, right?
link |
00:11:58.120
Except the human brain, at least,
link |
00:11:59.960
has been error corrected during many, many centuries
link |
00:12:03.800
of evolution in a way that some of these systems have not,
link |
00:12:06.560
right?
link |
00:12:07.560
And my MIT research is entirely focused
link |
00:12:10.640
on demystifying this black box, intelligible intelligence
link |
00:12:14.440
is my slogan.
link |
00:12:15.960
That's a good line, intelligible intelligence.
link |
00:12:18.440
Yeah, that we shouldn't settle for something
link |
00:12:20.360
that seems intelligent, but it should
link |
00:12:22.160
be intelligible so that we actually trust it
link |
00:12:24.280
because we understand it, right?
link |
00:12:26.640
Like, again, Elon trusts his rockets
link |
00:12:28.880
because he understands Newton's laws and thrust
link |
00:12:31.600
and how everything works.
link |
00:12:33.800
And can I tell you why I'm optimistic about this?
link |
00:12:36.880
Yes.
link |
00:12:37.520
I think we've made a bit of a mistake
link |
00:12:41.280
where some people still think that somehow we're never going
link |
00:12:44.880
to understand neural networks.
link |
00:12:47.320
We're just going to have to learn to live with this.
link |
00:12:49.520
It's this very powerful black box.
link |
00:12:52.240
Basically, for those who haven't spent time
link |
00:12:55.840
building their own, it's super simple what happens inside.
link |
00:12:59.000
You send in a long list of numbers,
link |
00:13:01.280
and then you do a bunch of operations on them,
link |
00:13:04.880
multiply by matrices, et cetera, et cetera,
link |
00:13:06.880
and some other numbers come out that's output of it.
link |
00:13:09.840
And then there are a bunch of knobs you can tune.
link |
00:13:13.520
And when you change them, it affects the computation,
link |
00:13:16.680
the input output relation.
link |
00:13:18.080
And then you just give the computer
link |
00:13:19.560
some definition of good, and it keeps optimizing these knobs
link |
00:13:22.680
until it performs as good as possible.
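A minimal sketch of what's being described here, in toy NumPy code rather than anything from the episode: a list of numbers goes in, gets multiplied by matrices (the "knobs"), numbers come out, and the knobs keep getting nudged against some definition of good.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(8, 2)), rng.normal(size=(1, 8))   # the "knobs"

    def network(x, W1, W2):
        return W2 @ np.tanh(W1 @ x)               # multiply by matrices, squash, multiply again

    def loss(W1, W2, xs, ys):                     # the "definition of good" (lower is better)
        return np.mean([(network(x, W1, W2) - y) ** 2 for x, y in zip(xs, ys)])

    xs = [rng.normal(size=2) for _ in range(100)] # toy task: learn y = x0 * x1
    ys = [x[0] * x[1] for x in xs]

    for step in range(1000):                      # keep optimizing the knobs
        dW1 = rng.normal(size=W1.shape) * 0.01
        dW2 = rng.normal(size=W2.shape) * 0.01
        if loss(W1 + dW1, W2 + dW2, xs, ys) < loss(W1, W2, xs, ys):
            W1, W2 = W1 + dW1, W2 + dW2           # keep a tweak only if it made things better

Here the tuning is done by random small tweaks, the crudest version of "change the knobs and see if it got better"; real training uses gradients, which is where the differentiability discussed a bit later in the conversation comes in.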
link |
00:13:24.760
And often, you go like, wow, that's really good.
link |
00:13:27.160
This robot can dance, or this machine
link |
00:13:29.480
is beating me at chess now.
link |
00:13:31.960
And in the end, you have something
link |
00:13:33.400
which, even though you can look inside it,
link |
00:13:35.240
you have very little idea of how it works.
link |
00:13:38.680
You can print out tables of all the millions of parameters
link |
00:13:42.040
in there.
link |
00:13:43.240
Is it crystal clear now how it's working?
link |
00:13:45.000
No, of course not.
link |
00:13:46.840
Many of my colleagues seem willing to settle for that.
link |
00:13:49.080
And I'm like, no, that's like the halfway point.
link |
00:13:54.360
Some have even gone as far as sort of guessing
link |
00:13:57.560
that the inscrutability of this is
link |
00:14:00.800
where some of the power comes from,
link |
00:14:02.760
and some sort of mysticism.
link |
00:14:05.120
I think that's total nonsense.
link |
00:14:06.840
I think the real power of neural networks
link |
00:14:10.240
comes not from inscrutability, but from differentiability.
link |
00:14:15.040
And what I mean by that is simply
link |
00:14:17.640
that the output changes only smoothly if you tweak your knobs.
link |
00:14:23.880
And then you can use all these powerful methods
link |
00:14:26.640
we have for optimization in science.
link |
00:14:28.320
We can just tweak them a little bit and see,
link |
00:14:30.160
did that get better or worse?
link |
00:14:31.680
That's the fundamental idea of machine learning,
link |
00:14:33.920
that the machine itself can keep optimizing
link |
00:14:36.080
until it gets better.
link |
00:14:37.240
Suppose you wrote this algorithm instead in Python
link |
00:14:41.920
or some other programming language,
link |
00:14:43.720
and then what the knobs did was they just changed
link |
00:14:46.280
random letters in your code.
link |
00:14:49.920
Now it would just epically fail.
link |
00:14:51.440
You change one thing, and instead of saying print,
link |
00:14:53.560
it says, synth, syntax error.
link |
00:14:56.840
You don't even know, was that for the better
link |
00:14:58.720
or for the worse, right?
link |
00:14:59.920
This, to me, is what I believe is
link |
00:15:02.720
the fundamental power of neural networks.
link |
00:15:05.240
And just to clarify, the changing
link |
00:15:06.640
of the different letters in a program
link |
00:15:08.400
would not be a differentiable process.
link |
00:15:10.600
It would make it an invalid program, typically.
link |
00:15:13.760
And then you wouldn't even know if you changed more letters
link |
00:15:16.800
if it would make it work again, right?
link |
00:15:18.560
So that's the magic of neural networks, the inscrutability.
link |
00:15:23.360
The differentiability, that every setting of the parameters
link |
00:15:26.600
is a program, and you can tell is it better or worse, right?
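A tiny illustration of that contrast (a toy example, not from the conversation): nudging a network parameter changes the output by a small, measurable amount, while flipping one character of Python source usually just yields an invalid program.

    import numpy as np

    w = np.array([0.5, -1.2])                      # two "knobs"
    x = np.array([1.0, 2.0])
    out = lambda w: np.tanh(w @ x)
    print(out(w), out(w + np.array([1e-3, 0.0])))  # tiny tweak -> tiny, smooth change in output

    src = "print('hello world')"
    mutated = src.replace(")", "(")                # change one character of the source
    try:
        compile(mutated, "<mutated>", "exec")
    except SyntaxError:
        print("not better, not worse -- just a syntax error")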
link |
00:15:29.040
And so.
link |
00:15:31.040
So you don't like the poetry of the mystery of neural networks
link |
00:15:33.680
as the source of its power?
link |
00:15:35.120
I generally like poetry, but.
link |
00:15:37.880
Not in this case.
link |
00:15:39.200
It's so misleading.
link |
00:15:40.440
And above all, it shortchanges us.
link |
00:15:42.880
It makes us underestimate the good things
link |
00:15:46.440
we can accomplish.
link |
00:15:47.880
So what we've been doing in my group
link |
00:15:49.400
is basically step one, train the mysterious neural network
link |
00:15:53.000
to do something well.
link |
00:15:54.920
And then step two, do some additional AI techniques
link |
00:15:59.560
to see if we can now transform this black box into something
link |
00:16:03.280
equally intelligent that you can actually understand.
link |
00:16:07.120
So for example, I'll give you one example, this AI Feynman
link |
00:16:09.800
project that we just published, right?
link |
00:16:11.560
So we took the 100 most famous or complicated equations
link |
00:16:18.080
from one of my favorite physics textbooks,
link |
00:16:20.880
in fact, the one that got me into physics
link |
00:16:22.560
in the first place, the Feynman lectures on physics.
link |
00:16:25.760
And so you have a formula.
link |
00:16:28.520
Maybe it has what goes into the formula
link |
00:16:31.680
as six different variables, and then what comes out as one.
link |
00:16:35.960
So then you can make a giant Excel spreadsheet
link |
00:16:38.000
with seven columns.
link |
00:16:39.600
You put in just random numbers for the six columns
link |
00:16:41.680
for those six input variables, and then you
link |
00:16:43.880
calculate with a formula the seventh column, the output.
link |
00:16:46.880
So maybe it's like the force equals in the last column
link |
00:16:50.440
some function of the other.
link |
00:16:51.720
And now the task is, OK, if I don't tell you
link |
00:16:53.840
what the formula was, can you figure that out
link |
00:16:57.320
from looking at my spreadsheet I gave you?
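The setup he's describing, sketched in NumPy (the hidden formula here is ordinary Newtonian gravity, chosen only as an example): six random input columns, a seventh computed from them, and the table handed over with the formula withheld.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    m1, m2, x1, y1, x2, y2 = rng.uniform(1.0, 10.0, size=(6, n))  # six input columns

    G = 6.674e-11
    F = G * m1 * m2 / ((x1 - x2) ** 2 + (y1 - y2) ** 2)           # hidden formula -> seventh column

    table = np.column_stack([m1, m2, x1, y1, x2, y2, F])          # the seven-column "spreadsheet"
    np.savetxt("mystery_data.txt", table)
    # The task: given only mystery_data.txt, recover the symbolic formula for column 7.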
link |
00:17:00.080
This problem is called symbolic regression.
link |
00:17:04.400
If I tell you that the formula is
link |
00:17:05.800
what we call a linear formula, so it's just
link |
00:17:08.160
that the output is a sum of all the inputs times
link |
00:17:14.760
some constants, that's the famous easy problem
link |
00:17:17.440
we can solve.
link |
00:17:18.680
We do it all the time in science and engineering.
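For that easy linear case, ordinary least squares already does the job; a generic sketch (not the AI Feynman code):

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(500, 6))                       # six input columns
    true_w = np.array([2.0, -1.0, 0.5, 0.0, 3.0, 1.5])
    y = X @ true_w + 4.0                                # hidden linear formula with a constant offset

    A = np.column_stack([X, np.ones(len(X))])           # extra column of ones for the constant
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(np.round(coeffs, 3))                          # recovers ~[2, -1, 0.5, 0, 3, 1.5, 4]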
link |
00:17:21.360
But the general one, if it's more complicated functions
link |
00:17:24.480
with logarithms or cosines or other math,
link |
00:17:27.920
it's a very, very hard one and probably impossible
link |
00:17:30.560
to do fast in general, just because the number of formulas
link |
00:17:34.560
with n symbols just grows exponentially,
link |
00:17:37.360
just like the number of passwords
link |
00:17:38.760
you can make grow dramatically with length.
link |
00:17:43.320
But we had this idea that if you first
link |
00:17:46.160
have a neural network that can actually approximate
link |
00:17:48.480
the formula, you just trained it,
link |
00:17:49.880
even if you don't understand how it works,
link |
00:17:51.960
that can be the first step towards actually understanding
link |
00:17:56.560
how it works.
link |
00:17:58.280
So that's what we do first.
link |
00:18:00.600
And then we study that neural network now
link |
00:18:03.240
and put in all sorts of other data
link |
00:18:04.880
that wasn't in the original training data
link |
00:18:06.720
and use that to discover simplifying
link |
00:18:09.400
properties of the formula.
link |
00:18:11.460
And that lets us break it apart, often
link |
00:18:13.160
into many simpler pieces in a kind of divide
link |
00:18:15.560
and conquer approach.
link |
00:18:17.480
So we were able to solve all of those 100 formulas,
link |
00:18:20.120
discover them automatically, plus a whole bunch
link |
00:18:22.160
of other ones.
link |
00:18:22.720
And it's actually kind of humbling
link |
00:18:26.320
to see that this code, which anyone who wants, who
link |
00:18:29.480
is listening to this, can type pip install aifeynman
link |
00:18:33.200
on the computer and run it.
link |
00:18:34.560
It can actually do what Johannes Kepler spent four years doing
link |
00:18:38.360
when he stared at Mars data until he was like,
link |
00:18:40.800
finally, Eureka, this is an ellipse.
link |
00:18:44.520
This will do it automatically for you in one hour.
link |
00:18:46.960
Or Max Planck, he was looking at how much radiation comes out
link |
00:18:51.600
from different wavelengths from a hot object
link |
00:18:54.160
and discovered the famous blackbody formula.
link |
00:18:57.400
This discovers it automatically.
link |
00:19:00.400
I'm actually excited about seeing
link |
00:19:05.120
if we can discover not just old formulas again,
link |
00:19:08.640
but new formulas that no one has seen before.
link |
00:19:12.000
I do like this process of using kind of a neural network
link |
00:19:14.680
to find some basic insights and then dissecting
link |
00:19:18.440
the neural network to then gain the final.
link |
00:19:21.680
So in that way, you're forcing the explainability issue,
link |
00:19:30.680
really trying to analyze the neural network for the things
link |
00:19:34.880
it knows in order to come up with the final beautiful,
link |
00:19:38.360
simple theory underlying the initial system
link |
00:19:42.240
that you were looking at.
link |
00:19:43.080
I love that.
link |
00:19:44.280
And the reason I'm so optimistic that it
link |
00:19:47.440
can be generalized to so much more
link |
00:19:49.040
is because that's exactly what we do as human scientists.
link |
00:19:53.480
Think of Galileo, whom we mentioned, right?
link |
00:19:55.680
I bet when he was a little kid, if his dad threw him an apple,
link |
00:19:58.760
he would catch it.
link |
00:20:01.080
Why?
link |
00:20:01.560
Because he had a neural network in his brain
link |
00:20:04.480
that he had trained to predict the parabolic orbit of apples
link |
00:20:07.960
that are thrown under gravity.
link |
00:20:09.960
If you throw a tennis ball to a dog,
link |
00:20:12.000
it also has this same ability of deep learning
link |
00:20:15.360
to figure out how the ball is going to move and catch it.
link |
00:20:18.160
But Galileo went one step further when he got older.
link |
00:20:21.960
He went back and was like, wait a minute.
link |
00:20:26.040
I can write down a formula for this.
link |
00:20:27.880
Y equals x squared, a parabola.
link |
00:20:31.560
And he helped revolutionize physics as we know it, right?
link |
00:20:36.520
So there was a basic neural network
link |
00:20:38.200
in there from childhood that captured the experiences
link |
00:20:43.360
of observing different kinds of trajectories.
link |
00:20:46.440
And then he was able to go back in
link |
00:20:48.240
with another extra little neural network
link |
00:20:51.000
and analyze all those experiences and be like,
link |
00:20:53.480
wait a minute.
link |
00:20:54.600
There's a deeper rule here.
link |
00:20:56.240
Exactly.
link |
00:20:56.960
He was able to distill out in symbolic form
link |
00:21:00.720
what that complicated black box neural network was doing.
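A toy version of that distillation step (illustrative numbers, not Galileo's): sample a thrown object's trajectory, then extract the simple symbolic rule behind the samples, a parabola.

    import numpy as np

    g, v0 = 9.8, 20.0                                   # assume a 20 m/s throw at 45 degrees
    t = np.linspace(0.0, 2.8, 50)
    x = v0 * np.cos(np.pi / 4) * t                      # "observed" trajectory points,
    y = v0 * np.sin(np.pi / 4) * t - 0.5 * g * t ** 2   # the kind of thing the inner network knows

    a, b, c = np.polyfit(x, y, deg=2)                   # distill them into a degree-2 polynomial
    print(f"y = {a:.4f} x^2 + {b:.2f} x + {c:.2f}")     # a parabola, now in symbolic form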
link |
00:21:03.960
Not only did the formula he got ultimately
link |
00:21:07.320
become more accurate, and similarly, this
link |
00:21:09.840
is how Newton got Newton's laws, which
link |
00:21:12.000
is why Elon can send rockets to the space station now, right?
link |
00:21:15.600
So it's not only more accurate, but it's also simpler,
link |
00:21:19.480
much simpler.
link |
00:21:20.120
And it's so simple that we can actually describe it
link |
00:21:22.320
to our friends and each other, right?
link |
00:21:26.080
We've talked about it just in the context of physics now.
link |
00:21:28.800
But hey, isn't this what we're doing when we're
link |
00:21:31.560
talking to each other also?
link |
00:21:33.360
We go around with our neural networks,
link |
00:21:35.440
just like dogs and cats and chipmunks and Blue Jays.
link |
00:21:38.760
And we experience things in the world.
link |
00:21:41.920
But then we humans do this additional step
link |
00:21:43.840
on top of that, where we then distill out
link |
00:21:46.720
certain high level knowledge that we've extracted from this
link |
00:21:50.280
in a way that we can communicate it
link |
00:21:52.240
to each other in a symbolic form in English in this case, right?
link |
00:21:56.600
So if we can do it and we believe
link |
00:21:59.960
that we are information processing entities,
link |
00:22:02.880
then we should be able to make machine learning that
link |
00:22:04.960
does it also.
link |
00:22:07.160
Well, do you think the entire thing could be learning?
link |
00:22:10.200
Because this dissection process, like for AI Feynman,
link |
00:22:14.160
the secondary stage feels like something like reasoning.
link |
00:22:19.240
And the initial step feels more like the more basic kind
link |
00:22:23.400
of differentiable learning.
link |
00:22:25.280
Do you think the whole thing could be differentiable
link |
00:22:27.680
learning?
link |
00:22:28.720
Do you think the whole thing could be basically neural
link |
00:22:31.120
networks on top of each other?
link |
00:22:32.320
It's like turtles all the way down.
link |
00:22:33.800
Could it be neural networks all the way down?
link |
00:22:35.920
I mean, that's a really interesting question.
link |
00:22:37.920
We know that in your case, it is neural networks all the way
link |
00:22:41.040
down because that's all you have in your skull
link |
00:22:42.960
is a bunch of neurons doing their thing, right?
link |
00:22:45.880
But if you ask the question more generally,
link |
00:22:50.320
what algorithms are being used in your brain,
link |
00:22:54.120
I think it's super interesting to compare.
link |
00:22:56.160
I think we've gone a little bit backwards historically
link |
00:22:58.760
because we humans first discovered good old fashioned
link |
00:23:02.800
AI, the logic based AI that we often call GOFAI
link |
00:23:06.880
for good old fashioned AI.
link |
00:23:09.080
And then more recently, we did machine learning
link |
00:23:12.600
because it required bigger computers.
link |
00:23:14.160
So we had to discover it later.
link |
00:23:15.960
So we think of machine learning with neural networks
link |
00:23:19.160
as the modern thing and the logic based AI
link |
00:23:21.840
as the old fashioned thing.
link |
00:23:24.280
But if you look at evolution on Earth,
link |
00:23:27.800
it's actually been the other way around.
link |
00:23:29.800
I would say that, for example, an eagle
link |
00:23:34.120
has a better vision system than I have.
link |
00:23:38.680
And dogs are just as good at catching tennis balls as I am.
link |
00:23:42.360
All this stuff which is done by training a neural network
link |
00:23:45.920
and not interpreting it in words is
link |
00:23:49.920
something so many of our animal friends can do,
link |
00:23:51.880
at least as well as us, right?
link |
00:23:53.680
What is it that we humans can do that the chipmunks
link |
00:23:56.560
and the eagles cannot?
link |
00:23:58.880
It's more to do with this logic based stuff, right,
link |
00:24:01.600
where we can extract out information
link |
00:24:04.840
in symbols, in language, and now even with equations
link |
00:24:10.240
if you're a scientist, right?
link |
00:24:12.160
So basically what happened was first we
link |
00:24:13.920
built these computers that could multiply numbers real fast
link |
00:24:16.880
and manipulate symbols.
link |
00:24:18.080
And we felt they were pretty dumb.
link |
00:24:20.520
And then we made neural networks that
link |
00:24:22.800
can see as well as a cat can and do
link |
00:24:25.280
a lot of this inscrutable black box neural networks.
link |
00:24:30.040
What we humans can do also is put the two together
link |
00:24:33.000
in a useful way.
link |
00:24:34.040
Yes, in our own brain.
link |
00:24:36.120
Yes, in our own brain.
link |
00:24:37.360
So if we ever want to get artificial general intelligence
link |
00:24:40.920
that can do all jobs as well as humans can, right,
link |
00:24:45.160
then that's what's going to be required
link |
00:24:47.040
to be able to combine the neural networks with symbolic,
link |
00:24:53.120
combine the old AI with the new AI in a good way.
link |
00:24:55.200
We do it in our brains.
link |
00:24:57.200
And there seems to be basically two strategies
link |
00:24:59.760
I see in industry now.
link |
00:25:01.000
One scares the heebie jeebies out of me,
link |
00:25:03.600
and the other one I find much more encouraging.
link |
00:25:05.840
OK, which one?
link |
00:25:07.080
Can we break them apart?
link |
00:25:08.320
Which of the two?
link |
00:25:09.600
The one that scares the heebie jeebies out of me
link |
00:25:11.640
is this attitude that we're just going
link |
00:25:12.880
to make ever bigger systems that we still
link |
00:25:14.720
don't understand until they can be as smart as humans.
link |
00:25:19.280
What could possibly go wrong?
link |
00:25:22.200
I think it's just such a reckless thing to do.
link |
00:25:24.200
And unfortunately, if we actually
link |
00:25:27.000
succeed as a species to build artificial general intelligence,
link |
00:25:30.120
while we still have no clue how it works,
link |
00:25:31.840
I think there's at least a 50% chance we're
link |
00:25:35.440
going to be extinct before too long.
link |
00:25:37.040
It's just going to be an utter epic own goal.
link |
00:25:40.480
So it's that 44 minute losing money problem or the paper clip
link |
00:25:46.600
problem where we don't understand how it works,
link |
00:25:49.480
and it just in a matter of seconds
link |
00:25:51.280
runs away in some kind of direction
link |
00:25:52.760
that's going to be very problematic.
link |
00:25:54.440
Even long before you have to worry about the machines
link |
00:25:57.640
themselves somehow deciding to do things
link |
00:26:01.400
that are bad for us, we have to worry about people using machines
link |
00:26:06.840
that are short of AGI and power to do bad things.
link |
00:26:09.840
I mean, just take a moment.
link |
00:26:13.080
And if anyone is not worried particularly about advanced AI,
link |
00:26:18.040
just take 10 seconds and just think
link |
00:26:20.800
about your least favorite leader on the planet right now.
link |
00:26:23.720
Don't tell me who it is.
link |
00:26:25.120
I want to keep this apolitical.
link |
00:26:26.760
But just see the face in front of you,
link |
00:26:28.840
that person, for 10 seconds.
link |
00:26:30.480
Now imagine that that person has this incredibly powerful AI
link |
00:26:35.280
under their control and can use it
link |
00:26:37.120
to impose their will on the whole planet.
link |
00:26:38.760
How does that make you feel?
link |
00:26:42.840
Yeah.
link |
00:26:44.280
So can we break that apart just briefly?
link |
00:26:49.480
For the 50% chance that we'll run
link |
00:26:51.720
into trouble with this approach, do you
link |
00:26:53.880
see the bigger worry in that leader or humans
link |
00:26:58.040
using the system to do damage?
link |
00:27:00.600
Or are you more worried, and I think I'm in this camp,
link |
00:27:05.360
more worried about accidental, unintentional destruction
link |
00:27:09.800
of everything?
link |
00:27:10.840
So humans trying to do good, and in a way
link |
00:27:14.960
where everyone agrees it's kind of good,
link |
00:27:17.400
it's just they're trying to do good without understanding.
link |
00:27:20.040
Because I think every evil leader in history
link |
00:27:22.480
thought that, to some degree,
link |
00:27:24.560
they're trying to do good.
link |
00:27:25.600
Oh, yeah.
link |
00:27:25.880
I'm sure Hitler thought he was doing good.
link |
00:27:28.080
Yeah.
link |
00:27:29.480
I've been reading a lot about Stalin.
link |
00:27:31.120
I'm sure Stalin, he legitimately
link |
00:27:34.240
thought that communism was good for the world,
link |
00:27:36.560
and that he was doing good.
link |
00:27:37.760
I think Mao Zedong thought what he was doing with the Great
link |
00:27:39.960
Leap Forward was good too.
link |
00:27:41.200
Yeah.
link |
00:27:42.880
I'm actually concerned about both of those.
link |
00:27:45.560
Before, I promised to answer this in detail,
link |
00:27:48.440
but before we do that, let me finish
link |
00:27:50.320
answering the first question.
link |
00:27:51.240
Because I told you that there were two different routes we
link |
00:27:53.520
could get to artificial general intelligence,
link |
00:27:55.400
and one scares the hell out of me,
link |
00:27:57.240
which is this one where we build something,
link |
00:27:59.320
we just say bigger neural networks, ever more hardware,
link |
00:28:02.040
and just train the heck out of more data,
link |
00:28:03.760
and poof, now it's very powerful.
link |
00:28:07.240
That, I think, is the most unsafe and reckless approach.
link |
00:28:11.800
The alternative to that is the intelligible intelligence
link |
00:28:16.480
approach instead, where we say neural networks is just
link |
00:28:22.840
a tool for the first step to get the intuition,
link |
00:28:27.000
but then we're going to spend also
link |
00:28:29.120
serious resources on other AI techniques
link |
00:28:33.280
for demystifying this black box and figuring out
link |
00:28:35.960
what it's actually doing so we can convert it
link |
00:28:38.680
into something that's equally intelligent,
link |
00:28:41.040
but that we actually understand what it's doing.
link |
00:28:44.040
Maybe we can even prove theorems about it,
link |
00:28:45.960
that this car here will never be hacked when it's driving,
link |
00:28:50.120
because here is the proof.
link |
00:28:53.800
There is a whole science of this.
link |
00:28:55.160
It doesn't work for neural networks
link |
00:28:57.040
that are big black boxes, but it works well
link |
00:28:58.800
and works with certain other kinds of codes, right?
link |
00:29:02.880
That approach, I think, is much more promising.
link |
00:29:05.160
That's exactly why I'm working on it, frankly,
link |
00:29:07.160
not just because I think it's cool for science,
link |
00:29:09.400
but because I think the more we understand these systems,
link |
00:29:14.080
the better the chances that we can
link |
00:29:16.160
make them do the things that are good for us
link |
00:29:18.400
that are actually intended, not unintended.
link |
00:29:21.520
So you think it's possible to prove things
link |
00:29:24.280
about something as complicated as a neural network?
link |
00:29:27.360
That's the hope?
link |
00:29:28.440
Well, ideally, there's no reason it
link |
00:29:30.840
has to be a neural network in the end either, right?
link |
00:29:34.320
We discovered Newton's laws of gravity
link |
00:29:36.480
with neural network in Newton's head.
link |
00:29:40.040
But that's not the way it's programmed into the navigation
link |
00:29:44.080
system of Elon Musk's rocket anymore.
link |
00:29:46.600
It's written in C++, or I don't know
link |
00:29:49.200
what language he uses exactly.
link |
00:29:51.360
And then there are software tools called symbolic
link |
00:29:53.400
verification.
link |
00:29:54.640
DARPA and the US military has done a lot of really great
link |
00:29:59.080
research on this, because they really
link |
00:30:01.160
want to understand that when they build weapon systems,
link |
00:30:03.760
they don't just go fire at random or malfunction, right?
link |
00:30:07.480
And there is even a whole operating system
link |
00:30:10.720
called seL4 that's been developed by a DARPA grant,
link |
00:30:12.920
where you can actually mathematically prove
link |
00:30:16.160
that this thing can never be hacked.
link |
00:30:18.800
Wow.
link |
00:30:20.360
One day, I hope that will be something
link |
00:30:22.280
you can say about the OS that's running on our laptops too.
link |
00:30:25.120
As you know, we're not there.
link |
00:30:27.040
But I think we should be ambitious, frankly.
link |
00:30:30.080
And if we can use machine learning
link |
00:30:34.120
to help do the proofs and so on as well,
link |
00:30:36.320
then it's much easier to verify that a proof is correct
link |
00:30:40.040
than to come up with a proof in the first place.
link |
00:30:42.960
That's really the core idea here.
link |
00:30:45.000
If someone comes on your podcast and says
link |
00:30:47.480
they proved the Riemann hypothesis
link |
00:30:49.760
or some sensational new theorem, it's
link |
00:30:55.480
much easier for someone else, take some smart grad,
link |
00:30:58.640
math grad students to check, oh, there's an error here
link |
00:31:01.000
on equation five, or this really checks out,
link |
00:31:04.000
than it was to discover the proof.
link |
00:31:07.080
Yeah, although some of those proofs are pretty complicated.
link |
00:31:09.000
But yes, it's still nevertheless much easier
link |
00:31:11.080
to verify the proof.
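A small illustration of checking being easier than discovering (an aside, not something from the episode): in a proof assistant like Lean, the kernel mechanically verifies a submitted proof in an instant, while finding the proof remains the hard, creative part.

    -- Lean 4: a machine-checkable proof, verified automatically by the kernel.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- A wrong proof term would be rejected just as mechanically, e.g.:
    -- theorem bogus (a b : Nat) : a + b = a * b := Nat.add_comm a b   -- fails to type-check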
link |
00:31:12.880
I love the optimism.
link |
00:31:14.680
We kind of, even with the security of systems,
link |
00:31:17.480
there's a kind of cynicism that pervades people
link |
00:31:21.760
who think about this, which is like, oh, it's hopeless.
link |
00:31:24.920
I mean, in the same sense, exactly like you're saying
link |
00:31:27.080
with neural networks, oh, it's hopeless to understand
link |
00:31:29.000
what's happening.
link |
00:31:30.440
With security, people are just like, well,
link |
00:31:32.560
it's always going, there's always going to be
link |
00:31:36.240
attack vectors, like ways to attack the system.
link |
00:31:40.800
But you're right, we're just very new
link |
00:31:42.200
with these computational systems.
link |
00:31:44.080
We're new with these intelligent systems.
link |
00:31:46.400
And it's not out of the realm of possibility,
link |
00:31:49.560
just like people that understand the movement
link |
00:31:51.840
of the stars and the planets and so on.
link |
00:31:54.600
It's entirely possible that within, hopefully soon,
link |
00:31:58.320
but it could be within 100 years,
link |
00:32:00.360
we start to have obvious laws of gravity
link |
00:32:03.600
about intelligence and God forbid about consciousness too.
link |
00:32:09.280
That one is...
link |
00:32:10.960
Agreed.
link |
00:32:12.320
I think, of course, if you're selling computers
link |
00:32:15.240
that get hacked a lot, that's in your interest
link |
00:32:16.720
as a company that people think it's impossible
link |
00:32:18.640
to make it safe, so no one is going to get the idea
link |
00:32:20.640
of suing you.
link |
00:32:21.480
I want to really inject optimism here.
link |
00:32:24.840
It's absolutely possible to do much better
link |
00:32:29.480
than we're doing now.
link |
00:32:30.320
And your laptop does so much stuff.
link |
00:32:34.840
You don't need the music player to be super safe
link |
00:32:37.960
in your future self driving car, right?
link |
00:32:42.120
If someone hacks it and starts playing music
link |
00:32:43.840
you don't like, the world won't end.
link |
00:32:47.880
But what you can do is you can break out
link |
00:32:49.560
and say that your drive computer that controls your safety
link |
00:32:53.080
must be completely physically decoupled entirely
link |
00:32:55.920
from the entertainment system.
link |
00:32:57.600
And it must physically be such that it can't take
link |
00:33:01.080
over the air updates while you're driving.
link |
00:33:03.040
And it can have ultimately some operating system on it
link |
00:33:09.920
which is symbolically verified and proven
link |
00:33:13.320
that it's always going to do what it's supposed to do, right?
link |
00:33:17.760
We can basically have, and companies should take
link |
00:33:19.960
that attitude too.
link |
00:33:20.800
They should look at everything they do and say
link |
00:33:22.440
what are the few systems in our company
link |
00:33:25.840
that threaten the whole life of the company
link |
00:33:27.400
if they get hacked and have the highest standards for them.
link |
00:33:31.800
And then they can save money by going for the el cheapo
link |
00:33:34.560
poorly understood stuff for the rest.
link |
00:33:36.920
This is very feasible, I think.
link |
00:33:38.920
And coming back to the bigger question
link |
00:33:41.720
that you worried about that there'll be unintentional
link |
00:33:45.000
failures, I think there are two quite separate risks here.
link |
00:33:47.720
Right?
link |
00:33:48.560
We talked a lot about one of them
link |
00:33:49.600
which is that the goals are noble of the human.
link |
00:33:52.640
The human says, I want this airplane to not crash
link |
00:33:56.920
because this is not Mohamed Atta
link |
00:33:58.640
now flying the airplane, right?
link |
00:34:00.480
And now there's this technical challenge
link |
00:34:03.240
of making sure that the autopilot is actually
link |
00:34:05.960
gonna behave as the pilot wants.
link |
00:34:11.000
If you set that aside, there's also the separate question.
link |
00:34:13.360
How do you make sure that the goals of the pilot
link |
00:34:17.440
are actually aligned with the goals of the passenger?
link |
00:34:19.680
How do you make sure very much more broadly
link |
00:34:22.480
that if we can all agree as a species
link |
00:34:24.640
that we would like things to kind of go well
link |
00:34:26.200
for humanity as a whole, that the goals are aligned here.
link |
00:34:30.320
The alignment problem.
link |
00:34:31.560
And yeah, there's been a lot of progress
link |
00:34:36.000
in the sense that there's suddenly huge amounts
link |
00:34:39.880
of research going on about it.
link |
00:34:42.040
I'm very grateful to Elon Musk
link |
00:34:43.400
for giving us that money five years ago
link |
00:34:44.960
so we could launch the first research program
link |
00:34:46.680
on technical AI safety and alignment.
link |
00:34:49.480
There's a lot of stuff happening.
link |
00:34:51.280
But I think we need to do more than just make sure
link |
00:34:54.920
little machines always do what their owners tell them.
link |
00:34:58.200
That wouldn't have prevented September 11th
link |
00:35:00.240
if Mohamed Atta said, okay, autopilot,
link |
00:35:03.040
please fly into World Trade Center.
link |
00:35:06.720
And it's like, okay.
link |
00:35:08.960
That even happened in a different situation.
link |
00:35:11.840
There was this depressed pilot named Andreas Lubitz, right?
link |
00:35:15.680
Who told his Germanwings passenger jet
link |
00:35:17.640
to fly into the Alps.
link |
00:35:19.040
He just told the computer to change the altitude
link |
00:35:21.640
to a hundred meters or something like that.
link |
00:35:23.280
And you know what the computer said?
link |
00:35:25.360
Okay.
link |
00:35:26.600
And it had the freaking topographical map of the Alps
link |
00:35:29.560
in there, it had GPS, everything.
link |
00:35:31.440
No one had bothered teaching it
link |
00:35:33.120
even the basic kindergarten ethics of like,
link |
00:35:35.600
no, we never want airplanes to fly into mountains
link |
00:35:39.600
under any circumstances.
link |
00:35:41.040
And so we have to think beyond just the technical issues
link |
00:35:48.520
and think about how do we align in general incentives
link |
00:35:51.120
on this planet for the greater good?
link |
00:35:53.760
So starting with simple stuff like that,
link |
00:35:55.520
every airplane that has a computer in it
link |
00:35:58.160
should be taught whatever kindergarten ethics
link |
00:36:00.840
that's smart enough to understand.
link |
00:36:02.280
Like, no, don't fly into fixed objects
link |
00:36:05.040
if the pilot tells you to do so.
link |
00:36:07.280
Then go on autopilot mode.
link |
00:36:10.000
Send an email to the cops and land at the latest airport,
link |
00:36:13.480
nearest airport, you know.
link |
00:36:14.840
Any car with a forward facing camera
link |
00:36:18.240
should just be programmed by the manufacturer
link |
00:36:20.720
so that it will never accelerate into a human ever.
link |
00:36:24.760
That would avoid things like the Nice attack
link |
00:36:28.760
and many horrible terrorist vehicle attacks
link |
00:36:31.120
where they deliberately did that, right?
link |
00:36:33.720
This was not some sort of thing,
link |
00:36:35.160
oh, you know, US and China, different views on,
link |
00:36:38.400
no, there was not a single car manufacturer
link |
00:36:41.880
in the world, right, who wanted the cars to do this.
link |
00:36:44.080
They just hadn't thought to do the alignment.
link |
00:36:45.920
And if you look at more broadly problems
link |
00:36:48.520
that happen on this planet,
link |
00:36:51.280
the vast majority have to do with poor alignment.
link |
00:36:53.840
I mean, think about, let's go back really big
link |
00:36:57.160
because I know you're so good at that.
link |
00:36:59.080
Let's go big, yeah.
link |
00:36:59.920
Yeah, so long ago in evolution, we had these genes.
link |
00:37:03.840
And they wanted to make copies of themselves.
link |
00:37:06.400
That's really all they cared about.
link |
00:37:07.640
So some genes said, hey, I'm gonna build a brain
link |
00:37:13.160
on this body I'm in so that I can get better
link |
00:37:15.880
at making copies of myself.
link |
00:37:17.280
And then they decided for their benefit
link |
00:37:20.240
to get copied more, to align your brain's incentives
link |
00:37:23.320
with their incentives.
link |
00:37:24.560
So it didn't want you to starve to death.
link |
00:37:29.080
So it gave you an incentive to eat
link |
00:37:31.520
and it wanted you to make copies of the genes.
link |
00:37:35.080
So it gave you incentive to fall in love
link |
00:37:37.680
and do all sorts of naughty things
link |
00:37:40.960
to make copies of itself, right?
link |
00:37:44.120
So that was successful value alignment done on the genes.
link |
00:37:47.720
They created something more intelligent than themselves,
link |
00:37:50.440
but they made sure to try to align the values.
link |
00:37:52.920
But then something went a little bit wrong
link |
00:37:55.800
against the idea of what the genes wanted
link |
00:37:58.400
because a lot of humans discovered,
link |
00:38:00.360
hey, you know, yeah, we really like this business
link |
00:38:03.280
about sex that the genes have made us enjoy,
link |
00:38:06.640
but we don't wanna have babies right now.
link |
00:38:09.440
So we're gonna hack the genes and use birth control.
link |
00:38:13.800
And I really feel like drinking a Coca Cola right now,
link |
00:38:18.640
but I don't wanna get a potbelly,
link |
00:38:20.080
so I'm gonna drink Diet Coke.
link |
00:38:21.960
We have all these things we've figured out
link |
00:38:24.600
because we're smarter than the genes,
link |
00:38:26.400
how we can actually subvert their intentions.
link |
00:38:29.040
So it's not surprising that we humans now,
link |
00:38:33.440
when we are in the role of these genes,
link |
00:38:34.800
creating other nonhuman entities with a lot of power,
link |
00:38:37.640
have to face the same exact challenge.
link |
00:38:39.400
How do we make other powerful entities
link |
00:38:41.720
have incentives that are aligned with ours?
link |
00:38:45.280
And so they won't hack them.
link |
00:38:47.000
Corporations, for example, right?
link |
00:38:48.720
We humans decided to create corporations
link |
00:38:51.280
because it can benefit us greatly.
link |
00:38:53.440
Now all of a sudden there's a supermarket.
link |
00:38:55.120
I can go buy food there.
link |
00:38:56.240
I don't have to hunt.
link |
00:38:57.200
Awesome, and then to make sure that this corporation
link |
00:39:02.880
would do things that were good for us and not bad for us,
link |
00:39:05.960
we created institutions to keep them in check.
link |
00:39:08.280
Like if the local supermarket sells poisonous food,
link |
00:39:12.520
then the owners of the supermarket
link |
00:39:17.920
have to spend some years reflecting behind bars, right?
link |
00:39:22.160
So we created incentives to align them.
link |
00:39:25.720
But of course, just like we were able to see
link |
00:39:27.480
through this thing and you develop birth control,
link |
00:39:30.640
if you're a powerful corporation,
link |
00:39:31.840
you also have an incentive to try to hack the institutions
link |
00:39:35.080
that are supposed to govern you.
link |
00:39:36.320
Because you ultimately, as a corporation,
link |
00:39:38.160
have an incentive to maximize your profit.
link |
00:39:40.920
Just like you have an incentive
link |
00:39:42.080
to maximize the enjoyment your brain has,
link |
00:39:44.160
not for your genes.
link |
00:39:46.000
So if they can figure out a way of bribing regulators,
link |
00:39:50.480
then they're gonna do that.
link |
00:39:52.400
In the US, we kind of caught onto that
link |
00:39:54.440
and made laws against corruption and bribery.
link |
00:39:58.560
Then in the late 1800s, Teddy Roosevelt realized that,
link |
00:40:03.760
no, we were still being kind of hacked
link |
00:40:05.360
because the Massachusetts Railroad companies
link |
00:40:07.280
had like a bigger budget than the state of Massachusetts
link |
00:40:10.120
and they were doing a lot of very corrupt stuff.
link |
00:40:13.600
So he did the whole trust busting thing
link |
00:40:15.480
to try to align these other nonhuman entities,
link |
00:40:18.440
the companies, again,
link |
00:40:19.440
more with the incentives of Americans as a whole.
link |
00:40:23.040
It's not surprising, though,
link |
00:40:24.080
that this is a battle you have to keep fighting.
link |
00:40:26.160
Now we have even larger companies than we ever had before.
link |
00:40:30.560
And of course, they're gonna try to, again,
link |
00:40:34.320
subvert the institutions.
link |
00:40:37.800
Not because, I think people make a mistake
link |
00:40:41.040
of getting all too,
link |
00:40:44.280
thinking about things in terms of good and evil.
link |
00:40:46.960
Like arguing about whether corporations are good or evil,
link |
00:40:50.360
or whether robots are good or evil.
link |
00:40:53.080
A robot isn't good or evil, it's a tool.
link |
00:40:57.040
And you can use it for great things
link |
00:40:58.400
like robotic surgery or for bad things.
link |
00:41:01.080
And a corporation also is a tool, of course.
link |
00:41:04.120
And if you have good incentives to the corporation,
link |
00:41:06.480
it'll do great things,
link |
00:41:07.520
like start a hospital or a grocery store.
link |
00:41:10.000
If you have any bad incentives,
link |
00:41:12.680
then it's gonna start maybe marketing addictive drugs
link |
00:41:15.800
to people and you'll have an opioid epidemic, right?
link |
00:41:18.600
It's all about,
link |
00:41:21.440
we should not make the mistake of getting into
link |
00:41:23.480
some sort of fairytale, good, evil thing
link |
00:41:25.640
about corporations or robots.
link |
00:41:27.920
We should focus on putting the right incentives in place.
link |
00:41:30.800
My optimistic vision is that if we can do that,
link |
00:41:34.280
then we can really get good things.
link |
00:41:35.840
We're not doing so great with that right now,
link |
00:41:38.000
either on AI, I think,
link |
00:41:39.240
or on other intelligent nonhuman entities,
link |
00:41:42.680
like big companies, right?
link |
00:41:43.920
We just have a new second generation of AI
link |
00:41:47.440
and a secretary of defense who's gonna start up now
link |
00:41:51.160
in the Biden administration,
link |
00:41:53.640
who was an active member of the board of Raytheon,
link |
00:41:58.120
for example.
link |
00:41:59.240
So, I have nothing against Raytheon.
link |
00:42:04.720
I'm not a pacifist,
link |
00:42:05.680
but there's an obvious conflict of interest
link |
00:42:08.560
if someone is in the job where they decide
link |
00:42:12.360
who they're gonna contract with.
link |
00:42:14.240
And I think somehow we have,
link |
00:42:16.680
maybe we need another Teddy Roosevelt to come along again
link |
00:42:19.520
and say, hey, you know,
link |
00:42:20.640
we want what's good for all Americans,
link |
00:42:23.480
and we need to go do some serious realigning again
link |
00:42:26.600
of the incentives that we're giving to these big companies.
link |
00:42:30.760
And then we're gonna be better off.
link |
00:42:33.880
It seems that naturally with human beings,
link |
00:42:35.800
just like you beautifully described the history
link |
00:42:37.720
of this whole thing,
link |
00:42:38.880
of how it all started with the genes
link |
00:42:40.760
and they're probably pretty upset
link |
00:42:42.680
by all the unintended consequences that happened since.
link |
00:42:45.600
But it seems that it kind of works out,
link |
00:42:48.680
like it's in this collective intelligence
link |
00:42:51.120
that emerges at the different levels.
link |
00:42:53.480
It seems to find sometimes last minute
link |
00:42:56.920
a way to realign the values or keep the values aligned.
link |
00:43:00.920
It's almost, it finds a way,
link |
00:43:03.800
like different leaders, different humans pop up
link |
00:43:07.560
all over the place that reset the system.
link |
00:43:10.680
Do you want, I mean, do you have an explanation why that is?
link |
00:43:15.240
Or is that just survivor bias?
link |
00:43:17.240
And also is that different,
link |
00:43:19.600
somehow fundamentally different than with AI systems
link |
00:43:23.120
where you're no longer dealing with something
link |
00:43:26.440
that was a direct, maybe companies are the same,
link |
00:43:30.200
a direct byproduct of the evolutionary process?
link |
00:43:33.360
I think there is one thing which has changed.
link |
00:43:36.200
That's why I'm not all that optimistic.
link |
00:43:40.280
That's why I think there's about a 50% chance
link |
00:43:42.280
if we take the dumb route with artificial intelligence
link |
00:43:46.120
that humanity will be extinct in this century.
link |
00:43:51.680
First, just the big picture.
link |
00:43:53.320
Yeah, companies need to have the right incentives.
link |
00:43:57.880
Even governments, right?
link |
00:43:59.000
We used to have governments,
link |
00:44:02.120
usually there was just some king,
link |
00:44:04.200
who was the king because his dad was the king.
link |
00:44:07.160
And then there were some benefits
link |
00:44:10.600
of having this powerful kingdom or empire of any sort
link |
00:44:15.280
because then it could prevent a lot of local squabbles.
link |
00:44:17.960
So at least everybody in that region
link |
00:44:19.360
would stop warring against each other.
link |
00:44:20.800
And the incentives of different cities in the kingdom
link |
00:44:24.200
became more aligned, right?
link |
00:44:25.160
That was the whole selling point.
link |
00:44:27.200
Yuval Noah Harari has a beautiful piece
link |
00:44:31.520
on how empires were collaboration enablers.
link |
00:44:35.320
And then we also, Harari says,
link |
00:44:36.760
invented money for that reason
link |
00:44:38.280
so we could have better alignment
link |
00:44:40.640
and we could do trade even with people we didn't know.
link |
00:44:44.160
So this sort of stuff has been playing out
link |
00:44:45.840
since time immemorial, right?
link |
00:44:47.880
What's changed is that it happens on ever larger scales,
link |
00:44:51.520
right?
link |
00:44:52.360
The technology keeps getting better
link |
00:44:53.480
because science gets better.
link |
00:44:54.760
So now we can communicate over larger distances,
link |
00:44:57.600
transport things fast over larger distances.
link |
00:44:59.840
And so the entities get ever bigger,
link |
00:45:02.960
but our planet is not getting bigger anymore.
link |
00:45:05.480
So in the past, you could have one experiment
link |
00:45:08.120
that just totally screwed up like Easter Island,
link |
00:45:11.920
where they actually managed to have such poor alignment
link |
00:45:15.160
that when the people there went extinct,
link |
00:45:17.600
there was no one else to come back and replace them, right?
link |
00:45:21.520
If Elon Musk doesn't get us to Mars
link |
00:45:24.000
and then we go extinct on a global scale,
link |
00:45:27.680
then we're not coming back.
link |
00:45:28.920
That's the fundamental difference.
link |
00:45:31.480
And that's a mistake we don't want to make, for that reason.
link |
00:45:35.800
In the past, of course, history is full of fiascos, right?
link |
00:45:39.800
But it was never the whole planet.
link |
00:45:42.160
And then, okay, now there's this nice uninhabited land here.
link |
00:45:45.960
Some other people could move in and organize things better.
link |
00:45:49.400
This is different.
link |
00:45:50.720
The second thing, which is also different
link |
00:45:52.680
is that technology gives us so much more empowerment, right?
link |
00:45:58.200
Both to do good things and also to screw up.
link |
00:46:00.520
In the stone age, even if you had someone
link |
00:46:02.920
whose goals were really poorly aligned,
link |
00:46:04.760
like maybe he was really pissed off
link |
00:46:06.680
because his stone age girlfriend dumped him
link |
00:46:08.760
and he just wanted to,
link |
00:46:09.920
if he wanted to kill as many people as he could,
link |
00:46:12.640
how many could he really take out with a rock and a stick
link |
00:46:15.160
before he was overpowered, right?
link |
00:46:17.200
Just handful, right?
link |
00:46:18.920
Now, with today's technology,
link |
00:46:23.760
if we have an accidental nuclear war
link |
00:46:25.640
between Russia and the US,
link |
00:46:27.880
which we've almost had about a dozen times,
link |
00:46:31.080
and then we have a nuclear winter,
link |
00:46:32.280
it could take out seven billion people
link |
00:46:34.760
or six billion people, we don't know.
link |
00:46:37.280
So the scale of the damage we can do is bigger.
link |
00:46:40.440
And there's obviously no law of physics
link |
00:46:45.520
that says that technology will never get powerful enough
link |
00:46:48.080
that we could wipe out our species entirely.
link |
00:46:51.720
That would just be fantasy to think
link |
00:46:53.640
that science is somehow doomed
link |
00:46:55.080
to not get more powerful than that, right?
link |
00:46:57.240
And it's not at all unfeasible in our lifetime
link |
00:47:00.280
that someone could design a designer pandemic
link |
00:47:03.120
which spreads as easily as COVID,
link |
00:47:04.640
but just basically kills everybody.
link |
00:47:06.880
We already had smallpox.
link |
00:47:08.480
It killed one third of everybody who got it.
link |
00:47:13.000
What do you think of the, here's an intuition,
link |
00:47:15.320
maybe it's completely naive
link |
00:47:16.840
and this optimistic intuition I have,
link |
00:47:18.960
which it seems, and maybe it's a biased experience
link |
00:47:22.880
that I have, but it seems like the most brilliant people
link |
00:47:25.920
I've met in my life all are really like
link |
00:47:31.600
fundamentally good human beings.
link |
00:47:33.680
And not like naive good, like they really wanna do good
link |
00:47:37.440
for the world in a way that, well, maybe is aligned
link |
00:47:39.880
to my sense of what good means.
link |
00:47:41.800
And so I have a sense that the people
link |
00:47:47.840
that will be defining the very cutting edge of technology,
link |
00:47:51.000
there'll be much more of the ones that are doing good
link |
00:47:53.960
versus the ones that are doing evil.
link |
00:47:55.840
So in that race, I'm optimistic about
link |
00:48:00.160
us always, like, coming up with a solution at the last minute.
link |
00:48:03.080
So if there's an engineered pandemic
link |
00:48:06.480
that has the capability to destroy
link |
00:48:09.280
most of the human civilization,
link |
00:48:11.640
it feels like to me either leading up to that before
link |
00:48:15.880
or as it's going on, there will be,
link |
00:48:19.240
we're able to rally the collective genius
link |
00:48:22.520
of the human species.
link |
00:48:23.800
I can tell by your smile that you're
link |
00:48:26.160
at least some percentage doubtful,
link |
00:48:30.080
but could that be a fundamental law of human nature?
link |
00:48:35.000
That evolution only creates good: like, karma is beneficial,
link |
00:48:40.880
good is beneficial, and therefore we'll be all right.
link |
00:48:44.280
I hope you're right.
link |
00:48:46.960
I would really love it if you're right,
link |
00:48:48.720
if there's some sort of law of nature that says
link |
00:48:51.000
that we always get lucky in the last second
link |
00:48:53.080
with karma, but I prefer not playing it so close
link |
00:49:01.160
and gambling on that.
link |
00:49:03.040
And I think, in fact, I think it can be dangerous
link |
00:49:06.480
to have too strong faith in that
link |
00:49:08.120
because it makes us complacent.
link |
00:49:10.800
Like if someone tells you, you never have to worry
link |
00:49:12.520
about your house burning down,
link |
00:49:13.760
then you're not gonna put in a smoke detector
link |
00:49:15.360
because why would you need to?
link |
00:49:17.000
Even if it's sometimes very simple precautions,
link |
00:49:19.040
we don't take them.
link |
00:49:20.000
If you're like, oh, the government is gonna take care
link |
00:49:22.360
of everything for us, I can always trust my politicians.
link |
00:49:24.760
I can always... then we abdicate our own responsibility.
link |
00:49:27.520
I think it's a healthier attitude to say,
link |
00:49:29.080
yeah, maybe things will work out.
link |
00:49:30.840
Maybe I'm actually gonna have to myself step up
link |
00:49:33.560
and take responsibility.
link |
00:49:37.160
And the stakes are so huge.
link |
00:49:38.360
I mean, if we do this right, we can develop
link |
00:49:41.840
all this ever more powerful technology
link |
00:49:43.640
and cure all diseases and create a future
link |
00:49:46.360
where humanity is healthy and wealthy
link |
00:49:48.040
for not just the next election cycle,
link |
00:49:50.080
but like billions of years throughout our universe.
link |
00:49:52.960
That's really worth working hard for
link |
00:49:54.760
and not just sitting and hoping
link |
00:49:58.000
for some sort of fairytale karma.
link |
00:49:59.520
Well, I just mean, so you're absolutely right.
link |
00:50:01.600
From the perspective of the individual,
link |
00:50:03.080
like for me, the primary thing should be
link |
00:50:05.600
to take responsibility and to build the solutions
link |
00:50:09.720
that your skillset allows.
link |
00:50:11.320
Yeah, which is a lot.
link |
00:50:12.720
I think we underestimate often very much
link |
00:50:14.560
how much good we can do.
link |
00:50:16.360
If you or anyone listening to this
link |
00:50:19.520
is completely confident that our government
link |
00:50:23.000
would do a perfect job on handling any future crisis
link |
00:50:25.720
with engineered pandemics or future AI,
link |
00:50:29.920
just reflect a bit on what actually happened in 2020.
link |
00:50:36.360
Do you feel that the government by and large
link |
00:50:39.680
around the world has handled this flawlessly?
link |
00:50:42.680
That's a really sad and disappointing reality
link |
00:50:45.160
that hopefully is a wake up call for everybody.
link |
00:50:48.720
For the scientists, for the engineers,
link |
00:50:52.280
for the researchers in AI especially,
link |
00:50:54.240
it was disappointing to see how inefficient we were
link |
00:51:01.000
at collecting the right amount of data
link |
00:51:04.120
in a privacy preserving way and spreading that data
link |
00:51:07.080
and utilizing that data to make decisions,
link |
00:51:09.200
all that kind of stuff.
link |
00:51:10.440
Yeah, I think when something bad happens to me,
link |
00:51:13.360
I made myself a promise many years ago
link |
00:51:17.280
that I would not be a whiner.
link |
00:51:21.760
So when something bad happens to me,
link |
00:51:23.680
of course it's a process of disappointment,
link |
00:51:27.280
but then I try to focus on what did I learn from this
link |
00:51:30.520
that can make me a better person in the future.
link |
00:51:32.600
And there's usually something to be learned when I fail.
link |
00:51:35.720
And I think we should all ask ourselves,
link |
00:51:38.200
what can we learn from the pandemic
link |
00:51:41.480
about how we can do better in the future?
link |
00:51:43.400
And you mentioned there a really good lesson.
link |
00:51:46.360
We were not as resilient as we thought we were
link |
00:51:50.480
and we were not as prepared maybe as we wish we were.
link |
00:51:53.960
You can even see very stark contrast around the planet.
link |
00:51:57.280
South Korea, they have over 50 million people.
link |
00:52:01.760
Do you know how many deaths they have from COVID
link |
00:52:03.520
last time I checked?
link |
00:52:05.600
No.
link |
00:52:06.440
It's about 500.
link |
00:52:08.880
Why is that?
link |
00:52:10.280
Well, the short answer is that they had prepared.
link |
00:52:16.760
They were incredibly quick,
link |
00:52:19.200
incredibly quick to get on it
link |
00:52:21.520
with very rapid testing and contact tracing and so on,
link |
00:52:25.520
which is why they never had more cases
link |
00:52:28.080
than they could contact trace effectively, right?
link |
00:52:30.040
They never even had to have the kind of big lockdowns
link |
00:52:32.040
we had in the West.
link |
00:52:33.720
But the deeper answer is,
link |
00:52:36.560
it's not that the Koreans are just somehow better people.
link |
00:52:39.080
The reason I think they were better prepared
link |
00:52:40.800
was because they had already had a pretty bad hit
link |
00:52:45.320
from the SARS outbreak,
link |
00:52:47.560
which never became a pandemic,
link |
00:52:49.920
something like 17 years ago, I think.
link |
00:52:52.120
So it was kind of fresh memory
link |
00:52:53.400
that we need to be prepared for pandemics.
link |
00:52:56.000
So they were, right?
link |
00:52:59.080
So maybe this is a lesson here
link |
00:53:01.240
for all of us to draw from COVID
link |
00:53:03.280
that rather than just wait for the next pandemic
link |
00:53:06.360
or the next problem with AI getting out of control
link |
00:53:09.840
or anything else,
link |
00:53:11.320
maybe we should just actually set aside
link |
00:53:14.720
a tiny fraction of our GDP
link |
00:53:17.680
to have people very systematically
link |
00:53:19.320
do some horizon scanning and say,
link |
00:53:20.680
okay, what are the things that could go wrong?
link |
00:53:23.320
And let's duke it out and see
link |
00:53:24.600
which are the more likely ones
link |
00:53:25.800
and which are the ones that are actually actionable
link |
00:53:28.760
and then be prepared.
link |
00:53:29.800
So one of the observations, as the one little ant slash human
link |
00:53:36.560
that I am, of disappointment,
link |
00:53:38.560
is the political division over information
link |
00:53:44.040
that has been observed, that I observed this year,
link |
00:53:47.440
that it seemed the discussion was less about
link |
00:53:54.040
sort of what happened and understanding
link |
00:53:57.600
what happened deeply and more about
link |
00:54:00.680
there's different truths out there.
link |
00:54:04.080
And it's like an argument,
link |
00:54:05.400
my truth is better than your truth.
link |
00:54:07.640
And it's like red versus blue or different.
link |
00:54:10.840
It was like this ridiculous discourse
link |
00:54:13.280
that doesn't seem to get at any kind of notion of the truth.
link |
00:54:16.520
It's not like some kind of scientific process.
link |
00:54:19.000
Even science got politicized in ways
link |
00:54:21.000
that's very heartbreaking to me.
link |
00:54:24.360
You have an exciting project on the AI front
link |
00:54:28.680
of trying to rethink one of the,
link |
00:54:32.560
you mentioned corporations.
link |
00:54:34.240
There's one of the other collective intelligence systems
link |
00:54:37.360
that have emerged through all of this is social networks.
link |
00:54:40.480
And just the spread
link |
00:54:43.600
of information on the internet,
link |
00:54:46.400
our ability to share that information.
link |
00:54:48.320
There's all different kinds of news sources and so on.
link |
00:54:50.640
And so you said like that's from first principles,
link |
00:54:53.200
let's rethink how we think about the news,
link |
00:54:57.320
how we think about information.
link |
00:54:59.080
Can you talk about this amazing effort
link |
00:55:02.480
that you're undertaking?
link |
00:55:03.640
Oh, I'd love to.
link |
00:55:04.560
This has been my big COVID project
link |
00:55:06.400
nights and weekends, ever since the lockdown.
link |
00:55:11.920
To segue into this actually,
link |
00:55:13.080
let me come back to what you said earlier
link |
00:55:14.520
that you had this hope that in your experience,
link |
00:55:17.040
people who you felt were very talented
link |
00:55:18.800
were often idealistic and wanted to do good.
link |
00:55:21.240
Frankly, I feel the same about all people by and large,
link |
00:55:25.160
there are always exceptions,
link |
00:55:26.120
but I think the vast majority of everybody,
link |
00:55:28.480
regardless of education and whatnot,
link |
00:55:30.320
really are fundamentally good, right?
link |
00:55:33.280
So how can it be that people still do so much nasty stuff?
link |
00:55:37.920
I think it has everything to do with this,
link |
00:55:40.040
with the information that we're given.
link |
00:55:41.920
Yes.
link |
00:55:42.760
If you go into Sweden 500 years ago
link |
00:55:46.240
and you start telling all the farmers
link |
00:55:47.360
that those Danes in Denmark,
link |
00:55:49.160
they're so terrible people, and we have to invade them
link |
00:55:52.840
because they've done all these terrible things
link |
00:55:55.320
that you can't fact check yourself.
link |
00:55:56.840
A lot of people, Swedes did that, right?
link |
00:55:59.720
And we're seeing so much of this today in the world,
link |
00:56:06.680
both geopolitically, where we are told that China is bad
link |
00:56:11.760
and Russia is bad and Venezuela is bad,
link |
00:56:13.960
and people in those countries are often told
link |
00:56:16.000
that we are bad.
link |
00:56:17.320
And we also see it at a micro level where people are told
link |
00:56:21.840
that, oh, those who voted for the other party are bad people.
link |
00:56:24.640
It's not just an intellectual disagreement,
link |
00:56:26.480
but they're bad people and we're getting ever more divided.
link |
00:56:32.880
So how do you reconcile this with this intrinsic goodness
link |
00:56:39.000
in people?
link |
00:56:39.840
I think it's pretty obvious that it has, again,
link |
00:56:41.640
to do with the information that we're fed and given, right?
link |
00:56:46.280
We evolved to live in small groups
link |
00:56:49.800
where you might know 30 people in total, right?
link |
00:56:52.080
So you then had a system that was quite good
link |
00:56:55.440
for assessing who you could trust and who you could not.
link |
00:56:57.760
And if someone told you that Joe there is a jerk,
link |
00:57:02.840
but you had interacted with him yourself
link |
00:57:05.000
and seen him in action,
link |
00:57:06.400
and you would quickly realize maybe
link |
00:57:08.320
that that's actually not quite accurate, right?
link |
00:57:11.680
But now most of the people on the planet
link |
00:57:13.520
are people we've never met,
link |
00:57:15.280
it's very important that we have a way
link |
00:57:17.200
of trusting the information we're given.
link |
00:57:19.400
And so, okay, so where does the news project come in?
link |
00:57:23.160
Well, throughout history, you can go read Machiavelli,
link |
00:57:26.560
from the 1400s, and you'll see how already then
link |
00:57:28.680
they were busy manipulating people
link |
00:57:30.040
with propaganda and stuff.
link |
00:57:31.640
Propaganda is not new at all.
link |
00:57:35.720
And the incentives to manipulate people
link |
00:57:37.720
are just not new at all.
link |
00:57:40.040
What is it that's new?
link |
00:57:41.240
What's new is machine learning meets propaganda.
link |
00:57:44.680
That's what's new.
link |
00:57:45.760
That's why this has gotten so much worse.
link |
00:57:47.880
Some people like to blame certain individuals,
link |
00:57:50.680
like in my liberal university bubble,
link |
00:57:53.120
many people blame Donald Trump and say it was his fault.
link |
00:57:56.920
I see it differently.
link |
00:57:59.120
I think Donald Trump just had this extreme skill
link |
00:58:03.840
at playing this game in the machine learning algorithm age.
link |
00:58:07.560
A game he couldn't have played 10 years ago.
link |
00:58:09.920
So what's changed?
link |
00:58:10.920
What's changed is, well, Facebook and Google
link |
00:58:13.200
and other companies, and I'm not badmouthing them,
link |
00:58:16.640
I have a lot of friends who work for these companies,
link |
00:58:18.800
good people, they deployed machine learning algorithms
link |
00:58:22.720
just to increase their profit a little bit,
link |
00:58:24.280
to just maximize the time people spent watching ads.
link |
00:58:28.520
And they had totally underestimated
link |
00:58:30.520
how effective they were gonna be.
link |
00:58:32.360
This was, again, the black box, non-intelligible intelligence.
link |
00:58:37.560
They just noticed, oh, we're getting more ad revenue.
link |
00:58:39.360
Great.
link |
00:58:40.200
It took a long time until they even realized why and how
link |
00:58:42.400
and how damaging this was for society.
link |
00:58:45.760
Because of course, what the machine learning figured out
link |
00:58:47.960
was that the by far most effective way of gluing you
link |
00:58:52.080
to your little rectangle was to show you things
link |
00:58:55.040
that triggered strong emotions, anger, et cetera, resentment,
link |
00:58:59.800
and whether it was true or not didn't really matter.
link |
00:59:04.720
It was often even easier to find stories that weren't true,
link |
00:59:07.520
if you weren't limited to the truth.
link |
00:59:09.320
Having to only show people things that are true
link |
00:59:10.600
is a very limiting factor.
link |
00:59:12.360
And before long, we got these amazing filter bubbles
link |
00:59:16.960
on a scale we had never seen before.
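To make that mechanism concrete, here is a toy, hypothetical Python sketch of what optimizing purely for engagement looks like. The Item fields, the watch-time numbers, and the ranking rule are assumptions for illustration only, not any real platform's system; the point is simply that a ranker which only sees watch time will promote the most emotionally charged items whether they are true or not.

# A toy, hypothetical sketch of engagement-maximizing ranking, not any real
# platform's algorithm. Ranking purely by observed watch time pushes the
# most emotionally charged items to the top regardless of accuracy.
from dataclasses import dataclass
from typing import List

@dataclass
class Item:
    headline: str
    avg_watch_seconds: float  # assumed engagement signal logged by the platform
    accurate: bool            # note: never consulted by the ranker below

def rank_feed(items: List[Item]) -> List[Item]:
    """Greedy ranking by engagement alone; truthfulness plays no role."""
    return sorted(items, key=lambda it: it.avg_watch_seconds, reverse=True)

feed = [
    Item("Calm, factual policy explainer", avg_watch_seconds=12.0, accurate=True),
    Item("Outrageous claim about the other side", avg_watch_seconds=95.0, accurate=False),
    Item("Measured correction of that claim", avg_watch_seconds=20.0, accurate=True),
]

for item in rank_feed(feed):
    print(item.headline)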
link |
00:59:18.960
Couple that with the fact that the online news media
link |
00:59:24.600
were so effective that they killed a lot of print
link |
00:59:30.200
journalism.
link |
00:59:30.800
There's less than half as many journalists
link |
00:59:34.120
now in America, I believe, as there were a generation ago.
link |
00:59:39.640
You just couldn't compete with the online advertising.
link |
00:59:42.800
So all of a sudden, most people are not
link |
00:59:47.240
even reading newspapers anymore.
link |
00:59:48.680
They get their news from social media.
link |
00:59:51.320
And most people only get news in their little bubble.
link |
00:59:55.000
So along comes now some people like Donald Trump,
link |
00:59:58.400
who was among the first successful politicians
link |
01:00:01.560
to figure out how to really play this new game
link |
01:00:04.080
and become very, very influential.
link |
01:00:05.960
But I think what Donald Trump did was simple.
link |
01:00:09.600
He took advantage of it.
link |
01:00:11.120
He didn't create the fundamental conditions
link |
01:00:14.520
They were created by machine learning taking over the news media.
link |
01:00:19.040
So this is what motivated my little COVID project here.
link |
01:00:22.920
So I said before, machine learning and tech in general
link |
01:00:27.120
is not evil, but it's also not good.
link |
01:00:29.040
It's just a tool that you can use
link |
01:00:31.400
for good things or bad things.
link |
01:00:32.680
And as it happens, machine learning and news
link |
01:00:36.000
was mainly used by the big players, big tech,
link |
01:00:39.680
to manipulate people into watching as many ads as possible,
link |
01:00:43.240
which had this unintended consequence of really screwing
link |
01:00:45.720
up our democracy and fragmenting it into filter bubbles.
link |
01:00:50.440
So I thought, well, machine learning algorithms
link |
01:00:53.200
are basically free.
link |
01:00:54.400
They can run on your smartphone for free also
link |
01:00:56.200
if someone gives them away to you, right?
link |
01:00:57.840
There's no reason why they only have to help the big guy
link |
01:01:01.840
to manipulate the little guy.
link |
01:01:02.960
They can just as well help the little guy
link |
01:01:05.280
to see through all the manipulation attempts
link |
01:01:07.880
from the big guy.
link |
01:01:08.720
So this project is called,
link |
01:01:10.600
you can go to improvethenews.org.
link |
01:01:12.800
The first thing we've built is this little news aggregator.
link |
01:01:16.600
Looks a bit like Google News,
link |
01:01:17.880
except it has these sliders on it to help you break out
link |
01:01:20.200
of your filter bubble.
link |
01:01:21.760
So if you're reading, you can click, click
link |
01:01:24.440
and go to your favorite topic.
link |
01:01:27.080
And then if you just slide the left, right slider
link |
01:01:31.120
all the way over to the left.
link |
01:01:32.560
There's two sliders, right?
link |
01:01:33.720
Yeah, there's the one, the most obvious one
link |
01:01:36.280
is the one that has left, right labeled on it.
link |
01:01:38.800
You go to the left, you get one set of articles,
link |
01:01:40.560
you go to the right, you see a very different truth
link |
01:01:43.360
appearing.
link |
01:01:44.200
Oh, that's literally left and right on the political spectrum.
link |
01:01:47.640
On the political spectrum.
link |
01:01:48.480
So if you're reading about immigration, for example,
link |
01:01:52.720
it's very, very noticeable.
link |
01:01:55.560
And I think step one always,
link |
01:01:57.080
if you wanna not get manipulated is just to be able
link |
01:02:00.960
to recognize the techniques people use.
link |
01:02:02.960
So it's very helpful to just see how they spin things
link |
01:02:05.880
on the two sides.
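As a rough illustration of the slider idea, here is a minimal, hypothetical Python sketch. The Article fields, the score scale, and the reranking rule are assumptions for illustration, not the actual improvethenews.org code; the idea is just that once articles carry a bias score, the slider becomes a reranking knob the reader controls.

# A minimal, hypothetical sketch of how a bias slider could rerank stories,
# not the actual improvethenews.org implementation. It assumes each article
# already has a left/right score in [-1, 1] from some upstream classifier.
from dataclasses import dataclass
from typing import List

@dataclass
class Article:
    title: str
    left_right: float  # -1.0 = far left, +1.0 = far right (assumed scale)

def rerank(articles: List[Article], slider: float) -> List[Article]:
    """Put articles whose score is closest to the slider setting first."""
    return sorted(articles, key=lambda a: abs(a.left_right - slider))

articles = [
    Article("Immigration bill praised as overdue reform", left_right=-0.7),
    Article("Immigration bill slammed as open-borders giveaway", left_right=0.8),
    Article("What the immigration bill actually contains", left_right=0.0),
]

# Slide all the way to the left, then all the way to the right,
# and you see a very different ordering of the same topic.
for setting in (-1.0, 1.0):
    print([a.title for a in rerank(articles, setting)])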
link |
01:02:08.160
I think many people are under the misconception
link |
01:02:11.240
that the main problem is fake news.
link |
01:02:14.080
It's not.
link |
01:02:14.920
I had an amazing team of MIT students
link |
01:02:17.520
where we did an academic project to use machine learning
link |
01:02:20.360
to detect the main kinds of bias over the summer.
link |
01:02:23.080
And yes, of course, sometimes there's fake news
link |
01:02:25.640
where someone just claims something that's false, right?
link |
01:02:30.000
Like, oh, Hillary Clinton just got divorced or something.
link |
01:02:32.920
But what we see much more of is actually just omissions.
link |
01:02:37.800
There are some stories which just won't be
link |
01:02:41.920
mentioned by the left or the right, because they don't suit
link |
01:02:45.520
their agenda.
link |
01:02:46.360
And then they'll mention other ones very, very, very much.
link |
01:02:49.680
So for example, we've had a number of stories
link |
01:02:54.680
about the Trump family's financial dealings.
link |
01:02:59.600
And then there's been a bunch of stories
link |
01:03:01.560
about the Biden family's, Hunter Biden's financial dealings.
link |
01:03:05.280
Surprise, surprise, they don't get equal coverage
link |
01:03:07.520
on the left and the right.
link |
01:03:08.920
One side loves to cover the Biden, Hunter Biden's stuff,
link |
01:03:13.320
and one side loves to cover the Trump.
link |
01:03:15.360
You can never guess which is which, right?
link |
01:03:17.320
But the great news is if you're a normal American citizen
link |
01:03:21.560
and you dislike corruption in all its forms,
link |
01:03:24.960
then slide, slide, you can just look at both sides
link |
01:03:28.560
and you'll see all those political corruption stories.
link |
01:03:32.520
It's really liberating to just take in the both sides,
link |
01:03:37.520
the spin on both sides.
link |
01:03:39.440
It somehow unlocks your mind to think on your own,
link |
01:03:42.720
to realize that, I don't know, it's the same thing
link |
01:03:47.040
that was useful, right, in the Soviet Union times
link |
01:03:49.840
when everybody was much more aware
link |
01:03:54.360
that they were surrounded by propaganda, right?
link |
01:03:57.200
That is so interesting what you're saying, actually.
link |
01:04:00.600
So Noam Chomsky, who used to be our MIT colleague,
link |
01:04:04.000
once said that propaganda is to democracy
link |
01:04:07.640
what violence is to totalitarianism.
link |
01:04:11.960
And what he means by that is if you have
link |
01:04:15.080
a really totalitarian government,
link |
01:04:16.680
you don't need propaganda.
link |
01:04:19.680
People will do what you want them to do anyway,
link |
01:04:22.880
but out of fear, right?
link |
01:04:24.320
But otherwise, you need propaganda.
link |
01:04:28.080
So I would say actually that the propaganda
link |
01:04:29.880
is much higher quality in democracies,
link |
01:04:32.560
much more believable.
link |
01:04:34.240
And it's really, it's really striking.
link |
01:04:36.960
When I talk to colleagues, science colleagues
link |
01:04:39.520
like from Russia and China and so on,
link |
01:04:42.200
I notice they are actually much more aware
link |
01:04:45.120
of the propaganda in their own media
link |
01:04:47.200
than many of my American colleagues are
link |
01:04:48.840
about the propaganda in Western media.
link |
01:04:51.120
That's brilliant.
link |
01:04:51.960
That means the propaganda in the Western media
link |
01:04:53.880
is just better.
link |
01:04:54.720
Yes.
link |
01:04:55.560
That's so brilliant.
link |
01:04:56.400
Everything's better in the West, even the propaganda.
link |
01:04:58.200
But once you realize that,
link |
01:05:07.360
you realize there's also something very optimistic there
link |
01:05:09.280
that you can do about it, right?
link |
01:05:10.480
Because first of all, omissions,
link |
01:05:14.040
as long as there's no outright censorship,
link |
01:05:16.920
you can just look at both sides
link |
01:05:19.840
and pretty quickly piece together
link |
01:05:22.760
a much more accurate idea of what's actually going on, right?
link |
01:05:26.120
And develop a natural skepticism too.
link |
01:05:28.040
Yeah.
link |
01:05:28.880
Just an analytical scientific mind
link |
01:05:31.600
about the way you're taking the information.
link |
01:05:32.880
Yeah.
link |
01:05:33.720
And I think, I have to say,
link |
01:05:35.480
sometimes I feel that some of us in the academic bubble
link |
01:05:38.480
are too arrogant about this and somehow think,
link |
01:05:41.440
oh, it's just people who aren't as educated
link |
01:05:44.560
who get fooled.
link |
01:05:45.800
When we are often just as gullible also,
link |
01:05:48.240
we read only our media and don't see through things.
link |
01:05:52.080
Anyone who looks at both sides like this
link |
01:05:53.960
and compares a little will immediately start noticing
link |
01:05:56.320
the shenanigans being pulled.
link |
01:05:58.080
And I think what I tried to do with this app
link |
01:06:01.840
is push back on how big tech has to some extent
link |
01:06:05.760
tried to blame the individual for being manipulated,
link |
01:06:08.960
much like big tobacco tried to blame the individuals
link |
01:06:12.320
entirely for smoking.
link |
01:06:13.680
And then later on, our government stepped up and say,
link |
01:06:16.880
actually, you can't just blame little kids
link |
01:06:19.560
for starting to smoke.
link |
01:06:20.400
We have to have more responsible advertising
link |
01:06:22.400
and this and that.
link |
01:06:23.480
I think it's a bit the same here.
link |
01:06:24.600
It's very convenient for big tech to blame the individual and say,
link |
01:06:27.600
it's just people who are so dumb that they get fooled.
link |
01:06:32.160
The blame usually comes in saying,
link |
01:06:34.160
oh, it's just human psychology.
link |
01:06:36.000
People just wanna hear what they already believe.
link |
01:06:38.360
But professor David Rand at MIT actually partly debunked that
link |
01:06:43.160
with a really nice study showing that people
link |
01:06:45.280
tend to be interested in hearing things
link |
01:06:47.640
that go against what they believe,
link |
01:06:49.880
if it's presented in a respectful way.
link |
01:06:52.680
Suppose, for example, that you have a company
link |
01:06:57.560
and you're just about to launch this project
link |
01:06:59.120
and you're convinced it's gonna work.
link |
01:07:00.280
And someone says, you know, Lex,
link |
01:07:03.520
I hate to tell you this, but this is gonna fail.
link |
01:07:05.640
And here's why.
link |
01:07:06.640
Would you be like, shut up, I don't wanna hear it.
link |
01:07:08.920
La, la, la, la, la, la, la, la, la.
link |
01:07:10.640
Would you?
link |
01:07:11.480
You would be interested, right?
link |
01:07:13.000
And also if you're on an airplane,
link |
01:07:16.360
back in the pre COVID times,
link |
01:07:19.000
and the guy next to you
link |
01:07:20.240
is clearly from the opposite side of the political spectrum,
link |
01:07:24.160
but is very respectful and polite to you.
link |
01:07:26.720
Wouldn't you be kind of interested to hear a bit about
link |
01:07:28.840
how he or she thinks about things?
link |
01:07:31.960
Of course.
link |
01:07:32.800
But it's not so easy to find
link |
01:07:35.360
respectful disagreement now,
link |
01:07:36.760
because like, for example, if you are a Democrat
link |
01:07:40.440
and you're like, oh, I wanna see something
link |
01:07:41.920
on the other side,
link |
01:07:42.760
so you just go to Breitbart.com.
link |
01:07:45.080
And then after the first 10 seconds,
link |
01:07:46.960
you feel deeply insulted by something.
link |
01:07:49.400
And then it's not gonna work.
link |
01:07:52.480
Or if you take someone who votes Republican
link |
01:07:55.640
and they go to something on the left,
link |
01:07:57.400
then they just get very offended very quickly
link |
01:08:00.120
by them having put a deliberately ugly picture
link |
01:08:02.200
of Donald Trump on the front page or something.
link |
01:08:04.320
It doesn't really work.
link |
01:08:05.640
So this news aggregator also has this nuance slider,
link |
01:08:09.800
which you can pull to the right
link |
01:08:11.440
and then sort of make it easier to get exposed
link |
01:08:13.960
to actually more sort of academic style
link |
01:08:16.120
or more respectful,
link |
01:08:17.200
portrayals of different views.
link |
01:08:19.480
And finally, the one kind of bias
link |
01:08:22.080
I think people are mostly aware of is the left-right one,
link |
01:08:25.440
because it's so obvious,
link |
01:08:26.280
because both left and right are very powerful here, right?
link |
01:08:30.600
Both of them have well funded TV stations and newspapers,
link |
01:08:33.920
and it's kind of hard to miss.
link |
01:08:35.520
But there's another one, the establishment slider,
link |
01:08:39.000
which is also really fun.
link |
01:08:41.320
I love to play with it.
link |
01:08:42.840
And that's more about corruption.
link |
01:08:44.360
Yeah, yeah.
link |
01:08:45.200
I love that one. Yes.
link |
01:08:47.600
Because if you have a society
link |
01:08:53.240
where almost all the powerful entities
link |
01:08:57.320
want you to believe a certain thing,
link |
01:08:59.480
that's what you're gonna read in both the big media,
link |
01:09:01.840
mainstream media on the left and on the right, of course.
link |
01:09:04.640
And the powerful companies can push back very hard,
link |
01:09:08.200
like tobacco companies pushed back very hard
link |
01:09:10.160
back in the day when some newspapers
link |
01:09:12.160
started writing articles about tobacco being dangerous,
link |
01:09:15.400
so that it was hard to get a lot of coverage
link |
01:09:17.000
about it initially.
link |
01:09:18.480
And also if you look geopolitically, right,
link |
01:09:20.880
of course, in any country, when you read their media,
link |
01:09:23.120
you're mainly gonna be reading a lot of articles
link |
01:09:24.880
about how our country is the good guy
link |
01:09:27.360
and the other countries are the bad guys, right?
link |
01:09:30.400
So if you wanna have a really more nuanced understanding,
link |
01:09:33.360
like the Germans and the British
link |
01:09:37.040
used to be told that the French were the bad guys
link |
01:09:38.840
and the French used to be told
link |
01:09:39.880
that the British were the bad guys.
link |
01:09:41.880
Now they visit each other's countries a lot
link |
01:09:45.680
and have a much more nuanced understanding.
link |
01:09:47.360
I don't think there's gonna be any more wars
link |
01:09:48.840
between France and Germany.
link |
01:09:50.120
But on the geopolitical scale,
link |
01:09:53.000
there's just as much of it as ever, you know,
link |
01:09:54.520
big Cold War, now US, China, and so on.
link |
01:09:57.600
And if you wanna get a more nuanced understanding
link |
01:10:01.200
of what's happening geopolitically,
link |
01:10:03.520
then it's really fun to look at this establishment slider
link |
01:10:05.960
because it turns out there are tons of little newspapers,
link |
01:10:09.360
both on the left and on the right,
link |
01:10:11.360
who sometimes challenge establishment and say,
link |
01:10:14.480
you know, maybe we shouldn't actually invade Iraq right now.
link |
01:10:17.800
Maybe this weapons of mass destruction thing is BS.
link |
01:10:20.400
If you look at the journalism research afterwards,
link |
01:10:23.680
you can actually see that quite clearly.
link |
01:10:25.360
Both CNN and Fox were very pro.
link |
01:10:29.200
Let's get rid of Saddam.
link |
01:10:30.640
There are weapons of mass destruction.
link |
01:10:32.560
Then there were a lot of smaller newspapers.
link |
01:10:34.680
They were like, wait a minute,
link |
01:10:36.200
this evidence seems a bit sketchy and maybe we...
link |
01:10:40.240
But of course they were so hard to find.
link |
01:10:42.240
Most people didn't even know they existed, right?
link |
01:10:44.560
Yet it would have been better for American national security
link |
01:10:47.400
if those voices had also come up.
link |
01:10:50.160
I think it harmed America's national security actually
link |
01:10:52.560
that we invaded Iraq.
link |
01:10:53.800
And arguably there's a lot more interest
link |
01:10:55.560
in that kind of thinking too, from those small sources.
link |
01:11:00.480
So like when you say big,
link |
01:11:02.600
it's more about kind of the reach of the broadcast,
link |
01:11:07.600
but it's not big in terms of the interest.
link |
01:11:12.040
I think there's a lot of interest
link |
01:11:14.120
in that kind of anti establishment
link |
01:11:16.200
or like skepticism towards, you know,
link |
01:11:18.840
out of the box thinking.
link |
01:11:20.360
There's a lot of interest in that kind of thing.
link |
01:11:22.000
Do you see this news project or something like it
link |
01:11:26.920
being basically taken over the world
link |
01:11:30.600
as the main way we consume information?
link |
01:11:32.920
Like how do we get there?
link |
01:11:35.120
Like how do we, you know?
link |
01:11:37.320
So, okay, the idea is brilliant.
link |
01:11:39.000
It's a, you're calling it your little project in 2020,
link |
01:11:44.000
but how does that become the new way we consume information?
link |
01:11:48.480
I hope, first of all, just to plant a little seed there
link |
01:11:51.000
because normally the big barrier of doing anything in media
link |
01:11:55.920
is you need a ton of money, but this costs no money at all.
link |
01:11:59.280
I've just been paying myself.
link |
01:12:00.640
You pay a tiny amount of money each month to Amazon
link |
01:12:03.080
to run the thing in their cloud.
link |
01:12:04.640
We're not, there will never be any ads.
link |
01:12:06.920
The point is not to make any money off of it.
link |
01:12:09.360
And we just train machine learning algorithms
link |
01:12:11.640
to classify the articles and stuff.
link |
01:12:13.160
So it just kind of runs by itself.
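For a sense of what training machine learning algorithms to classify the articles can mean in practice, here is a minimal, hypothetical sketch using scikit-learn. The tiny training set, the labels, and the model choice are placeholders for illustration only, not the project's actual pipeline.

# A minimal, hypothetical sketch of the kind of article classifier such a
# pipeline might use, not the actual improvethenews.org model. The tiny
# training set and bias labels here are placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Senator praises new border enforcement bill",
    "Activists decry crackdown on asylum seekers",
    "Markets rally as regulators ease banking rules",
    "Union organizers protest corporate tax breaks",
]
train_labels = ["right", "left", "right", "left"]  # placeholder bias labels

# Bag-of-words features plus a linear classifier: cheap enough to run
# automatically over every incoming headline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["Lawmakers debate immigration reform package"]))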
link |
01:12:14.840
So if it actually gets good enough at some point
link |
01:12:17.760
that it starts catching on, it could scale.
link |
01:12:20.720
And if other people carbon copy it
link |
01:12:23.120
and make other versions that are better,
link |
01:12:24.960
that's the more the merrier.
link |
01:12:28.200
I think there's a real opportunity for machine learning
link |
01:12:32.920
to empower the individual against the powerful players.
link |
01:12:39.880
As I said in the beginning here, it's
link |
01:12:41.600
been mostly the other way around so far,
link |
01:12:43.280
that the big players have the AI and then they tell people,
link |
01:12:46.960
this is the truth, this is how it is.
link |
01:12:49.600
But it can just as well go the other way around.
link |
01:12:52.200
And when the internet was born, actually, a lot of people
link |
01:12:54.320
had this hope that maybe this will be
link |
01:12:56.280
a great thing for democracy, make it easier
link |
01:12:58.120
to find out about things.
link |
01:12:59.480
And maybe machine learning and things like this
link |
01:13:02.320
can actually help again.
link |
01:13:03.720
And I have to say, I think it's more important than ever now
link |
01:13:07.080
because this is very linked also to the whole future of life
link |
01:13:12.160
as we discussed earlier.
link |
01:13:13.920
We're getting this ever more powerful tech.
link |
01:13:17.280
Frankly, it's pretty clear if you look
link |
01:13:19.040
on the one, two, three generation timescale
link |
01:13:21.920
that there are only two ways this can end geopolitically.
link |
01:13:24.880
Either it ends great for all humanity
link |
01:13:27.640
or it ends terribly for all of us.
link |
01:13:31.640
There's really no in between.
link |
01:13:33.680
And we're so stuck in that because technology
link |
01:13:37.560
knows no borders.
link |
01:13:39.080
And you can't have people fighting
link |
01:13:42.200
when the weapons just keep getting ever more
link |
01:13:44.480
powerful indefinitely.
link |
01:13:47.040
Eventually, the luck runs out.
link |
01:13:50.280
And right now we have, I love America,
link |
01:13:55.480
but the fact of the matter is what's good for America
link |
01:13:59.840
is not opposite in the long term to what's
link |
01:14:02.000
good for other countries.
link |
01:14:04.600
It would be if this was some sort of zero sum game
link |
01:14:07.400
like it was thousands of years ago when the only way one
link |
01:14:10.960
country could get more resources was
link |
01:14:13.440
to take land from other countries
link |
01:14:14.960
because that was basically the resource.
link |
01:14:17.640
Look at the map of Europe.
link |
01:14:18.920
Some countries kept getting bigger and smaller,
link |
01:14:21.400
endless wars.
link |
01:14:23.280
But then since 1945, there hasn't been any war
link |
01:14:26.400
in Western Europe.
link |
01:14:27.160
And they all got way richer because of tech.
link |
01:14:29.920
So the optimistic outcome is that the big winner
link |
01:14:34.760
in this century is going to be America and China and Russia
link |
01:14:38.200
and everybody else because technology just makes
link |
01:14:40.200
us all healthier and wealthier.
link |
01:14:41.760
And we just find some way of keeping the peace
link |
01:14:44.680
on this planet.
link |
01:14:46.640
But I think, unfortunately, there
link |
01:14:48.760
are some pretty powerful forces right now
link |
01:14:50.440
that are pushing in exactly the opposite direction
link |
01:14:52.560
and trying to demonize other countries, which just makes
link |
01:14:55.920
it more likely that this ever more powerful tech we're
link |
01:14:58.360
building is going to be used in disastrous ways.
link |
01:15:02.200
Yeah, for aggression versus cooperation,
link |
01:15:04.400
that kind of thing.
link |
01:15:05.200
Yeah, even look at just military AI now.
link |
01:15:09.560
It was so awesome to see these dancing robots.
link |
01:15:12.160
I loved it.
link |
01:15:14.000
But one of the biggest growth areas in robotics
link |
01:15:17.080
now is, of course, autonomous weapons.
link |
01:15:19.480
And 2020 was like the best marketing year
link |
01:15:23.200
ever for autonomous weapons.
link |
01:15:24.400
Because in both Libya, it's a civil war,
link |
01:15:27.520
and in Nagorno Karabakh, they made the decisive difference.
link |
01:15:34.440
And everybody else is watching this.
link |
01:15:36.280
Oh, yeah, we want to build autonomous weapons, too.
link |
01:15:38.920
In Libya, you had, on one hand, our ally,
link |
01:15:45.080
the United Arab Emirates that were flying
link |
01:15:47.080
their autonomous weapons that they bought from China,
link |
01:15:50.640
bombing Libyans.
link |
01:15:51.880
And on the other side, you had our other ally, Turkey,
link |
01:15:54.280
flying their drones.
link |
01:15:57.200
And they had no skin in the game,
link |
01:16:00.480
any of these other countries.
link |
01:16:01.680
And of course, it was the Libyans who really got screwed.
link |
01:16:04.160
In Nagorno Karabakh, you had actually, again,
link |
01:16:09.280
Turkey is sending drones built by this company that
link |
01:16:12.400
was actually founded by a guy who went to MIT AeroAstro.
link |
01:16:17.080
Do you know that?
link |
01:16:17.800
No.
link |
01:16:18.280
Bayraktar.
link |
01:16:18.960
Yeah.
link |
01:16:19.520
So MIT has a direct responsibility
link |
01:16:21.480
ultimately, for this.
link |
01:16:22.680
And a lot of civilians were killed there.
link |
01:16:25.680
So because it was militarily so effective,
link |
01:16:29.640
now suddenly there's a huge push.
link |
01:16:31.240
Oh, yeah, yeah, let's go build ever more autonomy
link |
01:16:35.680
into these weapons, and it's going to be great.
link |
01:16:39.440
And I think, actually, people who
link |
01:16:44.640
are obsessed about some sort of future Terminator scenario
link |
01:16:47.760
right now should start focusing on the fact
link |
01:16:51.640
that we have two much more urgent threats happening
link |
01:16:54.000
from machine learning.
link |
01:16:54.960
One of them is the whole destruction of democracy
link |
01:16:57.880
that we've talked about now, where
link |
01:17:01.600
our flow of information is being manipulated
link |
01:17:03.560
by machine learning.
link |
01:17:04.400
And the other one is that right now,
link |
01:17:06.960
this is the year when the big and out of control
link |
01:17:10.440
arms race in lethal autonomous weapons is going to start,
link |
01:17:12.800
or it's going to stop.
link |
01:17:14.640
So you have a sense that there is like 2020
link |
01:17:18.480
was an instrumental catalyst for the autonomous weapons race.
link |
01:17:24.280
Yeah, because it was the first year when they proved
link |
01:17:26.560
decisive in the battlefield.
link |
01:17:28.360
And these ones are still not fully autonomous, mostly.
link |
01:17:31.400
They're remote controlled, right?
link |
01:17:32.640
But we could very quickly make things
link |
01:17:38.720
about the size and cost of a smartphone, which you just put
link |
01:17:43.280
in the GPS coordinates or the face of the one
link |
01:17:45.160
you want to kill, a skin color or whatever,
link |
01:17:47.000
and it flies away and does it.
link |
01:17:48.480
And the real good reason why the US and all
link |
01:17:53.920
the other superpowers should put the kibosh on this
link |
01:17:57.040
is the same reason we decided to put the kibosh on bioweapons.
link |
01:18:01.680
So we gave the Future of Life Award
link |
01:18:05.000
that we can talk more about later to Matthew Meselson
link |
01:18:07.200
from Harvard for convincing
link |
01:18:08.680
Nixon to ban bioweapons.
link |
01:18:10.320
And I asked him, how did you do it?
link |
01:18:13.600
And he was like, well, I just said, look,
link |
01:18:16.560
we don't want there to be a $500 weapon of mass destruction
link |
01:18:20.520
that all our enemies can afford, even nonstate actors.
link |
01:18:26.560
And Nixon was like, good point.
link |
01:18:32.120
It's in America's interest that the powerful weapons are all
link |
01:18:34.520
really expensive, so only we can afford them,
link |
01:18:37.600
or maybe some more stable adversaries, right?
link |
01:18:41.080
Nuclear weapons are like that.
link |
01:18:42.960
But bioweapons were not like that.
link |
01:18:44.920
That's why we banned them.
link |
01:18:46.400
And that's why you never hear about them now.
link |
01:18:48.400
That's why we love biology.
link |
01:18:50.280
So you have a sense that it's possible for the big power
link |
01:18:55.440
houses in terms of the big nations in the world
link |
01:18:58.480
to agree that autonomous weapons is not a race we want to be on,
link |
01:19:02.360
that it doesn't end well.
link |
01:19:03.680
Yeah, because we know it's just going
link |
01:19:05.320
to end in mass proliferation.
link |
01:19:06.560
And every terrorist everywhere is
link |
01:19:08.560
going to have these super cheap weapons
link |
01:19:10.280
that they will use against us.
link |
01:19:13.440
And our politicians have to constantly worry
link |
01:19:15.960
about being assassinated every time they go outdoors
link |
01:19:18.240
by some anonymous little mini drone.
link |
01:19:21.040
We don't want that.
link |
01:19:21.840
And even if the US and China and everyone else
link |
01:19:25.920
could just agree that you can only
link |
01:19:27.840
build these weapons if they cost at least $10 million,
link |
01:19:31.560
that would be a huge win for the superpowers
link |
01:19:34.760
and, frankly, for everybody.
link |
01:19:38.800
And people often push back and say, well, it's
link |
01:19:41.000
so hard to prevent cheating.
link |
01:19:43.200
But hey, you could say the same about bioweapons.
link |
01:19:45.800
Take any of your MIT colleagues in biology.
link |
01:19:49.360
Of course, they could build some nasty bioweapon
link |
01:19:52.000
if they really wanted to.
link |
01:19:53.560
But first of all, they don't want to
link |
01:19:55.280
because they think it's disgusting because of the stigma.
link |
01:19:57.640
And second, even if there's some sort of nutcase and want to,
link |
01:20:02.120
it's very likely that some of their grad students
link |
01:20:04.160
or someone would rat them out because everyone else thinks
link |
01:20:06.560
it's so disgusting.
link |
01:20:08.000
And in fact, we now know there was even a fair bit of cheating
link |
01:20:11.480
on the bioweapons ban.
link |
01:20:13.480
But no countries used them because it was so stigmatized
link |
01:20:17.520
that it just wasn't worth revealing that they had cheated.
link |
01:20:22.400
You talk about drones, but you kind of
link |
01:20:24.840
think of drones as a remote operation.
link |
01:20:28.960
Which they are, mostly, still.
link |
01:20:30.680
But you're not taking the next intellectual step
link |
01:20:34.600
of where does this go.
link |
01:20:36.320
You're kind of saying the problem with drones
link |
01:20:38.760
is that you're removing yourself from direct violence.
link |
01:20:42.400
Therefore, you're not able to sort of maintain
link |
01:20:44.920
the common humanity required to make
link |
01:20:46.720
the proper decisions strategically.
link |
01:20:48.720
But that's the criticism as opposed to like,
link |
01:20:51.360
if this is automated, and just exactly as you said,
link |
01:20:55.520
if you automate it and there's a race,
link |
01:20:58.640
then the technology's gonna get better and better and better
link |
01:21:01.280
which means getting cheaper and cheaper and cheaper.
link |
01:21:03.720
And unlike, perhaps, nuclear weapons
link |
01:21:06.080
which are connected to resources in a way,
link |
01:21:10.240
like it's hard to engineer, yeah.
link |
01:21:13.760
It feels like there's too much overlap
link |
01:21:17.600
between the tech industry and autonomous weapons
link |
01:21:20.400
to where you could have smartphone type of cheapness.
link |
01:21:24.400
If you look at drones, for $1,000,
link |
01:21:29.280
you can have an incredible system
link |
01:21:30.800
that's able to maintain flight autonomously for you
link |
01:21:34.600
and take pictures and stuff.
link |
01:21:36.240
You could see that going into the autonomous weapons space
link |
01:21:39.440
that's, but why is that not thought about
link |
01:21:43.240
or discussed enough in the public, do you think?
link |
01:21:45.640
You see those dancing Boston Dynamics robots
link |
01:21:48.960
and everybody has this kind of,
link |
01:21:52.600
as if this is like a far future.
link |
01:21:55.360
They have this fear like, oh, this'll be Terminator
link |
01:21:58.640
in like some, I don't know, unspecified 20, 30, 40 years.
link |
01:22:03.080
And they don't think about, well, this is like
link |
01:22:05.640
some much less dramatic version of that
link |
01:22:09.120
is actually happening now.
link |
01:22:11.160
It's not gonna be legged, it's not gonna be dancing,
link |
01:22:14.840
but it already has the capability
link |
01:22:17.160
to use artificial intelligence to kill humans.
link |
01:22:20.240
Yeah, the Boston Dynamics legged robots,
link |
01:22:22.880
I think the reason we imagine them holding guns
link |
01:22:24.960
is just because you've all seen Arnold Schwarzenegger, right?
link |
01:22:28.440
That's our reference point.
link |
01:22:30.600
That's pretty useless.
link |
01:22:32.680
That's not gonna be the main military use of them.
link |
01:22:35.360
They might be useful in law enforcement in the future
link |
01:22:38.720
and then there's a whole debate about,
link |
01:22:40.280
do you want robots showing up at your house with guns
link |
01:22:42.640
robots that will be perfectly obedient
link |
01:22:45.440
to whatever dictator controls them?
link |
01:22:47.560
But let's leave that aside for a moment
link |
01:22:49.240
and look at what's actually relevant now.
link |
01:22:51.320
So there's a spectrum of things you can do
link |
01:22:54.760
with AI in the military.
link |
01:22:55.760
And again, to put my cards on the table,
link |
01:22:57.560
I'm not a pacifist, I think we should have good defense.
link |
01:23:03.480
So for example, a predator drone is basically
link |
01:23:08.480
a fancy little remote controlled airplane, right?
link |
01:23:11.720
There's a human piloting it and the decision ultimately
link |
01:23:16.040
about whether to kill somebody with it
link |
01:23:17.280
is made by a human still.
link |
01:23:19.400
And this is a line I think we should never cross.
link |
01:23:23.880
There's a current DOD policy.
link |
01:23:25.880
Again, you have to have a human in the loop.
link |
01:23:27.920
I think algorithms should never make life
link |
01:23:30.680
or death decisions, they should be left to humans.
link |
01:23:34.120
Now, why might we cross that line?
link |
01:23:37.720
Well, first of all, these are expensive, right?
link |
01:23:40.520
So for example, when Azerbaijan had all these drones
link |
01:23:46.560
and Armenia didn't have any, they started trying
link |
01:23:48.280
to jerry-rig little cheap things to fly around.
link |
01:23:51.760
But then of course, the Armenians would jam them
link |
01:23:54.040
or the Azeris would jam them.
link |
01:23:55.600
And remote control things can be jammed,
link |
01:23:58.320
that makes them inferior.
link |
01:24:00.040
Also, there's a bit of a time delay,
link |
01:24:02.960
if we're piloting something from far away,
link |
01:24:05.400
speed of light, and the human has a reaction time as well,
link |
01:24:08.640
it would be nice to eliminate that jamming possibility
link |
01:24:11.560
and the time delay by having it fully autonomous.
link |
01:24:14.320
So then if you do that,
link |
01:24:17.080
now you might be crossing that exact line.
link |
01:24:19.400
You might program it to just, oh yeah, the air drone,
link |
01:24:22.360
go hover over this country for a while
link |
01:24:25.280
and whenever you find someone who is a bad guy,
link |
01:24:28.760
kill them.
link |
01:24:30.960
Now the machine is making these sort of decisions
link |
01:24:33.480
and some people who defend this still say,
link |
01:24:36.120
well, that's morally fine because we are the good guys
link |
01:24:39.960
and we will tell it the definition of bad guy
link |
01:24:43.000
that we think is moral.
link |
01:24:45.640
But now it would be very naive to think
link |
01:24:48.720
that if ISIS buys that same drone,
link |
01:24:51.480
that they're gonna use our definition of bad guy.
link |
01:24:54.040
Maybe for them, bad guy is someone wearing
link |
01:24:55.840
a US army uniform or maybe there will be some,
link |
01:25:00.840
weird ethnic group who decides that someone
link |
01:25:04.680
of another ethnic group, they are the bad guys, right?
link |
01:25:07.160
The thing is human soldiers with all our faults,
link |
01:25:11.320
we still have some basic wiring in us.
link |
01:25:14.080
Like, no, it's not okay to kill kids and civilians.
link |
01:25:20.040
And an autonomous weapon has none of that.
link |
01:25:21.760
It's just gonna do whatever is programmed.
link |
01:25:23.600
It's like the perfect Adolf Eichmann on steroids.
link |
01:25:27.720
Like they told him, Adolf Eichmann, you know,
link |
01:25:30.840
they wanted him to do this and this and this
link |
01:25:32.240
to make the Holocaust more efficient.
link |
01:25:33.680
And he was like, yeah, and off he went and did it, right?
link |
01:25:37.840
Do we really wanna make machines that are like that,
link |
01:25:41.120
like completely amoral, that will take the user's definition
link |
01:25:44.240
of who is the bad guy?
link |
01:25:45.720
And do we then wanna make them so cheap
link |
01:25:47.920
that all our adversaries can have them?
link |
01:25:49.640
Like what could possibly go wrong?
link |
01:25:52.720
That's, I think, the crux of the whole thing,
link |
01:25:56.720
the big argument for why we wanna
link |
01:26:00.200
really put the kibosh on this this year.
link |
01:26:03.520
And I think you can tell there's a lot
link |
01:26:06.360
of very active debate even going on within the US military
link |
01:26:10.120
and undoubtedly in other militaries around the world also
link |
01:26:13.080
about whether we should have some sort
link |
01:26:14.200
of international agreement to at least require
link |
01:26:16.760
that these weapons have to be above a certain size
link |
01:26:20.640
and cost, you know, so that things just don't totally spiral
link |
01:26:27.320
out of control.
link |
01:26:29.800
And finally, just for your question,
link |
01:26:31.600
but is it possible to stop it?
link |
01:26:33.560
Because some people tell me, oh, just give up, you know.
link |
01:26:37.000
But again, Matthew Meselson from Harvard, right,
link |
01:26:41.560
the bioweapons hero, he faced exactly this criticism
link |
01:26:46.640
also with bioweapons.
link |
01:26:47.760
People were like, how can you check for sure
link |
01:26:49.920
that the Russians aren't cheating?
link |
01:26:52.960
And he told me this, I think, really ingenious insight.
link |
01:26:58.560
He said, you know, Max, some people
link |
01:27:01.200
think you have to have inspections and things
link |
01:27:03.640
and you have to make sure that you can catch the cheaters
link |
01:27:06.760
with 100% chance.
link |
01:27:08.960
You don't need 100%, he said.
link |
01:27:10.800
1% is usually enough.
link |
01:27:14.080
Because if it's another big state,
link |
01:27:19.240
suppose China and the US have signed the treaty drawing
link |
01:27:23.480
a certain line and saying, yeah, these kinds of drones are OK,
link |
01:27:26.200
but these fully autonomous ones are not.
link |
01:27:28.800
Now suppose you are China and you have cheated and secretly
link |
01:27:34.400
developed some clandestine little thing
link |
01:27:36.000
or you're thinking about doing it.
link |
01:27:37.560
What's the calculation that you do?
link |
01:27:39.200
Well, you're like, OK, what's the probability
link |
01:27:41.880
that we're going to get caught?
link |
01:27:44.920
If the probability is 100%, of course, we're not going to do it.
link |
01:27:49.120
But if the probability is 5% that we're going to get caught,
link |
01:27:52.720
then it's going to be like a huge embarrassment for us.
link |
01:27:55.560
And we still have our nuclear weapons anyway,
link |
01:28:00.120
so it doesn't really make an enormous difference in terms
link |
01:28:05.160
of deterring the US.
link |
01:28:07.520
And that feeds the stigma that you kind of established,
link |
01:28:11.640
like this fabric, this universal stigma over the thing.
link |
01:28:14.720
Exactly.
link |
01:28:15.520
It's very reasonable for them to say, well, we'll probably
link |
01:28:18.080
get away with it.
link |
01:28:18.800
If we don't, then the US will know we cheated,
link |
01:28:21.320
and then they're going to go full tilt with their program
link |
01:28:23.720
and say, look, the Chinese are cheaters,
link |
01:28:25.000
and now we have all these weapons against us,
link |
01:28:27.080
and that's bad.
link |
01:28:27.920
So the stigma alone is very, very powerful.
link |
01:28:32.160
And again, look what happened with bioweapons.
link |
01:28:34.520
It's been 50 years now.
link |
01:28:36.880
When was the last time you read about a bioterrorism attack?
link |
01:28:40.120
The only deaths I really know about with bioweapons
link |
01:28:42.680
that have happened are when we Americans managed
link |
01:28:45.200
to kill some of our own with anthrax,
link |
01:28:47.200
or the idiot who sent them to Tom Daschle and others
link |
01:28:49.760
in letters, right?
link |
01:28:50.880
And similarly in Sverdlovsk in the Soviet Union,
link |
01:28:55.960
they had some anthrax in some lab there.
link |
01:28:57.960
Maybe they were cheating or who knows,
link |
01:29:00.000
and it leaked out and killed a bunch of Russians.
link |
01:29:02.520
I'd say that's a pretty good success, right?
link |
01:29:04.480
50 years, just two own goals by the superpowers,
link |
01:29:08.360
and then nothing.
link |
01:29:09.560
And that's why whenever I ask anyone
link |
01:29:12.120
what they think about biology, they think it's great.
link |
01:29:15.160
They associate it with new cures for diseases,
link |
01:29:18.080
maybe a good vaccine.
link |
01:29:19.720
This is how I want to think about AI in the future.
link |
01:29:22.680
And I want others to think about AI too,
link |
01:29:24.840
as a source of all these great solutions to our problems,
link |
01:29:27.840
not as, oh, AI, oh yeah, that's the reason
link |
01:29:31.920
I feel scared going outside these days.
link |
01:29:34.600
Yeah, it's kind of brilliant that bioweapons
link |
01:29:37.920
and nuclear weapons, we've figured out,
link |
01:29:40.760
I mean, of course there's still a huge source of danger,
link |
01:29:43.320
but we figured out some way of creating rules
link |
01:29:47.760
and social stigma over these weapons
link |
01:29:51.440
that then creates a stability,
link |
01:29:54.600
whatever that game theoretic stability is that occurs.
link |
01:29:57.640
And we don't have that with AI,
link |
01:29:59.200
and you're kind of screaming from the top of the mountain
link |
01:30:03.760
about this, that we need to find that
link |
01:30:05.520
because, as you point out,
link |
01:30:10.520
the Future of Life Institute Awards have highlighted
link |
01:30:15.000
that with nuclear weapons,
link |
01:30:17.920
we could have destroyed ourselves quite a few times.
link |
01:30:21.040
And it's a learning experience that is very costly.
link |
01:30:28.520
We gave this Future of Life Award,
link |
01:30:30.960
we gave it the first time to this guy, Vasily Arkhipov.
link |
01:30:34.640
Most people haven't even heard of him.
link |
01:30:37.480
Yeah, can you say who he is?
link |
01:30:38.640
Vasily Arkhipov, he has, in my opinion,
link |
01:30:44.080
made the greatest positive contribution to humanity
link |
01:30:47.480
of any human in modern history.
link |
01:30:50.200
And maybe it sounds like hyperbole here,
link |
01:30:51.880
like I'm just over the top,
link |
01:30:53.320
but let me tell you the story and I think maybe you'll agree.
link |
01:30:56.080
So during the Cuban Missile Crisis,
link |
01:31:00.000
we Americans first didn't know
link |
01:31:01.800
that the Russians had sent four submarines,
link |
01:31:05.160
but we caught two of them.
link |
01:31:06.720
And we didn't know that,
link |
01:31:09.160
so we dropped practice depth charges
link |
01:31:11.040
on the one that he was on,
link |
01:31:12.360
trying to force it to the surface.
link |
01:31:15.440
But we didn't know that this submarine
link |
01:31:17.680
actually was a nuclear submarine with a nuclear torpedo.
link |
01:31:20.560
We also didn't know that they had authorization
link |
01:31:22.640
to launch it without clearance from Moscow.
link |
01:31:25.120
And we also didn't know
link |
01:31:26.040
that they were running out of electricity.
link |
01:31:28.240
Their batteries were almost dead.
link |
01:31:29.880
They were running out of oxygen.
link |
01:31:31.840
Sailors were fainting left and right.
link |
01:31:34.280
The temperature was about 110, 120 Fahrenheit on board.
link |
01:31:39.120
The conditions were really hellish,
link |
01:31:40.920
really just a kind of doomsday.
link |
01:31:43.240
And at that point,
link |
01:31:44.520
these giant explosions start happening
link |
01:31:46.280
from the Americans dropping these depth charges.
link |
01:31:48.160
The captain thought World War III had begun.
link |
01:31:50.680
They decided they were gonna launch the nuclear torpedo.
link |
01:31:53.720
And one of them shouted,
link |
01:31:55.360
we're all gonna die,
link |
01:31:56.200
but we're not gonna disgrace our Navy.
link |
01:31:58.920
We don't know what would have happened
link |
01:32:00.120
if there had been a giant mushroom cloud all of a sudden
link |
01:32:03.400
against the Americans.
link |
01:32:04.760
But since everybody had their hands on the triggers,
link |
01:32:09.200
you don't have to be too creative to think
link |
01:32:10.800
that it could have led to an all out nuclear war,
link |
01:32:13.080
in which case we wouldn't be having this conversation now.
link |
01:32:15.680
What actually took place was
link |
01:32:17.600
they needed three people to approve this.
link |
01:32:21.040
The captain had said yes.
link |
01:32:22.200
There was the Communist Party political officer.
link |
01:32:24.120
He also said, yes, let's do it.
link |
01:32:26.000
And the third man was this guy, Vasily Arkhipov,
link |
01:32:29.040
who said, no.
link |
01:32:29.880
For some reason, he was just more chill than the others
link |
01:32:32.720
and he was the right man at the right time.
link |
01:32:34.240
I don't want us as a species to rely on the right person
link |
01:32:38.120
being there at the right time, you know.
link |
01:32:40.720
We tracked down his family
link |
01:32:42.920
living in relative poverty outside Moscow.
link |
01:32:47.320
He had passed away,
link |
01:32:48.800
so we flew the family to London.
link |
01:32:52.720
They had never been to the West even.
link |
01:32:54.000
It was incredibly moving to get to honor them for this.
link |
01:32:57.160
We gave them a medal.
link |
01:32:59.320
The next year, we gave this Future of Life Award
link |
01:33:01.800
to Stanislav Petrov.
link |
01:33:04.160
Have you heard of him?
link |
01:33:05.000
Yes.
link |
01:33:05.840
So he was in charge of the Soviet early warning station,
link |
01:33:10.000
which was built with Soviet technology
link |
01:33:12.880
and honestly not that reliable.
link |
01:33:14.760
It said that there were five US missiles coming in.
link |
01:33:18.280
Again, if they had launched at that point,
link |
01:33:21.440
we probably wouldn't be having this conversation.
link |
01:33:23.440
He decided based on just mainly gut instinct
link |
01:33:29.440
to just not escalate this.
link |
01:33:32.640
And I'm very glad he wasn't replaced by an AI
link |
01:33:35.200
that was just automatically following orders.
link |
01:33:37.600
And then we gave the third one to Matthew Meselson.
link |
01:33:39.840
Last year, we gave this award to these guys
link |
01:33:44.360
who actually used technology for good,
link |
01:33:46.760
not for avoiding something bad, but for doing something good.
link |
01:33:50.120
The guys who eliminated this disease,
link |
01:33:52.120
which was way worse than COVID and had killed
link |
01:33:55.440
half a billion people in its final century.
link |
01:33:58.520
Smallpox, right?
link |
01:33:59.440
So you mentioned it earlier.
link |
01:34:01.200
COVID on average kills less than 1% of people who get it.
link |
01:34:05.320
Smallpox, about 30%.
link |
01:34:08.240
And ultimately, Viktor Zhdanov and Bill Foege,
link |
01:34:14.160
most of my colleagues have never heard of either of them,
link |
01:34:17.560
one American, one Russian, did this amazing effort:
link |
01:34:22.080
not only was Zhdanov able to get the US and the Soviet Union
link |
01:34:25.200
to team up against smallpox during the Cold War,
link |
01:34:27.920
but Bill Foege came up with this ingenious strategy
link |
01:34:30.320
for making it actually go all the way
link |
01:34:32.840
to defeat the disease without funding
link |
01:34:36.560
for vaccinating everyone.
link |
01:34:37.600
And as a result,
link |
01:34:40.040
we went from 15 million deaths from smallpox the year
link |
01:34:42.680
I was born.
link |
01:34:44.280
So what do we have with COVID now?
link |
01:34:45.640
A little bit short of 2 million, right?
link |
01:34:47.240
Yes.
link |
01:34:48.120
To zero deaths, of course, this year and forever.
link |
01:34:52.040
There have been 200 million people,
link |
01:34:53.960
we estimate, who would have died since then from smallpox
link |
01:34:57.200
had it not been for this.
link |
01:34:58.080
So isn't science awesome when you use it for good?
link |
01:35:02.160
The reason we wanna celebrate these sort of people
link |
01:35:04.280
is to remind us of this.
link |
01:35:05.680
Science is so awesome when you use it for good.
link |
01:35:10.160
And those awards actually, the variety there,
link |
01:35:13.520
it's a very interesting picture.
link |
01:35:14.920
So the first two are looking at,
link |
01:35:19.360
it's kind of exciting to think that these average humans
link |
01:35:22.680
in some sense, they're products of billions
link |
01:35:26.200
of other humans that came before them, evolution,
link |
01:35:30.200
and some little thing, you said gut,
link |
01:35:33.360
but there's something in there
link |
01:35:35.320
that stopped the annihilation of the human race.
link |
01:35:41.080
And that's a magical thing,
link |
01:35:43.040
but that's like this deeply human thing.
link |
01:35:45.240
And then there's the other aspect
link |
01:35:47.400
where that's also very human,
link |
01:35:49.800
which is to build solutions
link |
01:35:51.440
to the existential crises that we're facing,
link |
01:35:55.240
like to build it, to take the responsibility
link |
01:35:57.520
and to come up with different technologies and so on.
link |
01:36:00.600
And both of those are deeply human,
link |
01:36:04.080
the gut and the mind, whatever that is that creates.
link |
01:36:07.400
The best is when they work together.
link |
01:36:08.640
Arkhipov, I wish I could have met him, of course,
link |
01:36:11.400
but he had passed away.
link |
01:36:13.200
He was really a fantastic military officer,
link |
01:36:16.720
combining all the best traits
link |
01:36:18.680
that we in America admire in our military.
link |
01:36:21.000
Because first of all, he was very loyal, of course.
link |
01:36:23.160
He never even told anyone about this during his whole life,
link |
01:36:26.280
even though you think he had some bragging rights, right?
link |
01:36:28.440
But he just was like, this is just business,
link |
01:36:30.000
just doing my job.
link |
01:36:31.560
It only came out later after his death.
link |
01:36:34.320
And second, the reason he did the right thing
link |
01:36:37.120
was not because he was some sort of liberal
link |
01:36:39.240
or some sort of, not because he was just,
link |
01:36:43.960
oh, peace and love.
link |
01:36:47.360
It was partly because he had been the captain
link |
01:36:49.800
on another submarine that had a nuclear reactor meltdown.
link |
01:36:53.080
And it was his heroism that helped contain this.
link |
01:36:58.000
That's why he died of cancer later also.
link |
01:36:59.760
But he had seen many of his crew members die.
link |
01:37:01.480
And I think for him, that gave him this gut feeling
link |
01:37:04.160
that if there's a nuclear war
link |
01:37:06.200
between the US and the Soviet Union,
link |
01:37:08.400
the whole world is gonna go through
link |
01:37:11.080
what I saw my dear crew members suffer through.
link |
01:37:13.760
It wasn't just an abstract thing for him.
link |
01:37:15.840
I think it was real.
link |
01:37:17.680
And second though, not just the gut, the mind, right?
link |
01:37:20.640
He had, for some reason, a very levelheaded personality
link |
01:37:23.960
and was a very smart guy,
link |
01:37:25.960
which is exactly what we want our best fighter pilots
link |
01:37:29.240
to be also, right?
link |
01:37:30.120
I'll never forget Neil Armstrong when he's landing on the moon
link |
01:37:32.880
and almost running out of gas.
link |
01:37:34.560
And when they say 30 seconds,
link |
01:37:37.440
he doesn't even change the tone of his voice, he just keeps going.
link |
01:37:39.680
Arkhipov, I think was just like that.
link |
01:37:41.840
So when the explosions start going off
link |
01:37:43.480
and his captain is screaming that we should nuke them
link |
01:37:45.520
and all, he's like,
link |
01:37:50.960
I don't think the Americans are trying to sink us.
link |
01:37:54.280
I think they're trying to send us a message.
link |
01:37:58.080
That's pretty bad ass.
link |
01:37:59.200
Yes.
link |
01:38:00.040
Coolness, because he said, if they wanted to sink us,
link |
01:38:03.680
they would have. And he said, listen, listen, it's alternating
link |
01:38:06.920
one loud explosion on the left, one on the right,
link |
01:38:10.160
one on the left, one on the right.
link |
01:38:12.120
He was the only one who noticed this pattern.
link |
01:38:15.840
And he's like, I think this is,
link |
01:38:17.880
they're trying to send us a signal
link |
01:38:20.640
that they want us to surface
link |
01:38:22.840
and they're not gonna sink us.
link |
01:38:25.800
And somehow,
link |
01:38:29.320
this is how he then, ultimately,
link |
01:38:32.160
with his combination of gut
link |
01:38:34.640
and also just cool analytical thinking,
link |
01:38:37.960
was able to deescalate the whole thing.
link |
01:38:40.120
And yeah, so this is some of the best in humanity.
link |
01:38:44.240
I guess coming back to what we talked about earlier,
link |
01:38:45.880
it's the combination of the neural network,
link |
01:38:47.400
the instinctive, with, I'm getting teary eyed here,
link |
01:38:50.960
getting emotional, but he was just,
link |
01:38:53.240
he is one of my superheroes,
link |
01:38:56.120
having both the heart and the mind combined.
link |
01:39:00.440
And especially in that time, there's something about the,
link |
01:39:03.760
I mean, in America,
link |
01:39:05.440
people are used to this kind of idea
link |
01:39:06.880
of being the individual, of thinking on your own.
link |
01:39:12.040
I think in the Soviet Union under communism,
link |
01:39:15.480
it's actually much harder to do that.
link |
01:39:17.600
Oh yeah,
link |
01:39:19.960
he didn't get any accolades either
link |
01:39:21.840
when he came back for this, right?
link |
01:39:24.240
They just wanted to hush the whole thing up.
link |
01:39:25.880
Yeah, there's echoes of that with Chernobyl,
link |
01:39:28.000
there's all kinds of examples like that.
link |
01:39:30.920
That's a really hopeful thing
link |
01:39:34.400
that amidst big centralized powers,
link |
01:39:37.520
whether it's companies or states,
link |
01:39:39.920
there's still the power of the individual
link |
01:39:42.480
to think on their own, to act.
link |
01:39:43.880
But I think we need to think of people like this,
link |
01:39:46.880
not as a panacea we can always count on,
link |
01:39:50.160
but rather as a wake up call.
link |
01:39:55.720
So because of them, because of Arkhipov,
link |
01:39:58.560
we are alive to learn from this lesson,
link |
01:40:01.320
to learn from the fact that we shouldn't keep playing
link |
01:40:03.120
Russian roulette and almost have a nuclear war
link |
01:40:04.840
by mistake now and then,
link |
01:40:06.600
because relying on luck is not a good longterm strategy.
link |
01:40:09.600
If you keep playing Russian roulette over and over again,
link |
01:40:11.360
the probability of surviving just drops exponentially
link |
01:40:13.560
with time.
link |
01:40:14.400
Yeah.
link |
01:40:15.240
And if you have some probability
link |
01:40:16.680
of having an accidental nuke war every year,
link |
01:40:18.640
the probability of not having one also drops exponentially.
link |
01:40:21.200
I think we can do better than that.
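To make the arithmetic behind this point concrete, here is a minimal sketch; the annual risk values are purely illustrative assumptions, not estimates from the conversation.
```python
# A minimal sketch of the "Russian roulette" point above: even a small,
# hypothetical annual probability p of an accidental nuclear war compounds,
# so the chance of getting through a century unscathed shrinks exponentially.

def prob_no_accident(p_per_year: float, years: int) -> float:
    """Chance of avoiding an accident for `years` in a row, assuming an
    independent probability p_per_year of one happening each year."""
    return (1.0 - p_per_year) ** years

for p in (0.001, 0.01, 0.05):  # illustrative assumptions only
    print(f"p = {p:.1%} per year -> "
          f"{prob_no_accident(p, 100):.1%} chance of no accident in 100 years")
# Even p = 1% per year leaves only about a 37% chance of making it a century.
```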
link |
01:40:22.840
So I think the message is very clear,
link |
01:40:26.000
once in a while shit happens,
link |
01:40:27.840
and there's a lot of very concrete things we can do
link |
01:40:31.320
to reduce the risk of things like that happening
link |
01:40:34.920
in the first place.
link |
01:40:36.520
On the AI front, if we just linger on that for a second.
link |
01:40:39.600
Yeah.
link |
01:40:40.960
So you're friends with, you often talk with Elon Musk
link |
01:40:44.120
throughout history, you've done a lot
link |
01:40:46.680
of interesting things together.
link |
01:40:48.680
He has a set of fears about the future
link |
01:40:52.280
of artificial intelligence, AGI.
link |
01:40:55.840
Do you have a sense, we've already talked about
link |
01:40:59.720
the things we should be worried about with AI,
link |
01:41:01.560
do you have a sense of the shape of his fears
link |
01:41:04.040
in particular about AI,
link |
01:41:06.880
of which subset of what we've talked about,
link |
01:41:10.160
whether it's that direction
link |
01:41:14.480
of creating sort of these giant computation systems
link |
01:41:17.520
that are not explainable,
link |
01:41:19.160
they're not intelligible intelligence,
link |
01:41:21.800
or is it the...
link |
01:41:26.720
And then like as a branch of that,
link |
01:41:28.840
is it the manipulation by big corporations of that
link |
01:41:31.840
or individual evil people to use that for destruction
link |
01:41:35.400
or the unintentional consequences?
link |
01:41:37.480
Do you have a sense of where his thinking is on this?
link |
01:41:40.280
From my many conversations with Elon,
link |
01:41:42.440
yeah, I certainly have a model of how he thinks.
link |
01:41:47.400
It's actually very much like the way I think also,
link |
01:41:49.880
I'll elaborate on it a bit.
link |
01:41:51.080
I just wanna push back on when you said evil people,
link |
01:41:54.680
I don't think it's a very helpful concept.
link |
01:41:58.520
Evil people, sometimes people do very, very bad things,
link |
01:42:02.320
but they usually do it because they think it's a good thing
link |
01:42:05.440
because somehow other people had told them
link |
01:42:07.760
that that was a good thing
link |
01:42:08.640
or given them incorrect information or whatever, right?
link |
01:42:15.440
I believe in the fundamental goodness of humanity
link |
01:42:18.400
that if we educate people well
link |
01:42:21.680
and they find out how things really are,
link |
01:42:24.240
people generally wanna do good and be good.
link |
01:42:27.240
Hence the value alignment,
link |
01:42:30.360
as opposed to it's about information, about knowledge,
link |
01:42:33.660
and then once we have that,
link |
01:42:35.320
we'll likely be able to do good
link |
01:42:39.960
in the way that's aligned with everybody else
link |
01:42:41.600
who thinks differently.
link |
01:42:42.440
Yeah, and it's not just the individual people
link |
01:42:44.000
we have to align.
link |
01:42:44.960
So we don't just want people to be educated
link |
01:42:49.600
to know the way things actually are
link |
01:42:51.200
and to treat each other well,
link |
01:42:53.200
but we also need to align other nonhuman entities.
link |
01:42:56.280
We talked about corporations, there have to be institutions
link |
01:42:58.560
so that what they do is actually good
link |
01:42:59.960
for the country they're in
link |
01:43:00.880
and we should align, make sure that what countries do
link |
01:43:03.480
is actually good for the species as a whole, et cetera.
link |
01:43:07.780
Coming back to Elon,
link |
01:43:08.680
yeah, my understanding of how Elon sees this
link |
01:43:13.600
is really quite similar to my own,
link |
01:43:15.240
which is one of the reasons I like him so much
link |
01:43:18.200
and enjoy talking with him so much.
link |
01:43:19.320
I feel he's quite different from most people
link |
01:43:22.960
in that he thinks much more than most people
link |
01:43:27.720
about the really big picture,
link |
01:43:29.840
not just what's gonna happen in the next election cycle,
link |
01:43:32.540
but in millennia, millions and billions of years from now.
link |
01:43:36.840
And when you look in this more cosmic perspective,
link |
01:43:39.280
it's so obvious that we are gazing out into this universe
link |
01:43:43.080
that as far as we can tell is mostly dead
link |
01:43:46.280
with life being an almost imperceptibly tiny perturbation,
link |
01:43:49.800
and he sees this enormous opportunity
link |
01:43:52.640
for our universe to come alive,
link |
01:43:54.280
first to become an interplanetary species.
link |
01:43:56.480
Mars is obviously just the first stop on this cosmic journey.
link |
01:44:02.120
And precisely because he thinks more long term,
link |
01:44:06.760
it's much more clear to him than to most people
link |
01:44:09.560
that what we do with this Russian roulette thing
link |
01:44:11.340
we keep playing with our nukes is a really poor strategy,
link |
01:44:15.320
really reckless strategy.
link |
01:44:16.720
And also that we're just building
link |
01:44:18.620
these ever more powerful AI systems that we don't understand
link |
01:44:21.640
is also just a really reckless strategy.
link |
01:44:23.840
I feel Elon is very much a humanist
link |
01:44:26.640
in the sense that he wants an awesome future for humanity.
link |
01:44:30.880
He wants it to be us that control the machines
link |
01:44:35.960
rather than the machines that control us.
link |
01:44:39.400
And why shouldn't we insist on that?
link |
01:44:42.080
We're building them after all, right?
link |
01:44:44.560
Why should we build things that just make us
link |
01:44:46.520
into some little cog in the machinery
link |
01:44:48.440
that has no further say in the matter, right?
link |
01:44:50.240
That's not my idea of an inspiring future either.
link |
01:44:54.560
Yeah, if you think on the cosmic scale
link |
01:44:57.880
in terms of both time and space,
link |
01:45:00.720
so much is put into perspective.
link |
01:45:02.600
Yeah.
link |
01:45:04.220
Whenever I have a bad day, that's what I think about.
link |
01:45:06.440
It immediately makes me feel better.
link |
01:45:09.200
It makes me sad that for us individual humans,
link |
01:45:13.520
at least for now, the ride ends too quickly.
link |
01:45:16.400
That we don't get to experience the cosmic scale.
link |
01:45:20.080
Yeah, I mean, I think of our universe sometimes
link |
01:45:22.280
as an organism that has only begun to wake up a tiny bit,
link |
01:45:26.080
just like the very first little glimmers of consciousness
link |
01:45:30.120
you have in the morning when you start coming around.
link |
01:45:32.120
Before the coffee.
link |
01:45:33.160
Before the coffee, even before you get out of bed,
link |
01:45:35.880
before you even open your eyes.
link |
01:45:37.280
You start to wake up a little bit.
link |
01:45:40.320
There's something here.
link |
01:45:43.440
That's very much how I think of where we are.
link |
01:45:47.120
All those galaxies out there,
link |
01:45:48.600
I think they're really beautiful,
link |
01:45:51.160
but why are they beautiful?
link |
01:45:52.840
They're beautiful because conscious entities
link |
01:45:55.040
are actually observing them,
link |
01:45:57.000
experiencing them through our telescopes.
link |
01:46:01.720
I define consciousness as subjective experience,
link |
01:46:05.880
whether it be colors or emotions or sounds.
link |
01:46:09.420
So beauty is an experience.
link |
01:46:12.340
Meaning is an experience.
link |
01:46:13.800
Purpose is an experience.
link |
01:46:15.880
If there was no conscious experience,
link |
01:46:18.000
observing these galaxies, they wouldn't be beautiful.
link |
01:46:20.320
If we do something dumb with advanced AI in the future here
link |
01:46:24.960
and Earth originating life goes extinct,
link |
01:46:29.360
then that was it for this.
link |
01:46:30.480
If there is nothing else with telescopes in our universe,
link |
01:46:33.560
then it's kind of game over for beauty
link |
01:46:36.600
and meaning and purpose in our whole universe.
link |
01:46:38.120
And I think that would be just such
link |
01:46:39.880
an opportunity lost, frankly.
link |
01:46:41.800
And I think when Elon points this out,
link |
01:46:46.080
he gets very unfairly maligned in the media
link |
01:46:49.640
for all the dumb media bias reasons we talked about.
link |
01:46:52.440
They want to print precisely the things about Elon
link |
01:46:55.680
out of context that are really click baity.
link |
01:46:58.800
He has gotten so much flack
link |
01:47:00.440
for this summoning the demon statement.
link |
01:47:04.720
I happen to know exactly the context
link |
01:47:07.680
because I was in the front row when he gave that talk.
link |
01:47:09.720
It was at MIT, you'll be pleased to know,
link |
01:47:11.280
it was the AeroAstro anniversary.
link |
01:47:13.880
They had Buzz Aldrin there from the moon landing,
link |
01:47:16.800
a full house, the Kresge auditorium
link |
01:47:19.000
packed with MIT students.
link |
01:47:20.840
And he had this amazing Q&A, it might've gone for an hour.
link |
01:47:23.920
And they talked about rockets and Mars and everything.
link |
01:47:27.160
At the very end, this one student
link |
01:47:29.600
who had actually taken my class asked him, what about AI?
link |
01:47:33.200
Elon makes this one comment
link |
01:47:35.240
and they take this out of context, print it, goes viral.
link |
01:47:39.440
What was it, like, with AI,
link |
01:47:40.600
we're summoning the demon, something like that.
link |
01:47:42.920
And tried to cast him as some sort of doom and gloom dude.
link |
01:47:47.480
You know Elon, he's not the doom and gloom dude.
link |
01:47:51.960
He is such a positive visionary.
link |
01:47:54.000
And the whole reason he warns about this
link |
01:47:55.680
is because he realizes more than most
link |
01:47:57.720
what the opportunity cost is of screwing up.
link |
01:47:59.880
That there is so much awesomeness in the future
link |
01:48:02.360
that we can and our descendants can enjoy
link |
01:48:05.480
if we don't screw up, right?
link |
01:48:07.760
I get so pissed off when people try to cast him
link |
01:48:10.320
as some sort of technophobic Luddite.
link |
01:48:15.320
And at this point, it's kind of ludicrous
link |
01:48:18.480
when I hear people say that people who worry about
link |
01:48:21.640
artificial general intelligence are Luddites
link |
01:48:24.560
because of course, if you look more closely,
link |
01:48:27.000
you have some of the most outspoken people making warnings
link |
01:48:32.920
are people like Professor Stuart Russell from Berkeley
link |
01:48:35.640
who's written the bestselling AI textbook, you know.
link |
01:48:38.360
So when people claim that he's a Luddite who doesn't understand AI,
link |
01:48:43.360
the joke is really on the people who said it.
link |
01:48:46.520
But I think more broadly,
link |
01:48:48.200
this message really has not sunk in at all.
link |
01:48:50.800
As for what people think they worry about,
link |
01:48:52.640
they think that Elon and Stuart Russell and others
link |
01:48:56.680
are worried about the dancing robots picking up an AR 15
link |
01:49:02.280
and going on a rampage, right?
link |
01:49:04.360
They think they're worried about robots turning evil.
link |
01:49:08.440
They're not, I'm not.
link |
01:49:10.360
The risk is not malice, it's competence.
link |
01:49:15.880
The risk is just that we build some systems
link |
01:49:17.560
that are incredibly competent,
link |
01:49:18.760
which means they're always gonna get
link |
01:49:20.040
their goals accomplished,
link |
01:49:22.000
even if they clash with our goals.
link |
01:49:24.080
That's the risk.
link |
01:49:25.920
Why did we humans drive the West African black rhino extinct?
link |
01:49:30.920
Is it because we're malicious, evil rhinoceros haters?
link |
01:49:34.840
No, it's just because our goals didn't align
link |
01:49:38.000
with the goals of those rhinos
link |
01:49:39.240
and tough luck for the rhinos, you know.
link |
01:49:42.360
So the point is just we don't wanna put ourselves
link |
01:49:46.720
in the position of those rhinos
link |
01:49:48.120
creating something more powerful than us
link |
01:49:51.240
if we haven't first figured out how to align the goals.
link |
01:49:53.880
And I am optimistic.
link |
01:49:54.920
I think we could do it if we worked really hard on it,
link |
01:49:56.880
because I spent a lot of time
link |
01:49:59.200
around intelligent entities that were more intelligent
link |
01:50:01.800
than me, my mom and my dad.
link |
01:50:05.960
And I was little and that was fine
link |
01:50:07.560
because their goals were actually aligned
link |
01:50:09.160
with mine quite well.
link |
01:50:11.280
But we've seen today many examples of where the goals
link |
01:50:15.440
of our powerful systems are not so aligned.
link |
01:50:17.200
So those click through optimization algorithms
link |
01:50:22.960
that polarized social media, right?
link |
01:50:24.560
They were actually pretty poorly aligned
link |
01:50:26.160
with what was good for democracy, it turned out.
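Here is a toy sketch of the kind of click-through optimizer being described; the content pool, click probabilities, and epsilon-greedy policy are illustrative assumptions, not any real platform's system.
```python
# A toy click-through optimizer: a simple bandit that learns to show
# whichever item gets clicked most, with no term in its objective for
# anything other than clicks.

import random

# Hypothetical content pool with assumed true click probabilities.
items = {"calm_news": 0.05, "outrage_post": 0.15}

clicks = {k: 0 for k in items}
shows = {k: 0 for k in items}

def pick(eps: float = 0.1) -> str:
    """Epsilon-greedy choice: mostly exploit the best observed click rate."""
    if random.random() < eps or all(v == 0 for v in shows.values()):
        return random.choice(list(items))
    return max(items, key=lambda k: clicks[k] / max(shows[k], 1))

for _ in range(10_000):
    item = pick()
    shows[item] += 1
    clicks[item] += random.random() < items[item]  # simulated user click

print(shows)
# The optimizer ends up showing mostly "outrage_post": it maximizes clicks
# exactly as asked, with no notion of side effects on the people clicking.
```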
link |
01:50:28.760
And again, almost all problems we've had
link |
01:50:31.520
in machine learning so far came
link |
01:50:33.640
not from malice, but from poor alignment.
link |
01:50:35.520
And that's exactly why we should be concerned
link |
01:50:38.240
about it in the future.
link |
01:50:39.320
Do you think it's possible that with systems
link |
01:50:43.240
like Neuralink and brain computer interfaces,
link |
01:50:47.320
you know, again, thinking of the cosmic scale,
link |
01:50:49.280
Elon's talked about this, but others have as well
link |
01:50:52.600
throughout history, of figuring out the exact mechanism
link |
01:50:57.240
of how to achieve that kind of alignment.
link |
01:51:00.000
So one of them is having a symbiosis with AI,
link |
01:51:03.160
which is like coming up with clever ways
link |
01:51:05.560
where we're like stuck together in this weird relationship,
link |
01:51:10.360
whether it's biological or in some kind of other way.
link |
01:51:14.200
Do you think that's a possibility
link |
01:51:17.240
of having that kind of symbiosis?
link |
01:51:19.200
Or do we wanna instead kind of focus
link |
01:51:20.960
on these distinct entities of us humans talking
link |
01:51:28.200
to these intelligible, self doubting AIs,
link |
01:51:31.720
maybe like Stuart Russell thinks about it,
link |
01:51:33.600
like we're self doubting and full of uncertainty
link |
01:51:37.640
and our AI systems are full of uncertainty.
link |
01:51:39.760
We communicate back and forth
link |
01:51:41.520
and in that way achieve symbiosis.
link |
01:51:44.680
I honestly don't know.
link |
01:51:46.200
I would say that because we don't know for sure
link |
01:51:48.600
which, if any, of our ideas will work.
link |
01:51:52.200
But we do know this:
link |
01:51:55.200
I'm pretty convinced that if we don't get any
link |
01:51:56.880
of these things to work and just barge ahead,
link |
01:51:59.840
then our species is, you know,
link |
01:52:01.440
probably gonna go extinct this century.
link |
01:52:03.720
I think it's...
link |
01:52:04.600
This century, you think like,
link |
01:52:06.320
you think the crisis we're facing
link |
01:52:09.720
is a 21st century crisis.
link |
01:52:11.320
Like this century will be remembered.
link |
01:52:13.520
Maybe on a hard drive somewhere,
link |
01:52:18.720
or maybe by future generations,
link |
01:52:22.280
like there'll be future Future of Life Institute awards
link |
01:52:26.240
for people that have done something about AI.
link |
01:52:30.640
It could also end even worse,
link |
01:52:31.880
where we're not superseded
link |
01:52:33.720
and don't leave any AI behind either.
link |
01:52:35.280
We just totally wipe out, you know,
link |
01:52:37.040
like on Easter Island.
link |
01:52:38.480
Our century is long.
link |
01:52:39.880
You know, there are still 79 years left of it, right?
link |
01:52:44.280
Think about how far we've come just in the last 30 years.
link |
01:52:47.680
So we can talk more about what might go wrong,
link |
01:52:53.080
but you asked me this really good question
link |
01:52:54.600
about what's the best strategy.
link |
01:52:55.800
Is it Neuralink or Russell's approach or whatever?
link |
01:52:59.800
I think, you know, when we did the Manhattan project,
link |
01:53:05.480
we didn't know if any of our four ideas
link |
01:53:08.480
for enriching uranium and getting out the uranium 235
link |
01:53:11.760
were gonna work.
link |
01:53:12.880
But we felt this was really important
link |
01:53:14.800
to get it before Hitler did.
link |
01:53:16.680
So, you know what we did?
link |
01:53:17.520
We tried all four of them.
link |
01:53:19.520
Here, I think it's analogous
link |
01:53:21.960
where there's the greatest threat
link |
01:53:24.360
that's ever faced our species.
link |
01:53:25.920
And of course, to US national security by implication.
link |
01:53:29.240
We don't have any method
link |
01:53:31.480
that's guaranteed to work, but we have a lot of ideas.
link |
01:53:34.680
So we should invest pretty heavily
link |
01:53:35.960
in pursuing all of them with an open mind
link |
01:53:38.040
and hope that one of them at least works.
link |
01:53:40.560
The good news is the century is long,
link |
01:53:45.360
and it might take decades
link |
01:53:47.880
until we have artificial general intelligence.
link |
01:53:50.160
So we have some time hopefully,
link |
01:53:52.760
but it takes a long time to solve
link |
01:53:55.240
these very, very difficult problems.
link |
01:53:57.120
It's actually gonna be
link |
01:53:58.080
the most difficult problem
link |
01:53:59.160
we've ever tried to solve as a species.
link |
01:54:01.320
So we have to start now.
link |
01:54:03.400
Rather than begin thinking about it
link |
01:54:05.840
the night before some people who've had too much Red Bull
link |
01:54:08.720
switch it on.
link |
01:54:09.560
And we have to, coming back to your question,
link |
01:54:11.840
we have to pursue all of these different avenues and see.
link |
01:54:14.240
If you were my investment advisor
link |
01:54:16.800
and I was trying to invest in the future,
link |
01:54:19.920
how do you think the human species
link |
01:54:23.040
is most likely to destroy itself in the century?
link |
01:54:29.440
Yeah, so if the crises,
link |
01:54:32.120
many of the crises we're facing are really before us
link |
01:54:34.680
within the next hundred years,
link |
01:54:37.160
how do we make explicit,
link |
01:54:42.320
make known the unknowns and solve those problems
link |
01:54:46.640
to avoid the biggest,
link |
01:54:49.560
starting with the biggest existential crisis?
link |
01:54:51.920
So as your investment advisor,
link |
01:54:53.160
how are you planning to make money on us
link |
01:54:55.680
destroying ourselves?
link |
01:54:56.640
I have to ask.
link |
01:54:57.480
I don't know.
link |
01:54:58.320
It might be the Russian origins.
link |
01:55:01.080
Somehow they're involved.
link |
01:55:02.840
At the micro level of detailed strategies,
link |
01:55:04.760
of course, these are unsolved problems.
link |
01:55:08.640
For AI alignment,
link |
01:55:09.680
we can break it into three sub problems
link |
01:55:12.240
that are all unsolved.
link |
01:55:13.480
I think you want first to make machines
link |
01:55:16.720
understand our goals,
link |
01:55:18.400
then adopt our goals and then retain our goals.
link |
01:55:23.600
So to hit on all three real quickly.
link |
01:55:27.400
The problem when Andreas Lubitz told his autopilot
link |
01:55:31.080
to fly into the Alps was that the computer
link |
01:55:34.320
didn't even understand anything about his goals.
link |
01:55:39.040
It was too dumb.
link |
01:55:40.520
It could have understood actually,
link |
01:55:42.720
but you would have had to put some effort in
link |
01:55:45.280
as a systems designer to program in, don't fly into mountains.
link |
01:55:48.880
So that's the first challenge.
link |
01:55:49.920
How do you program into computers human values,
link |
01:55:54.480
human goals?
link |
01:55:56.240
We can start rather than saying,
link |
01:55:58.280
oh, it's so hard.
link |
01:55:59.120
We should start with the simple stuff, as I said,
link |
01:56:02.400
self driving cars, airplanes,
link |
01:56:04.120
just put in all the goals that we all agree on already,
link |
01:56:07.240
and then have a habit of whenever machines get smarter
link |
01:56:10.560
so they can understand one level higher goals,
link |
01:56:15.480
put those in too.
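To make the idea of putting in the goals we already agree on a bit more concrete, here is a toy sketch of a hard safety constraint of the kind being described; the Command type, terrain input, and clearance threshold are hypothetical illustrations, not any real avionics or autopilot interface.
```python
# A toy sketch of "don't fly into mountains" expressed as an explicit,
# agreed-upon constraint that filters whatever command is requested.

from dataclasses import dataclass

@dataclass
class Command:
    target_altitude_m: float   # altitude the pilot or autopilot requests

def safe_command(cmd: Command, terrain_elevation_m: float,
                 min_clearance_m: float = 300.0) -> Command:
    """Clamp a requested altitude so it never goes below terrain + clearance.
    The threshold and structure are illustrative assumptions only."""
    floor = terrain_elevation_m + min_clearance_m
    if cmd.target_altitude_m < floor:
        # Override the requested goal only for this one agreed-upon rule.
        return Command(target_altitude_m=floor)
    return cmd

# Example: a request to descend to 500 m over 2,000 m terrain gets clamped.
print(safe_command(Command(500.0), terrain_elevation_m=2000.0))
# -> Command(target_altitude_m=2300.0)
```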
link |
01:56:16.960
The second challenge is getting them to adopt the goals.
link |
01:56:20.840
It's easy for situations like that
link |
01:56:22.320
where you just program it in,
link |
01:56:23.280
but when you have self learning systems like children,
link |
01:56:26.040
you know, any parent knows
link |
01:56:29.320
that there is a difference between getting our kids
link |
01:56:33.440
to understand what we want them to do
link |
01:56:34.840
and getting them to actually adopt our goals, right?
link |
01:56:37.600
With humans, with children, fortunately,
link |
01:56:40.040
they go through this phase.
link |
01:56:44.000
First, they're too dumb to understand
link |
01:56:45.480
what we want, what our goals are.
link |
01:56:46.800
And then they have this period of some years
link |
01:56:50.360
when they're both smart enough to understand them
link |
01:56:52.080
and malleable enough that we have a chance
link |
01:56:53.520
to raise them well.
link |
01:56:55.400
And then they become teenagers kind of too late.
link |
01:56:59.160
But we have this window with machines,
link |
01:57:01.360
the challenge is the intelligence might grow so fast
link |
01:57:04.120
that that window is pretty short.
link |
01:57:06.800
So that's a research problem.
link |
01:57:08.480
The third one is how do you make sure they keep the goals
link |
01:57:11.320
if they keep learning more and getting smarter?
link |
01:57:14.520
Many sci fi movies are about how you have something
link |
01:57:17.360
which initially was aligned,
link |
01:57:18.520
but then things kind of go off keel.
link |
01:57:20.320
And, you know, my kids were very, very excited
link |
01:57:24.680
about their Legos when they were little.
link |
01:57:27.360
Now they're just gathering dust in the basement.
link |
01:57:29.800
If we create machines that are really on board
link |
01:57:32.560
with the goal of taking care of humanity,
link |
01:57:34.320
we don't want them to get as bored with us
link |
01:57:36.080
as my kids got with Legos.
link |
01:57:39.480
So this is another research challenge.
link |
01:57:41.920
How can you make some sort of recursively
link |
01:57:43.400
self improving system retain certain basic goals?
link |
01:57:47.440
That said, a lot of adult people still play with Legos.
link |
01:57:50.880
So maybe we succeeded with the Legos.
link |
01:57:52.720
Maybe, I like your optimism.
link |
01:57:55.320
But above all.
link |
01:57:56.160
So not all AI systems have to maintain the goals, right?
link |
01:57:59.120
Just some fraction.
link |
01:58:00.200
Yeah, so there are a lot of talented AI researchers now
link |
01:58:04.920
who have heard of this and want to work on it.
link |
01:58:07.280
Not so much funding for it yet.
link |
01:58:10.960
Of the billions that go into making AI more powerful,
link |
01:58:14.800
it's only a minuscule fraction
link |
01:58:16.240
so far going into this safety research.
link |
01:58:18.280
My attitude is generally we should not try to slow down
link |
01:58:20.880
the technology, but we should greatly accelerate
link |
01:58:22.840
the investment in this sort of safety research.
link |
01:58:25.880
And also, this was very embarrassing last year,
link |
01:58:29.320
but the NSF decided to give out
link |
01:58:31.840
six of these big institutes.
link |
01:58:33.680
We got one of them for AI and science, which you asked me about.
link |
01:58:37.040
Another one was supposed to be for AI safety research.
link |
01:58:40.720
And they gave it to people studying oceans
link |
01:58:43.520
and climate and stuff.
link |
01:58:46.920
So I'm all for studying oceans and climates,
link |
01:58:49.320
but we need to actually have some money
link |
01:58:51.120
that actually goes into AI safety research also
link |
01:58:53.360
and doesn't just get grabbed by whatever.
link |
01:58:56.400
That's a fantastic investment.
link |
01:58:57.960
And then at the higher level, you asked this question,
link |
01:59:00.480
okay, what can we do?
link |
01:59:02.680
What are the biggest risks?
link |
01:59:05.240
I think we cannot just consider this
link |
01:59:08.760
to be only a technical problem.
link |
01:59:11.000
Again, because if you solve only the technical problem,
link |
01:59:13.640
can I play with your robot?
link |
01:59:14.680
Yes, please.
link |
01:59:15.520
If we can get our machines to just blindly obey
link |
01:59:20.560
the orders we give them,
link |
01:59:22.760
so we can always trust that they will do what we want,
link |
01:59:26.160
That might be great for the owner of the robot.
link |
01:59:28.440
That might not be so great for the rest of humanity
link |
01:59:31.400
if that person is your least favorite world leader
link |
01:59:34.080
or whatever you imagine, right?
link |
01:59:36.600
So we have to also
link |
01:59:39.200
apply alignment, not just to machines,
link |
01:59:41.960
but to all the other powerful structures.
link |
01:59:44.560
That's why it's so important
link |
01:59:45.720
to strengthen our democracy again,
link |
01:59:47.040
as I said, to have institutions,
link |
01:59:48.520
make sure that the playing field is not rigged
link |
01:59:51.440
so that corporations are given the right incentives
link |
01:59:54.800
to do the things that both make profit
link |
01:59:57.240
and are good for people,
link |
01:59:58.880
to make sure that countries have incentives
link |
02:00:00.920
to do things that are both good for their people
link |
02:00:03.320
and don't screw up the rest of the world.
link |
02:00:06.840
And this is not just something for AI nerds to geek out on.
link |
02:00:10.280
This is an interesting challenge for political scientists,
link |
02:00:13.080
economists, and so many other thinkers.
link |
02:00:16.800
So one of the magical things
link |
02:00:18.680
that perhaps makes this earth quite unique
link |
02:00:25.240
is that it's home to conscious beings.
link |
02:00:28.840
So you mentioned consciousness.
link |
02:00:31.640
Perhaps as a small aside,
link |
02:00:35.000
because we didn't really get specific
link |
02:00:36.720
about how we might do the alignment.
link |
02:00:39.440
Like you said,
link |
02:00:40.280
it's just a really important research problem,
link |
02:00:41.840
but do you think engineering consciousness
link |
02:00:44.720
into AI systems is a possibility,
link |
02:00:49.880
is something that we might one day do,
link |
02:00:53.040
or is there something fundamental to consciousness
link |
02:00:56.800
that is, is there something about consciousness
link |
02:00:59.880
that is fundamental to humans and humans only?
link |
02:01:03.400
I think it's possible.
link |
02:01:04.640
I think both consciousness and intelligence
link |
02:01:08.320
are information processing.
link |
02:01:10.760
Certain types of information processing.
link |
02:01:13.480
And that fundamentally,
link |
02:01:15.160
it doesn't matter whether the information is processed
link |
02:01:17.320
by carbon atoms in the neurons and brains
link |
02:01:21.280
or by silicon atoms and so on in our technology.
link |
02:01:27.280
Some people disagree.
link |
02:01:28.280
This is what I think as a physicist.
link |
02:01:32.960
That consciousness is the same kind of,
link |
02:01:34.960
you said consciousness is information processing.
link |
02:01:37.720
So meaning, I think you had a quote of something like
link |
02:01:43.000
it's information knowing itself, that kind of thing.
link |
02:01:47.760
I think consciousness is, yeah,
link |
02:01:49.280
is the way information feels when it's being processed.
link |
02:01:51.960
in certain complex ways.
link |
02:01:53.520
We don't know exactly what those complex ways are.
link |
02:01:56.120
It's clear that most of the information processing
link |
02:01:59.240
in our brains does not create an experience.
link |
02:02:01.720
We're not even aware of it, right?
link |
02:02:03.600
Like for example,
link |
02:02:05.520
you're not aware of your heartbeat regulation right now,
link |
02:02:07.880
even though it's clearly being done by your body, right?
link |
02:02:10.600
It's just kind of doing its own thing.
link |
02:02:12.120
When you go jogging,
link |
02:02:13.680
there's a lot of complicated stuff
link |
02:02:15.280
about how you put your foot down and we know it's hard.
link |
02:02:18.720
That's why robots used to fall over so much,
link |
02:02:20.560
but you're mostly unaware about it.
link |
02:02:22.720
Your brain, your CEO consciousness module
link |
02:02:25.760
just sends an email,
link |
02:02:26.600
hey, I'm gonna keep jogging along this path.
link |
02:02:29.160
The rest is on autopilot, right?
link |
02:02:31.560
So most of it is not conscious,
link |
02:02:33.200
but somehow there is some of the information processing that is conscious,
link |
02:02:36.640
and we don't know exactly which.
link |
02:02:41.680
I think this is a science problem
link |
02:02:44.120
that I hope one day we'll have some equation for
link |
02:02:47.680
or something, so we'll be able to build
link |
02:02:49.080
a consciousness detector and say, yeah,
link |
02:02:51.040
here there is some consciousness, here there's not.
link |
02:02:53.920
Oh, don't boil that lobster because it's feeling pain
link |
02:02:56.640
or it's okay because it's not feeling pain.
link |
02:02:59.880
Right now we treat this as sort of just metaphysics,
link |
02:03:03.440
but it would be very useful in emergency rooms
link |
02:03:06.920
to know if a patient has locked in syndrome
link |
02:03:09.760
and is conscious or if they are actually just out.
link |
02:03:14.560
And in the future, if you build a very, very intelligent
link |
02:03:17.720
helper robot to take care of you,
link |
02:03:20.120
I think you'd like to know
link |
02:03:21.480
if you should feel guilty about shutting it down
link |
02:03:24.120
or if it's just like a zombie going through the motions
link |
02:03:27.080
like a fancy tape recorder, right?
link |
02:03:29.720
And once we can make progress
link |
02:03:32.800
on the science of consciousness
link |
02:03:34.040
and figure out what is conscious and what isn't,
link |
02:03:38.320
then assuming we want to create positive experiences
link |
02:03:45.960
and not suffering, we'll probably choose to build
link |
02:03:48.880
some machines that are deliberately unconscious
link |
02:03:51.760
that do incredibly boring, repetitive jobs
link |
02:03:56.760
in an iron mine somewhere or whatever.
link |
02:03:59.680
And maybe we'll choose to create helper robots
link |
02:04:03.120
for the elderly that are conscious
link |
02:04:05.360
so that people don't just feel creeped out
link |
02:04:07.080
that the robot is just faking it
link |
02:04:10.160
when it acts like it's sad or happy.
link |
02:04:12.160
Like you said, elderly,
link |
02:04:13.440
I think everybody gets pretty deeply lonely in this world.
link |
02:04:16.920
And so there's a place I think for everybody
link |
02:04:19.640
to have a connection with conscious beings,
link |
02:04:21.640
whether they're human or otherwise.
link |
02:04:24.400
But I know for sure that I would,
link |
02:04:26.920
if I had a robot, if I was gonna develop any kind
link |
02:04:29.960
of personal emotional connection with it,
link |
02:04:32.760
I would be very creeped out
link |
02:04:33.840
if I knew at an intellectual level
link |
02:04:35.280
that the whole thing was just a fraud.
link |
02:04:36.840
Now today you can buy a little talking doll for a kid
link |
02:04:43.000
which will say things and the little child will often think
link |
02:04:46.120
that this is actually conscious
link |
02:04:47.840
and will even tell real secrets to it, which then go on the internet
link |
02:04:50.440
with lots of creepy repercussions.
link |
02:04:52.520
I would not wanna be just hacked and tricked like this.
link |
02:04:58.040
If I was gonna be developing real emotional connections
link |
02:05:01.560
with the robot, I would wanna know
link |
02:05:04.200
that this is actually real.
link |
02:05:05.440
It's acting conscious, acting happy
link |
02:05:08.080
because it actually feels it.
link |
02:05:09.880
And I think this is not sci fi.
link |
02:05:11.400
I think it's possible to measure, to come up with tools.
link |
02:05:15.560
After we understand the science of consciousness,
link |
02:05:17.560
you're saying we'll be able to come up with tools
link |
02:05:19.760
that can measure consciousness
link |
02:05:21.400
and definitively say like this thing is experiencing
link |
02:05:25.120
the things it says it's experiencing.
link |
02:05:27.360
Kind of by definition.
link |
02:05:28.320
If it is a physical phenomenon, information processing
link |
02:05:31.560
and we know that some information processing is conscious
link |
02:05:34.040
and some isn't, well, then there is something there
link |
02:05:36.000
to be discovered with the methods of science.
link |
02:05:38.040
Giulio Tononi has stuck his neck out the farthest
link |
02:05:41.120
and written down some equations for a theory.
link |
02:05:43.640
Maybe that's right, maybe it's wrong.
link |
02:05:45.680
We certainly don't know.
link |
02:05:46.920
But I applaud that kind of efforts to sort of take this,
link |
02:05:50.760
say this is not just something that philosophers
link |
02:05:53.960
can have beer and muse about,
link |
02:05:56.320
but something we can measure and study.
link |
02:05:58.720
And bringing that back to us,
link |
02:06:00.560
I think what we would probably choose to do, as I said,
link |
02:06:03.000
is, once we can figure this out,
link |
02:06:05.680
to be quite mindful
link |
02:06:09.000
about what sort of consciousness, if any,
link |
02:06:11.280
we put in different machines that we have.
link |
02:06:16.080
And certainly,
link |
02:06:19.000
we should not be making machines that suffer
link |
02:06:21.760
without us even knowing it, right?
link |
02:06:23.640
And if at any point someone decides to upload themselves
link |
02:06:28.320
like Ray Kurzweil wants to do,
link |
02:06:30.120
I don't know if you've had him on your show.
link |
02:06:31.440
We agreed, but then COVID happened,
link |
02:06:33.040
so we're waiting it out a little bit.
link |
02:06:34.680
Suppose he uploads himself into this robo Ray
link |
02:06:38.520
and it talks like him and acts like him and laughs like him.
link |
02:06:42.200
And before he powers off his biological body,
link |
02:06:46.480
he would probably be pretty disturbed
link |
02:06:47.760
if he realized that there's no one home.
link |
02:06:49.600
This robot is not having any subjective experience, right?
link |
02:06:53.760
If humanity gets replaced by machine descendants,
link |
02:06:59.840
which do all these cool things and build spaceships
link |
02:07:02.320
and go to intergalactic rock concerts,
link |
02:07:05.640
and it turns out that they are all unconscious,
link |
02:07:10.000
just going through the motions,
link |
02:07:11.440
wouldn't that be like the ultimate zombie apocalypse, right?
link |
02:07:16.160
Just a play for empty benches?
link |
02:07:18.040
Yeah, I have a sense that there's some kind of,
link |
02:07:21.200
once we understand consciousness better,
link |
02:07:22.800
we'll understand that there's some kind of continuum
link |
02:07:25.640
and there would be a greater appreciation.
link |
02:07:28.000
And we'll probably understand, just like you said,
link |
02:07:30.440
it'd be unfortunate if it's a trick.
link |
02:07:32.400
We'll probably definitely understand
link |
02:07:33.920
that love is indeed a trick that we play on each other,
link |
02:07:37.760
that we humans are, we convince ourselves we're conscious,
link |
02:07:40.960
but we're really, us and trees and dolphins
link |
02:07:45.240
are all the same kind of consciousness.
link |
02:07:46.600
Can I try to cheer you up a little bit
link |
02:07:48.160
with a philosophical thought here about the love part?
link |
02:07:50.280
Yes, let's do it.
link |
02:07:51.360
You know, you might say,
link |
02:07:53.920
okay, yeah, love is just a collaboration enabler.
link |
02:07:58.120
And then maybe you can go and get depressed about that.
link |
02:08:01.800
But I think that would be the wrong conclusion, actually.
link |
02:08:04.640
You know, I know that the only reason I enjoy food
link |
02:08:08.640
is because my genes hacked me
link |
02:08:11.000
and they don't want me to starve to death.
link |
02:08:13.720
Not because they care about me consciously
link |
02:08:17.280
enjoying succulent delights of pistachio ice cream,
link |
02:08:21.080
but they just want me to make copies of them.
link |
02:08:23.360
The whole thing, so in a sense,
link |
02:08:24.520
the whole enjoyment of food is also a scam like this.
link |
02:08:28.960
But does that mean I shouldn't take pleasure
link |
02:08:31.280
in this pistachio ice cream?
link |
02:08:32.560
I love pistachio ice cream.
link |
02:08:34.040
And I can tell you, I know this as an experimental fact.
link |
02:08:38.200
I enjoy pistachio ice cream every bit as much,
link |
02:08:41.600
even though I scientifically know exactly why,
link |
02:08:45.560
what kind of scam this was.
link |
02:08:46.880
Your genes really appreciate
link |
02:08:48.640
that you like the pistachio ice cream.
link |
02:08:50.440
Well, but I, my mind appreciates it too, you know?
link |
02:08:53.080
And I have a conscious experience right now.
link |
02:08:55.800
Ultimately, all of my brain is also just something
link |
02:08:58.640
the genes built to copy themselves.
link |
02:09:00.440
But so what?
link |
02:09:01.600
You know, I'm grateful that,
link |
02:09:03.200
yeah, thanks genes for doing this,
link |
02:09:04.960
but you know, now it's my brain that's in charge here
link |
02:09:07.600
and I'm gonna enjoy my conscious experience,
link |
02:09:09.520
thank you very much.
link |
02:09:10.360
And not just the pistachio ice cream,
link |
02:09:12.480
but also the love I feel for my amazing wife
link |
02:09:15.440
and all the other delights of being conscious.
link |
02:09:19.280
Actually, Richard Feynman,
link |
02:09:22.240
I think said this so well.
link |
02:09:25.080
He is also the guy, you know, who really got me into physics.
link |
02:09:29.680
Some artist friend said that,
link |
02:09:31.240
oh, science is kind of just the party pooper.
link |
02:09:34.520
It kind of ruins the fun, right?
link |
02:09:36.240
When, like, you have a beautiful flower as the artist
link |
02:09:39.680
and then the scientist is gonna deconstruct that
link |
02:09:41.600
into just a blob of quarks and electrons.
link |
02:09:44.160
And Feynman pushed back on that in such a beautiful way,
link |
02:09:47.480
which I think also can be used to push back
link |
02:09:49.920
and make you not feel guilty about falling in love.
link |
02:09:53.440
So here's what Feynman basically said.
link |
02:09:55.000
He said to his friend, you know,
link |
02:09:56.920
yeah, I can also as a scientist see
link |
02:09:59.080
that this is a beautiful flower, thank you very much.
link |
02:10:00.960
Maybe I can't draw as good a painting as you
link |
02:10:03.280
because I'm not as talented an artist,
link |
02:10:04.560
but yeah, I can really see the beauty in it.
link |
02:10:06.800
And it just, it also looks beautiful to me.
link |
02:10:09.360
But in addition to that, Feynman said, as a scientist,
link |
02:10:12.200
I see even more beauty that the artist did not see, right?
link |
02:10:16.960
Suppose this is a flower on a blossoming apple tree.
link |
02:10:21.120
You could say this tree has more beauty in it
link |
02:10:23.840
than just the colors and the fragrance.
link |
02:10:26.400
This tree is made of air, Feynman wrote.
link |
02:10:29.040
This is one of my favorite Feynman quotes ever.
link |
02:10:31.240
And it took the carbon out of the air
link |
02:10:33.760
and bound it in using the flaming heat of the sun,
link |
02:10:36.160
you know, to turn the air into a tree.
link |
02:10:38.600
And when you burn logs in your fireplace,
link |
02:10:42.760
it's really beautiful to think that this is being reversed.
link |
02:10:45.120
Now the tree is going, the wood is going back into air.
link |
02:10:48.600
And in this flaming, beautiful dance of the fire
link |
02:10:52.520
that the artist can see is the flaming light of the sun
link |
02:10:56.000
that was bound in to turn the air into a tree.
link |
02:10:59.120
And then the ash is the little residue
link |
02:11:01.480
that didn't come from the air
link |
02:11:02.560
that the tree sucked out of the ground, you know.
link |
02:11:04.280
Feynman said, these are beautiful things.
link |
02:11:06.160
And science just adds, it doesn't subtract.
link |
02:11:10.040
And I feel exactly that way about love
link |
02:11:12.760
and about pistachio ice cream also.
link |
02:11:16.000
I can understand that there is even more nuance
link |
02:11:18.680
to the whole thing, right?
link |
02:11:20.480
At this very visceral level,
link |
02:11:22.480
you can fall in love just as much as someone
link |
02:11:24.560
who knows nothing about neuroscience.
link |
02:11:27.680
But you can also appreciate this even greater beauty in it.
link |
02:11:31.840
Just like, isn't it remarkable that it came about
link |
02:11:35.600
from this completely lifeless universe,
link |
02:11:38.560
just a hot blob of plasma expanding.
link |
02:11:43.080
And then over the eons, you know, gradually,
link |
02:11:46.160
first the strong nuclear force decided
link |
02:11:48.440
to combine quarks together into nuclei.
link |
02:11:50.920
And then the electric force bound in electrons
link |
02:11:53.040
and made atoms.
link |
02:11:53.880
And then they clustered from gravity
link |
02:11:55.240
and you got planets and stars and this and that.
link |
02:11:57.720
And then natural selection came along
link |
02:12:00.040
and the genes had their little thing.
link |
02:12:01.800
And you started getting what went from seeming
link |
02:12:04.640
like a completely pointless universe
link |
02:12:06.240
that was just trying to increase entropy
link |
02:12:08.040
and approach heat death into something
link |
02:12:10.160
that looked more goal oriented.
link |
02:12:11.720
Isn't that kind of beautiful?
link |
02:12:13.280
And then this goal orientedness through evolution
link |
02:12:15.760
got ever more sophisticated.
link |
02:12:18.720
And then you started getting this thing,
link |
02:12:20.120
which is kind of like DeepMind's MuZero on steroids,
link |
02:12:25.280
the ultimate self play is not what DeepMind's AI
link |
02:12:29.400
does against itself to get better at Go.
link |
02:12:32.080
It's what all these little quark blobs did
link |
02:12:34.440
against each other in the game of survival of the fittest.
link |
02:12:38.920
Now, when you had really dumb bacteria
link |
02:12:42.000
living in a simple environment,
link |
02:12:44.040
there wasn't much incentive to get intelligent,
link |
02:12:46.440
but then life made the environment more complex.
link |
02:12:50.880
And then there was more incentive to get even smarter.
link |
02:12:53.520
And that gave the other organisms more of an incentive
link |
02:12:56.600
to also get smarter.
link |
02:12:57.520
And then here we are now,
link |
02:12:59.880
just like MuZero learned to become a world master at Go
link |
02:13:05.040
and chess
link |
02:13:07.200
by just playing against itself.
link |
02:13:08.560
All the quarks here on our planet,
link |
02:13:10.680
the electrons have created giraffes and elephants
link |
02:13:15.000
and humans and love.
link |
02:13:17.640
I just find that really beautiful.
link |
02:13:20.280
And to me, that just adds to the enjoyment of love.
link |
02:13:24.200
It doesn't subtract anything.
link |
02:13:25.640
Do you feel a little more cheerful now?
link |
02:13:27.320
I feel way better, that was incredible.
link |
02:13:30.640
So this self play of quarks,
link |
02:13:33.920
taking back to the beginning of our conversation
link |
02:13:36.320
a little bit, there's so many exciting possibilities
link |
02:13:39.520
about artificial intelligence understanding
link |
02:13:42.040
the basic laws of physics.
link |
02:13:44.240
Do you think AI will help us unlock?
link |
02:13:47.400
There's been quite a bit of excitement
link |
02:13:49.240
throughout the history of physics
link |
02:13:50.440
of coming up with more and more general simple laws
link |
02:13:55.440
that explain the nature of our reality.
link |
02:13:58.400
And then the ultimate of that would be a theory
link |
02:14:01.120
of everything that combines everything together.
link |
02:14:03.680
Do you think it's possible that one, we humans,
link |
02:14:07.440
but perhaps AI systems will figure out a theory of physics
link |
02:14:13.640
that unifies all the laws of physics?
link |
02:14:17.120
Yeah, I think it's absolutely possible.
link |
02:14:19.920
I think it's very clear
link |
02:14:21.360
that we're gonna see a great boost to science.
link |
02:14:24.960
We're already seeing a boost actually
link |
02:14:26.720
from machine learning helping science.
link |
02:14:28.760
AlphaFold was an example,
link |
02:14:30.280
solving the decades old protein folding problem.
link |
02:14:34.440
So, and gradually, yeah, unless we go extinct
link |
02:14:38.160
by doing something dumb like we discussed,
link |
02:14:39.720
I think it's very likely
link |
02:14:44.040
that our understanding of physics will become so good
link |
02:14:48.040
that our technology will no longer be limited
link |
02:14:53.040
by human intelligence,
link |
02:14:55.200
but instead be limited by the laws of physics.
link |
02:14:58.240
So our tech today is limited
link |
02:15:00.120
by what we've been able to invent, right?
link |
02:15:02.120
I think as AI progresses,
link |
02:15:04.920
it'll just be limited by the speed of light
link |
02:15:07.200
and other physical limits,
link |
02:15:09.240
which would mean it's gonna be just dramatically beyond
link |
02:15:13.960
where we are now.
link |
02:15:15.280
Do you think it's a fundamentally mathematical pursuit
link |
02:15:18.560
of trying to understand like the laws
link |
02:15:22.120
of our universe from a mathematical perspective?
link |
02:15:25.760
So almost like if it's AI,
link |
02:15:28.000
it's exploring the space of like theorems
link |
02:15:31.640
and those kinds of things,
link |
02:15:33.480
or is there some other more computational ideas,
link |
02:15:39.760
more sort of empirical ideas?
link |
02:15:41.280
They're both, I would say.
link |
02:15:43.120
It's really interesting to look out at the landscape
link |
02:15:45.920
of everything we call science today.
link |
02:15:48.000
So here you come now with this big new hammer.
link |
02:15:50.200
It says machine learning on it
link |
02:15:51.480
and you ask, you know, where are there some nails
link |
02:15:53.360
that you can help with here that you can hammer?
link |
02:15:56.600
Ultimately, if machine learning gets to the point
link |
02:16:00.120
that it can do everything better than us,
link |
02:16:02.800
it will be able to help across the whole space of science.
link |
02:16:06.000
But maybe we can anchor it by starting a little bit
link |
02:16:08.120
right now near term and see how we kind of move forward.
link |
02:16:11.640
So like right now, first of all,
link |
02:16:14.840
you have a lot of big data science, right?
link |
02:16:17.360
Where, for example, with telescopes,
link |
02:16:19.360
we are able to collect way more data every hour
link |
02:16:24.120
than a grad student can just pore over
link |
02:16:26.720
like in the old times, right?
link |
02:16:28.760
And machine learning is already being used very effectively,
link |
02:16:31.040
even at MIT, to find planets around other stars,
link |
02:16:34.680
to detect exciting new signatures
link |
02:16:36.560
of new particle physics in the sky,
link |
02:16:38.760
to detect the ripples in the fabric of space time
link |
02:16:42.960
that we call gravitational waves
link |
02:16:44.640
caused by enormous black holes
link |
02:16:46.520
crashing into each other halfway
link |
02:16:48.120
across the observable universe.
link |
02:16:49.920
Machine learning is running and ticking right now,
link |
02:16:52.680
doing all these things,
link |
02:16:53.800
and it's really helping all these experimental fields.
link |
02:16:58.440
There is a separate front of physics,
link |
02:17:01.880
computational physics,
link |
02:17:03.240
which is getting an enormous boost also.
link |
02:17:05.680
So we had to do all our computations by hand, right?
link |
02:17:09.520
People would have these giant books
link |
02:17:11.240
with tables of logarithms,
link |
02:17:12.800
and oh my God, it pains me to even think
link |
02:17:16.720
how long it would have taken to do simple stuff.
link |
02:17:19.880
Then we started to get little calculators and computers
link |
02:17:23.560
that could do some basic math for us.
link |
02:17:26.520
Now, what we're starting to see is
link |
02:17:31.160
kind of a shift from GOFAI computational physics
link |
02:17:35.600
to neural network computational physics.
link |
02:17:40.000
What I mean by that is most computational physics
link |
02:17:44.520
would be done by humans programming in
link |
02:17:48.480
the intelligence of how to do the computation
link |
02:17:50.200
into the computer.
link |
02:17:52.440
Just as when Garry Kasparov got his posterior kicked
link |
02:17:55.400
by IBM's Deep Blue in chess,
link |
02:17:56.920
humans had programmed in exactly how to play chess.
link |
02:17:59.880
Intelligence came from the humans.
link |
02:18:01.160
It wasn't learned, right?
link |
02:18:03.840
MuZero can beat not only Kasparov in chess,
link |
02:18:08.480
but also Stockfish,
link |
02:18:09.880
which is the best sort of GOFAI chess program.
link |
02:18:12.560
By learning, and we're seeing more of that now,
link |
02:18:16.560
that shift beginning to happen in physics.
link |
02:18:18.320
So let me give you an example.
link |
02:18:20.520
So lattice QCD is an area of physics
link |
02:18:24.120
whose goal is basically to take the periodic table
link |
02:18:27.320
and just compute the whole thing from first principles.
link |
02:18:31.120
This is not the search for theory of everything.
link |
02:18:33.920
We already know the theory
link |
02:18:36.360
that's supposed to produce as output the periodic table,
link |
02:18:39.720
which atoms are stable, how heavy they are,
link |
02:18:42.720
all that good stuff, their spectral lines.
link |
02:18:45.840
It's a theory, lattice QCD,
link |
02:18:48.120
you can put it on your T-shirt.
link |
02:18:50.000
Our colleague Frank Wilczek
link |
02:18:51.160
got the Nobel Prize for working on it.
link |
02:18:54.520
But the math is just too hard for us to solve.
link |
02:18:56.600
We have not been able to start with these equations
link |
02:18:58.640
and solve them to the extent that we can predict, oh yeah.
link |
02:19:01.440
And then there is carbon,
link |
02:19:03.360
and this is what the spectrum of the carbon atom looks like.
link |
02:19:07.000
But awesome people are building
link |
02:19:09.960
these supercomputer simulations
link |
02:19:12.040
where you just put in these equations
link |
02:19:14.960
and you make a big cubic lattice of space,
link |
02:19:20.680
or actually it's a very small lattice
link |
02:19:22.080
because you're going down to the subatomic scale,
link |
02:19:25.640
and you try to solve it.
link |
02:19:26.880
But it's just so computationally expensive
link |
02:19:28.960
that we still haven't been able to calculate things
link |
02:19:31.840
as accurately as we measure them in many cases.
link |
02:19:34.960
And now machine learning is really revolutionizing this.
link |
02:19:37.520
So my colleague Phiala Shanahan at MIT, for example,
link |
02:19:40.040
she's been using this really cool
link |
02:19:43.280
machine learning technique called normalizing flows,
link |
02:19:47.560
where she's realized she can actually speed up
link |
02:19:49.800
the calculation dramatically
link |
02:19:52.160
by having the AI learn how to do things faster.
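To make that concrete, here is a minimal sketch of flow-based sampling in that spirit: an invertible affine coupling layer is trained so that cheap Gaussian samples get transported toward a target distribution exp(-S). The two-component "field" and the quartic toy action below are made-up stand-ins for illustration; this is not the actual lattice QCD setup or Shanahan's code.

```python
# Toy sketch of a normalizing flow trained to sample from exp(-S(phi)).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Transforms half of the variables conditioned on the other half."""
    def __init__(self, dim=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),          # outputs [log_scale, shift]
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        log_s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(log_s) + t       # invertible affine map
        log_det = log_s.sum(dim=-1)          # log |det Jacobian|
        return torch.cat([x1, y2], dim=-1), log_det

def toy_action(phi):
    """Stand-in 'action' S(phi); the target density is exp(-S)."""
    return 0.5 * (phi ** 2).sum(-1) + 0.1 * (phi ** 4).sum(-1)

flow = AffineCoupling()
opt = torch.optim.Adam(flow.parameters(), lr=1e-2)
base = torch.distributions.Normal(0.0, 1.0)

for step in range(500):
    z = base.sample((256, 2))
    log_qz = base.log_prob(z).sum(-1)
    phi, log_det = flow(z)
    # Reverse-KL style loss: push the flow's density toward exp(-S(phi)).
    loss = (log_qz - log_det + toy_action(phi)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```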
link |
02:19:55.680
Another area like this
link |
02:19:57.280
where we suck up an enormous amount of supercomputer time
link |
02:20:02.280
to do physics is black hole collisions.
link |
02:20:05.480
So now that we've done the sexy stuff
link |
02:20:06.880
of detecting a bunch of this with LIGO and other experiments,
link |
02:20:09.960
we want to be able to know what we're seeing.
link |
02:20:13.360
And so it's a very simple conceptual problem.
link |
02:20:16.480
It's the two body problem.
link |
02:20:19.000
Newton solved it for classical gravity hundreds of years ago,
link |
02:20:23.000
but the two body problem is still not fully solved.
link |
02:20:26.080
For black holes.
link |
02:20:26.920
Black holes, yes, in Einstein's gravity,
link |
02:20:29.000
because they won't just orbit in space,
link |
02:20:31.120
they won't just orbit each other forever anymore,
link |
02:20:33.560
two things: they give off gravitational waves
link |
02:20:36.080
which make sure they eventually crash into each other.
link |
02:20:37.800
And the game, what you want to do is you want to figure out,
link |
02:20:40.320
okay, what kind of wave comes out
link |
02:20:43.480
as a function of the masses of the two black holes,
link |
02:20:46.320
as a function of how they're spinning,
link |
02:20:48.120
relative to each other, et cetera.
link |
02:20:50.720
And that is so hard.
link |
02:20:52.040
It can take months of supercomputer time
link |
02:20:54.200
and massive numbers of cores to do it.
link |
02:20:56.200
Now, wouldn't it be great if you can use machine learning
link |
02:21:01.240
to greatly speed that up, right?
link |
02:21:04.760
Now you can use the expensive old GOFAI calculation
link |
02:21:09.360
as the truth, and then see if machine learning
link |
02:21:11.920
can figure out a smarter, faster way
link |
02:21:13.600
of getting the right answer.
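As a sketch of that workflow: run the expensive calculation a limited number of times, then fit a cheap surrogate that can be evaluated instantly. The function standing in for the "expensive" solver below is entirely made up; only the pattern of using slow ground truth to train a fast emulator is the point.

```python
# Minimal surrogate-model sketch: slow "ground truth" -> fast regression model.
import numpy as np

def expensive_waveform_feature(m1, m2):
    # Placeholder for months of supercomputer time: a toy formula loosely
    # shaped like "characteristic frequency falls with total mass".
    return 1.0 / (m1 + m2) + 0.05 * np.sin(m1 * m2)

rng = np.random.default_rng(0)
masses = rng.uniform(5.0, 50.0, size=(200, 2))          # training inputs (m1, m2)
targets = np.array([expensive_waveform_feature(a, b) for a, b in masses])

def features(m):
    # Cheap surrogate: ordinary least squares on simple hand-picked features.
    m1, m2 = m[:, 0], m[:, 1]
    return np.column_stack([np.ones_like(m1), m1, m2, m1 * m2,
                            m1 ** 2, m2 ** 2, 1.0 / (m1 + m2)])

coef, *_ = np.linalg.lstsq(features(masses), targets, rcond=None)

# Prediction is now a dot product instead of a supercomputer run.
test = np.array([[30.0, 25.0]])
print(features(test) @ coef, expensive_waveform_feature(30.0, 25.0))
```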
link |
02:21:16.320
Yet another area of computational physics.
link |
02:21:20.000
These are probably the big three
link |
02:21:22.280
that suck up the most computer time.
link |
02:21:24.240
Lattice QCD, black hole collisions,
link |
02:21:27.160
and cosmological simulations,
link |
02:21:29.560
where you take not a subatomic thing
link |
02:21:32.280
and try to figure out the mass of the proton,
link |
02:21:34.400
but you take something enormous
link |
02:21:37.680
and try to look at how all the galaxies get formed in there.
link |
02:21:41.320
There again, there are a lot of very cool ideas right now
link |
02:21:44.720
about how you can use machine learning
link |
02:21:46.080
to do this sort of stuff better.
link |
02:21:49.760
The difference between this and the big data
link |
02:21:51.560
is you kind of make the data yourself, right?
link |
02:21:54.560
So, and then finally,
link |
02:21:58.440
we're looking over the physics landscape
link |
02:22:00.200
and seeing what can we hammer with machine learning, right?
link |
02:22:02.120
So we talked about experimental data, big data,
link |
02:22:05.520
discovering cool stuff that we humans
link |
02:22:07.880
then look more closely at.
link |
02:22:09.520
Then we talked about taking the expensive computations
link |
02:22:13.440
we're doing now and figuring out
link |
02:22:15.040
how to do them much faster and better with AI.
link |
02:22:18.560
And finally, let's go really theoretical.
link |
02:22:21.920
So things like discovering equations,
link |
02:22:25.000
having deep fundamental insights,
link |
02:22:30.240
this is something closest to what I've been doing
link |
02:22:33.040
in my group.
link |
02:22:33.880
We talked earlier about the whole AI Feynman project,
link |
02:22:35.920
where if you just have some data,
link |
02:22:37.920
how do you automatically discover equations
link |
02:22:39.840
that seem to describe this well,
link |
02:22:42.160
that you can then go back as a human
link |
02:22:44.120
and then work with and test and explore.
link |
02:22:46.640
And you asked a really good question also
link |
02:22:50.320
about if this is sort of a search problem in some sense.
link |
02:22:54.000
That's very deep actually what you said, because it is.
link |
02:22:56.880
Suppose I ask you to prove some mathematical theorem.
link |
02:23:01.680
What is a proof in math?
link |
02:23:02.960
It's just a long string of steps, logical steps
link |
02:23:05.360
that you can write out with symbols.
link |
02:23:07.920
And once you find it, it's very easy to write a program
link |
02:23:10.240
to check whether it's a valid proof or not.
link |
02:23:14.640
So why is it so hard to prove it?
link |
02:23:16.080
Well, because there are ridiculously many possible
link |
02:23:19.040
candidate proofs you could write down, right?
link |
02:23:21.600
If the proof contains 10,000 symbols,
link |
02:23:25.440
even if there were only 10 options
link |
02:23:27.760
for what each symbol could be,
link |
02:23:29.120
that's 10 to the power of 10,000 possible proofs,
link |
02:23:33.440
which is way more than there are atoms in our universe.
link |
02:23:36.080
So you could say it's trivial to prove these things.
link |
02:23:38.400
You just write a computer program to generate all strings,
link |
02:23:41.200
and then check, is this a valid proof?
link |
02:23:43.680
No.
link |
02:23:44.520
Is this a valid proof?
link |
02:23:45.360
Is this a valid proof?
link |
02:23:46.400
No.
link |
02:23:47.720
And then you just keep doing this forever.
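A minimal sketch of that brute-force enumeration, with a tiny alphabet and a made-up validity check standing in for a real proof checker:

```python
# Enumerate candidate "proofs" (strings) and run a cheap checker on each one.
from itertools import product

ALPHABET = "AB->()p"                 # 7 symbols instead of thousands

def is_valid_proof(candidate: str) -> bool:
    # Placeholder check; verifying is easy, finding is the hard part.
    # A real checker would verify every logical step.
    return candidate == "p->p"

def brute_force_search(max_length: int):
    tried = 0
    for length in range(1, max_length + 1):
        for symbols in product(ALPHABET, repeat=length):
            tried += 1
            candidate = "".join(symbols)
            if is_valid_proof(candidate):
                return candidate, tried
    return None, tried

proof, attempts = brute_force_search(4)
print(proof, attempts)
# With 10 symbols and length 10,000 this would be 10**10000 candidates,
# which is why unguided enumeration is hopeless and search heuristics matter.
```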
link |
02:23:51.960
But there are a lot of,
link |
02:23:53.160
but it is fundamentally a search problem.
link |
02:23:55.120
You just want to search the space of all those,
link |
02:23:57.000
all strings of symbols to find one that is the proof, right?
link |
02:24:03.880
And there's a whole area of machine learning called search.
link |
02:24:08.800
How do you search through some giant space
link |
02:24:10.600
to find the needle in the haystack?
link |
02:24:12.400
And it's easier in cases
link |
02:24:14.800
where there's a clear measure of good,
link |
02:24:17.160
like you're not just right or wrong,
link |
02:24:18.800
but this is better and this is worse,
link |
02:24:20.640
so you can maybe get some hints
link |
02:24:21.800
as to which direction to go in.
link |
02:24:23.800
That's why, as we talked about, neural networks work so well.
link |
02:24:28.400
I mean, that's such a human thing
link |
02:24:30.680
of that moment of genius
link |
02:24:32.280
of figuring out the intuition of good, essentially.
link |
02:24:37.360
I mean, we thought that that was...
link |
02:24:38.680
Or is it?
link |
02:24:40.120
Maybe it's not, right?
link |
02:24:41.320
We thought that about chess, right?
link |
02:24:42.720
That the ability to see like 10, 15,
link |
02:24:46.880
sometimes 20 steps ahead was not a calculation
link |
02:24:50.680
that humans were performing.
link |
02:24:51.760
It was some kind of weird intuition
link |
02:24:53.720
about different patterns, about board positions,
link |
02:24:57.280
about the relative positions,
link |
02:24:59.440
somehow stitching stuff together.
link |
02:25:01.640
And a lot of it is just like intuition,
link |
02:25:03.920
but then you have like AlphaZero,
link |
02:25:05.960
I guess it was the first one that did the self play.
link |
02:25:10.400
It just came up with this.
link |
02:25:12.160
It was able to learn, through the self play mechanism,
link |
02:25:14.560
this kind of intuition.
link |
02:25:16.040
Exactly.
link |
02:25:16.880
But just like you said, it's so fascinating to think,
link |
02:25:19.960
well, they're in the space of totally new ideas.
link |
02:25:24.640
Can that be done in developing theorems?
link |
02:25:28.960
We know it can be done by neural networks
link |
02:25:30.800
because we did it with the neural networks
link |
02:25:32.280
in the craniums of the great mathematicians of humanity.
link |
02:25:36.280
And I'm so glad you brought up AlphaZero
link |
02:25:38.640
because that's the counter example.
link |
02:25:39.960
It turned out we were flattering ourselves
link |
02:25:41.840
when we said intuition is something different.
link |
02:25:45.360
Only humans can do it.
link |
02:25:46.520
It's not information processing.
link |
02:25:50.880
It used to be that way.
link |
02:25:53.720
Again, it's really instructive, I think,
link |
02:25:56.200
to compare the chess computer Deep Blue
link |
02:25:58.480
that beat Kasparov with AlphaZero
link |
02:26:02.040
that beat Lee Sedol at Go.
link |
02:26:04.280
Because for Deep Blue, there was no intuition.
link |
02:26:08.640
There was some, humans had programmed in some intuition.
link |
02:26:12.000
After humans had played a lot of games,
link |
02:26:13.600
they told the computer, count the pawn as one point,
link |
02:26:16.520
the bishop is three points, rook is five points,
link |
02:26:19.920
and so on, you add it all up,
link |
02:26:21.120
and then you add some extra points for passed pawns
link |
02:26:23.400
and subtract if the opponent has it and blah, blah, blah.
link |
02:26:28.280
And then what Deep Blue did was just search.
link |
02:26:32.520
Just very brute force and tried many, many moves ahead,
link |
02:26:34.960
all these combinations in a pruned tree search.
link |
02:26:37.400
And it could think much faster than Kasparov, and it won.
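For illustration, a toy version of that hand-coded evaluation plus brute-force search; the point values mirror the ones just mentioned, while the move generator is left as a hypothetical callback, since this is a sketch of the idea rather than Deep Blue's actual program:

```python
# Hand-coded "intuition": a fixed point count written by humans, not learned.
# Board here is just a list of piece letters (uppercase = ours, lowercase = theirs).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board):
    """Material balance: our points minus the opponent's points."""
    score = 0
    for piece in board:
        if piece.upper() in PIECE_VALUES:
            value = PIECE_VALUES[piece.upper()]
            score += value if piece.isupper() else -value
    return score

def minimax(position, depth, maximizing, moves, apply_move):
    """Brute-force tree search over the hand-coded evaluation.
    `moves` and `apply_move` are caller-supplied game rules (hypothetical here)."""
    children = moves(position)
    if depth == 0 or not children:
        return evaluate(position)
    best = max if maximizing else min
    return best(minimax(apply_move(position, m), depth - 1, not maximizing,
                        moves, apply_move) for m in children)

# Example: we are up a rook but down a pawn relative to the opponent.
print(evaluate(list("RRNBQK" + "rnbqkp")))   # 25 - 21 = 4
```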
link |
02:26:42.680
And that, I think, inflated our egos
link |
02:26:45.440
in a way it shouldn't have,
link |
02:26:46.560
because people started to say, yeah, yeah,
link |
02:26:48.760
it's just brute force search, but it has no intuition.
link |
02:26:52.280
AlphaZero really popped our bubble there,
link |
02:26:57.760
because what alpha zero does,
link |
02:27:00.880
yes, it does also do some of that tree search,
link |
02:27:03.880
but it also has this intuition module,
link |
02:27:06.560
which in geek speak is called a value function,
link |
02:27:09.560
where it just looks at the board
link |
02:27:11.120
and comes up with a number for how good is that position.
link |
02:27:14.960
The difference was no human told it
link |
02:27:17.960
how good the position is, it just learned it.
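By contrast, a learned value function can be sketched as a small network regressed onto game outcomes. The random "self-play" data below is fabricated and the architecture is nothing like the real AlphaZero; it only shows where the hand-written point count gets replaced by something learned:

```python
# Toy learned value function: board encoding -> score in [-1, 1].
import torch
import torch.nn as nn

BOARD_FEATURES = 64           # assume some fixed-size numeric board encoding

value_net = nn.Sequential(
    nn.Linear(BOARD_FEATURES, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Tanh(),            # output in [-1, 1]: lose .. win
)
opt = torch.optim.Adam(value_net.parameters(), lr=1e-3)

# Pretend self-play data: encoded positions and who eventually won from them.
positions = torch.randn(1024, BOARD_FEATURES)
outcomes = torch.sign(torch.randn(1024, 1))          # -1 = loss, +1 = win

for epoch in range(50):
    pred = value_net(positions)
    loss = nn.functional.mse_loss(pred, outcomes)    # regress toward outcomes
    opt.zero_grad(); loss.backward(); opt.step()

# During search, this learned "intuition" replaces the hand-coded evaluate():
#   how_good = value_net(encode(position))
# where encode() is a stand-in for whatever board featurizer you use.
```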
link |
02:27:22.480
And MuZero is the coolest or scariest of all,
link |
02:27:26.840
depending on your mood,
link |
02:27:28.320
because the same basic AI system
link |
02:27:33.040
will learn what the good board position is,
link |
02:27:35.320
regardless of whether it's chess or Go or Shogi
link |
02:27:38.640
or Pac-Man or Ms. Pac-Man or Breakout or Space Invaders
link |
02:27:42.920
or any number of other games.
link |
02:27:45.000
You don't tell it anything,
link |
02:27:45.840
and it gets this intuition after a while for what's good.
link |
02:27:49.760
So this is very hopeful for science, I think,
link |
02:27:52.760
because if it can get intuition
link |
02:27:55.240
for what's a good position there,
link |
02:27:57.280
maybe it can also get intuition
link |
02:27:58.880
for what are some good directions to go
link |
02:28:00.640
if you're trying to prove something.
link |
02:28:03.040
I often, one of the most fun things in my science career
link |
02:28:06.400
is when I've been able to prove some theorem about something
link |
02:28:08.600
and it's very heavily intuition guided, of course.
link |
02:28:12.160
I don't sit and try all random strings.
link |
02:28:14.080
I have a hunch that, you know,
link |
02:28:16.280
this reminds me a little bit of this other proof
link |
02:28:18.840
I've seen for this thing.
link |
02:28:19.920
So maybe I first, what if I try this?
link |
02:28:22.520
Nah, that didn't work out.
link |
02:28:24.720
But this reminds me actually,
link |
02:28:25.840
the way this failed reminds me of that.
link |
02:28:28.560
So combining the intuition with all these brute force
link |
02:28:33.880
capabilities, I think it's gonna be able to help physics too.
link |
02:28:38.520
Do you think there'll be a day when an AI system
link |
02:28:42.880
being the primary contributor, let's say 90% plus,
link |
02:28:46.400
wins the Nobel Prize in physics?
link |
02:28:50.400
Obviously they'll give it to the humans
link |
02:28:51.960
because we humans don't like to give prizes to machines.
link |
02:28:54.800
They'll give it to the humans behind the system.
link |
02:28:57.560
You could argue that AI has already been involved
link |
02:28:59.920
in some Nobel Prizes, probably,
link |
02:29:01.560
maybe something with black holes and stuff like that.
link |
02:29:03.560
Yeah, we don't like giving prizes to other life forms.
link |
02:29:07.160
If someone wins a horse racing contest,
link |
02:29:09.720
they don't give the prize to the horse either.
link |
02:29:11.360
That's true.
link |
02:29:13.400
But do you think that we might be able to see
link |
02:29:16.000
something like that in our lifetimes when AI,
link |
02:29:19.200
so like the first system I would say
link |
02:29:21.840
that makes us think about a Nobel Prize seriously
link |
02:29:25.360
is like AlphaFold, which is making us think about
link |
02:29:28.760
a Nobel Prize in medicine or physiology,
link |
02:29:31.960
perhaps discoveries that are a direct result
link |
02:29:34.080
of something that's discovered by AlphaFold.
link |
02:29:36.640
Do you think in physics we might be able
link |
02:29:39.560
to see that in our lifetimes?
link |
02:29:41.520
I think what's probably gonna happen
link |
02:29:43.520
is more of a blurring of the distinctions.
link |
02:29:46.880
So today if somebody uses a computer
link |
02:29:53.000
to do a computation that gives them the Nobel Prize,
link |
02:29:54.920
nobody's gonna dream of giving the prize to the computer.
link |
02:29:57.160
They're gonna be like, that was just a tool.
link |
02:29:59.000
I think for these things also,
link |
02:30:02.120
people are just gonna for a long time
link |
02:30:04.000
view the computer as a tool.
link |
02:30:06.120
But what's gonna change is the ubiquity of machine learning.
link |
02:30:11.120
I think at some point in my lifetime,
link |
02:30:17.120
finding a human physicist who knows nothing
link |
02:30:21.400
about machine learning is gonna be almost as hard
link |
02:30:23.800
as it is today finding a human physicist
link |
02:30:25.960
who says, oh, I don't know anything about computers
link |
02:30:29.160
or I don't use math.
link |
02:30:30.880
That would just be a ridiculous concept.
link |
02:30:34.000
You see, but the thing is there is a magic moment though,
link |
02:30:38.240
like with Alpha Zero, when the system surprises us
link |
02:30:42.320
in a way where the best people in the world
link |
02:30:46.680
truly learn something from the system
link |
02:30:48.960
in a way where you feel like it's another entity.
link |
02:30:52.480
Like the way people, the way Magnus Carlsen,
link |
02:30:54.920
the way certain people are looking at the work of AlphaZero,
link |
02:30:58.080
it's like, it truly is no longer a tool
link |
02:31:02.960
in the sense that it doesn't feel like a tool.
link |
02:31:06.680
It feels like some other entity.
link |
02:31:08.960
So there's a magic difference like where you're like,
link |
02:31:13.320
if an AI system is able to come up with an insight
link |
02:31:17.320
that surprises everybody in some like major way
link |
02:31:23.760
that's a phase shift in our understanding
link |
02:31:25.960
of some particular science
link |
02:31:27.760
or some particular aspect of physics,
link |
02:31:30.040
I feel like that is no longer a tool.
link |
02:31:32.680
And then you can start to say
link |
02:31:35.800
that like it perhaps deserves the prize.
link |
02:31:38.720
So for sure, the more important
link |
02:31:40.640
and the more fundamental transformation
link |
02:31:43.120
of 21st century science is exactly what you're saying,
link |
02:31:46.640
which is that probably everybody will be doing machine learning
link |
02:31:50.680
to some degree.
link |
02:31:51.560
Like if you want to be successful
link |
02:31:54.760
at unlocking the mysteries of science,
link |
02:31:57.560
you should be doing machine learning.
link |
02:31:58.800
But it's just exciting to think about like,
link |
02:32:01.440
whether there'll be one that comes along
link |
02:32:03.080
that's super surprising and that'll make us question
link |
02:32:08.000
like who the real inventors are in this world.
link |
02:32:10.320
Yeah.
link |
02:32:11.640
Yeah, I think the question
link |
02:32:14.240
isn't if it's gonna happen, but when?
link |
02:32:15.880
And, but it's important.
link |
02:32:17.960
Honestly, in my mind, the time when that happens
link |
02:32:20.840
is also more or less the same time
link |
02:32:23.360
when we get artificial general intelligence.
link |
02:32:25.560
And then we have a lot bigger things to worry about
link |
02:32:28.160
than whether we should get the Nobel prize or not, right?
link |
02:32:31.000
Yeah.
link |
02:32:31.840
Because when you have machines
link |
02:32:35.000
that can outperform our best scientists at science,
link |
02:32:39.360
they can probably outperform us
link |
02:32:41.040
at a lot of other stuff as well,
link |
02:32:44.440
which can at a minimum make them
link |
02:32:46.440
incredibly powerful agents in the world.
link |
02:32:49.440
And I think it's a mistake to think
link |
02:32:53.160
we only have to start worrying about loss of control
link |
02:32:57.040
when machines get to AGI across the board,
link |
02:32:59.720
where they can do everything, all our jobs.
link |
02:33:02.160
Long before that, they'll be hugely influential.
link |
02:33:07.880
We talked at length about how the hacking of our minds
link |
02:33:12.560
with algorithms trying to get us glued to our screens,
link |
02:33:18.440
right, has already had a big impact on society.
link |
02:33:22.320
That was an incredibly dumb algorithm
link |
02:33:24.080
in the grand scheme of things, right?
link |
02:33:25.840
Just supervised machine learning,
link |
02:33:27.840
yet it had a huge impact.
link |
02:33:29.520
So I just don't want us to be lulled
link |
02:33:32.080
into false sense of security
link |
02:33:33.280
and think there won't be any societal impact
link |
02:33:35.560
until things reach human level,
link |
02:33:37.040
because it's happening already.
link |
02:33:38.280
And I was just thinking the other week,
link |
02:33:40.560
when I see some scaremonger going,
link |
02:33:44.880
oh, the robots are coming,
link |
02:33:47.080
the implication is always that they're coming to kill us.
link |
02:33:50.280
Yeah.
link |
02:33:51.120
And maybe you should have worried about that
link |
02:33:52.360
if you were in Nagorno Karabakh
link |
02:33:54.720
during the recent war there.
link |
02:33:55.720
But more seriously, the robots are coming right now,
link |
02:34:01.440
but they're mainly not coming to kill us.
link |
02:34:03.160
They're coming to hack us.
link |
02:34:06.000
They're coming to hack our minds,
link |
02:34:08.200
into buying things that maybe we didn't need,
link |
02:34:11.280
to vote for people who may not have
link |
02:34:13.200
our best interest in mind.
link |
02:34:15.360
And it's kind of humbling, I think,
link |
02:34:17.600
actually, as a human being to admit
link |
02:34:20.120
that it turns out that our minds are actually
link |
02:34:22.360
much more hackable than we thought.
link |
02:34:24.760
And the ultimate insult is that we are actually
link |
02:34:27.040
getting hacked by the machine learning algorithms
link |
02:34:30.400
that are, in some objective sense,
link |
02:34:31.560
much dumber than us, you know?
link |
02:34:33.960
But maybe we shouldn't be so surprised
link |
02:34:35.720
because, you know, how do you feel about cute puppies?
link |
02:34:40.520
Love them.
link |
02:34:41.600
So, you know, you would probably argue
link |
02:34:43.640
that in some across the board measure,
link |
02:34:46.120
you're more intelligent than they are,
link |
02:34:47.680
but boy, are cute puppies good at hacking us, right?
link |
02:34:50.760
Yeah.
link |
02:34:51.600
They move into our house, persuade us to feed them
link |
02:34:53.720
and do all these things.
link |
02:34:54.560
And what do they ever do for us?
link |
02:34:56.600
Yeah.
link |
02:34:57.440
Other than being cute and making us feel good, right?
link |
02:35:00.520
So if puppies can hack us,
link |
02:35:03.080
maybe we shouldn't be so surprised
link |
02:35:04.920
if pretty dumb machine learning algorithms can hack us too.
link |
02:35:09.040
Not to speak of cats, which is another level.
link |
02:35:11.680
And I think we should,
link |
02:35:13.400
to counter your previous point about there,
link |
02:35:15.640
let us not think about evil creatures in this world.
link |
02:35:18.040
We can all agree that cats are as close
link |
02:35:20.480
to objective evil as we can get.
link |
02:35:22.960
But that's just me saying that.
link |
02:35:24.400
Okay, so you have.
link |
02:35:25.480
Have you seen the cartoon?
link |
02:35:27.320
I think it's maybe the onion
link |
02:35:31.760
with this incredibly cute kitten.
link |
02:35:33.720
And it just says underneath, something
link |
02:35:36.840
that thinks about murder all day.
link |
02:35:38.920
Exactly.
link |
02:35:41.560
That's accurate.
link |
02:35:43.080
You've mentioned offline that there might be a link
link |
02:35:45.200
between post biological AGI and SETI.
link |
02:35:47.960
So last time we talked,
link |
02:35:52.520
you've talked about this intuition
link |
02:35:54.920
that we humans might be quite unique
link |
02:35:59.280
in our galactic neighborhood.
link |
02:36:02.360
Perhaps our galaxy,
link |
02:36:03.680
perhaps the entirety of the observable universe
link |
02:36:06.360
we might be the only intelligent civilization here,
link |
02:36:10.840
which is, and you argue pretty well for that thought.
link |
02:36:17.720
So I have a few little questions around this.
link |
02:36:21.240
One, the scientific question,
link |
02:36:24.680
in which way would you be,
link |
02:36:29.240
if you were wrong in that intuition,
link |
02:36:33.960
in which way do you think you would be surprised?
link |
02:36:36.680
Like why were you wrong?
link |
02:36:38.520
If we find out that you ended up being wrong,
link |
02:36:41.600
Like in which dimension?
link |
02:36:43.880
So like, is it because we can't see them?
link |
02:36:48.400
Is it because the nature of their intelligence
link |
02:36:51.320
or the nature of their life is totally different
link |
02:36:54.760
than we can possibly imagine?
link |
02:36:56.760
Is it because the,
link |
02:37:00.680
I mean, something about the great filters
link |
02:37:02.640
and surviving them,
link |
02:37:04.440
or maybe because we're being protected from signals,
link |
02:37:08.760
all those explanations for why we haven't heard
link |
02:37:15.120
a big, loud, like red light that says we're here.
link |
02:37:21.680
So there are actually two separate things there
link |
02:37:23.520
that I could be wrong about,
link |
02:37:24.720
two separate claims that I made, right?
link |
02:37:28.920
One of them is, I made the claim,
link |
02:37:32.240
I think most civilizations,
link |
02:37:36.960
when you're going from simple bacteria like things
link |
02:37:41.800
to space colonizing civilizations,
link |
02:37:47.840
they spend only a very, very tiny fraction
link |
02:37:50.840
of their life being where we are.
link |
02:37:55.160
That I could be wrong about.
link |
02:37:57.280
The other one I could be wrong about
link |
02:37:58.760
is the quite different statement that I think that actually
link |
02:38:01.520
I'm guessing that we are the only civilization
link |
02:38:04.680
in our observable universe
link |
02:38:06.120
from which light has reached us so far
link |
02:38:08.240
that's actually gotten far enough to invent telescopes.
link |
02:38:12.320
So let's talk about maybe both of them in turn
link |
02:38:13.960
because they really are different.
link |
02:38:15.000
The first one, if you look at the N equals one,
link |
02:38:19.880
the data point we have on this planet, right?
link |
02:38:22.080
So we spent four and a half billion years
link |
02:38:25.880
futzing around on this planet with life, right?
link |
02:38:28.240
And most of it was pretty lame stuff
link |
02:38:32.080
from an intelligence perspective,
link |
02:38:33.640
you know, it was bacteria, and
link |
02:38:39.200
then the things gradually accelerated, right?
link |
02:38:41.280
Then the dinosaurs spent over a hundred million years
link |
02:38:43.600
stomping around here without even inventing smartphones.
link |
02:38:46.960
And then very recently, you know,
link |
02:38:50.240
it's only, we've only spent 400 years
link |
02:38:52.120
going from Newton to us, right?
link |
02:38:55.320
In terms of technology.
link |
02:38:56.480
And look what we've done even, you know,
link |
02:39:00.160
when I was a little kid, there was no internet even.
link |
02:39:02.600
So it's, I think it's pretty likely for,
link |
02:39:05.880
in the case of this planet, right?
link |
02:39:08.160
That we're either gonna really get our act together
link |
02:39:12.200
and start spreading life into space this century,
link |
02:39:15.080
and doing all sorts of great things,
link |
02:39:16.440
or we're gonna wipe ourselves out.
link |
02:39:18.880
It's a little hard.
link |
02:39:20.080
I could be wrong in the sense that maybe
link |
02:39:23.480
what happened on this earth is very atypical.
link |
02:39:25.800
And for some reason, what's more common on other planets
link |
02:39:28.520
is that they spend an enormously long time
link |
02:39:31.440
futzing around with the ham radio and things,
link |
02:39:33.720
but they just never really take it to the next level
link |
02:39:36.200
for reasons I haven't understood.
link |
02:39:38.400
I'm humble and open to that.
link |
02:39:40.200
But I would bet at least 10 to one
link |
02:39:42.880
that our situation is more typical
link |
02:39:45.160
because the whole thing with Moore's law
link |
02:39:46.760
and accelerating technology,
link |
02:39:48.160
it's pretty obvious why it's happening.
link |
02:39:51.200
Everything that grows exponentially,
link |
02:39:52.880
we call it an explosion,
link |
02:39:54.080
whether it's a population explosion or a nuclear explosion,
link |
02:39:56.640
it's always caused by the same thing.
link |
02:39:58.000
It's that the next step triggers a step after that.
link |
02:40:01.480
So, you know,
link |
02:40:04.320
today's technology enables tomorrow's technology
link |
02:40:06.760
and that enables the next level.
link |
02:40:09.080
And I think, because the technology is always getting better,
link |
02:40:13.800
of course, the steps can come faster and faster.
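In equation form, that intuition is just the statement that when each advance speeds up the next one in proportion to what already exists, you get exponential growth:

```latex
% Growth rate proportional to what already exists gives an "explosion":
\frac{dx}{dt} = k\,x \quad\Longrightarrow\quad x(t) = x(0)\,e^{kt}
```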
link |
02:40:17.200
On the other question that I might be wrong about,
link |
02:40:19.160
that's the much more controversial one, I think.
link |
02:40:22.320
But before we close out on this thing about,
link |
02:40:24.920
the first one, if it's true
link |
02:40:27.080
that most civilizations spend only a very short amount
link |
02:40:30.520
of their total time in the stage, say,
link |
02:40:32.880
between inventing
link |
02:40:37.320
telescopes or mastering electricity
link |
02:40:40.760
and leaving there and doing space travel,
link |
02:40:43.880
if that's actually generally true,
link |
02:40:46.200
then that should apply also elsewhere out there.
link |
02:40:49.000
So
link |
02:40:51.040
we should be very, very surprised
link |
02:40:52.920
if we find some random civilization
link |
02:40:55.480
and we happen to catch them exactly
link |
02:40:56.920
in that very, very short stage.
link |
02:40:58.800
It's much more likely
link |
02:40:59.640
that we find a planet full of bacteria.
link |
02:41:02.960
Or that we find some civilization
link |
02:41:05.560
that's already post biological
link |
02:41:07.480
and has done some really cool galactic construction projects
link |
02:41:11.880
in their galaxy.
link |
02:41:13.360
Would we be able to recognize them, do you think?
link |
02:41:15.200
Is it possible that we just can't,
link |
02:41:17.480
I mean, this post biological world,
link |
02:41:21.120
could it be just existing in some other dimension?
link |
02:41:23.520
It could be just all a virtual reality game
link |
02:41:26.280
for them or something, I don't know,
link |
02:41:28.480
that it changes completely
link |
02:41:30.560
where we won't be able to detect.
link |
02:41:32.880
We have to be honestly very humble about this.
link |
02:41:35.280
I think I said earlier the number one principle
link |
02:41:39.000
of being a scientist is you have to be humble
link |
02:41:40.840
and willing to acknowledge that everything we think,
link |
02:41:42.960
or guess might be totally wrong.
link |
02:41:45.040
Of course, you could imagine some civilization
link |
02:41:46.960
where they all decide to become Buddhists
link |
02:41:48.640
and very inward looking
link |
02:41:49.880
and just move into their little virtual reality
link |
02:41:52.360
and not disturb the flora and fauna around them
link |
02:41:55.120
and we might not notice them.
link |
02:41:58.120
But this is a numbers game, right?
link |
02:41:59.960
If you have millions of civilizations out there
link |
02:42:02.280
or billions of them,
link |
02:42:03.680
all it takes is one with a more ambitious mentality
link |
02:42:08.080
that decides, hey, we are gonna go out
link |
02:42:10.280
and settle a bunch of other solar systems
link |
02:42:15.520
and maybe galaxies.
link |
02:42:17.560
And then it doesn't matter
link |
02:42:18.440
if they're a bunch of quiet Buddhists,
link |
02:42:19.640
we're still gonna notice that expansionist one, right?
link |
02:42:23.040
And it seems like quite the stretch to assume that,
link |
02:42:26.560
now we know even in our own galaxy
link |
02:42:28.120
that there are probably a billion or more planets
link |
02:42:33.120
that are pretty Earth like.
link |
02:42:35.280
And many of them were formed over a billion years
link |
02:42:37.680
before ours, so had a big head start.
link |
02:42:40.640
So if you actually assume also
link |
02:42:43.600
that life happens kind of automatically
link |
02:42:46.120
on an Earth like planet,
link |
02:42:48.440
I think it's quite the stretch to then go and say,
link |
02:42:52.080
okay, so there are another billion civilizations out there
link |
02:42:55.280
that also have our level of tech
link |
02:42:56.840
and they all decided to become Buddhists
link |
02:42:59.280
and not a single one decided to go Hitler on the galaxy
link |
02:43:02.880
and say, we need to go out and colonize
link |
02:43:05.280
or not a single one decided for more benevolent reasons
link |
02:43:08.840
to go out and get more resources.
link |
02:43:11.480
That seems like a bit of a stretch, frankly.
link |
02:43:13.840
And this leads into the second thing
link |
02:43:16.560
you challenged me that I might be wrong about,
link |
02:43:18.560
how rare or common is life, you know?
link |
02:43:22.320
So Frank Drake, when he wrote down the Drake equation,
link |
02:43:25.120
multiplied together a huge number of factors
link |
02:43:27.560
and then we don't know any of them.
link |
02:43:29.320
So we know even less about what you get
link |
02:43:31.480
when you multiply together the whole product.
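For reference, the standard form of the equation being discussed, with N the expected number of detectable civilizations in our galaxy:

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
% R_* : rate of star formation
% f_p : fraction of stars with planets
% n_e : habitable planets per such system
% f_l : fraction of those that develop life
% f_i : fraction of those that develop intelligence
% f_c : fraction that become detectable (e.g., via radio)
% L   : how long they remain detectable
```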
link |
02:43:35.120
Since then, a lot of those factors
link |
02:43:37.200
have become much better known.
link |
02:43:38.880
One of his big uncertainties was
link |
02:43:40.840
how common is it that a solar system even has a planet?
link |
02:43:44.360
Well, now we know it's very common.
link |
02:43:46.280
Earth like planets, we now know,
link |
02:43:48.320
are a dime a dozen, there are many, many of them,
link |
02:43:50.440
even in our galaxy.
link |
02:43:52.080
At the same time, you know,
link |
02:43:55.000
I'm a big supporter of the SETI project and its cousins
link |
02:43:58.840
and I think we should keep doing this
link |
02:44:00.520
and we've learned a lot.
link |
02:44:02.400
We've learned that so far,
link |
02:44:03.800
all we have is still unconvincing hints, nothing more, right?
link |
02:44:08.040
And there are certainly many scenarios
link |
02:44:10.320
where it would be dead obvious.
link |
02:44:13.080
If there were a hundred million
link |
02:44:15.920
other human like civilizations in our galaxy,
link |
02:44:19.000
it would not be that hard to notice some of them
link |
02:44:21.600
with today's technology and we haven't, right?
link |
02:44:23.440
So what we can say is, well, okay,
link |
02:44:27.720
we can rule out that there is a human level of civilization
link |
02:44:30.560
on the moon, and in fact in many nearby solar systems,
link |
02:44:34.120
whereas we cannot rule out, of course,
link |
02:44:37.600
that there is something like Earth sitting in a galaxy
link |
02:44:41.560
five billion light years away.
link |
02:44:45.120
But we've ruled out a lot
link |
02:44:46.400
and that's already kind of shocking
link |
02:44:48.480
given that there are all these planets there, you know?
link |
02:44:50.320
So like, where are they?
link |
02:44:51.440
Where are they all?
link |
02:44:52.280
That's the classic Fermi paradox.
link |
02:44:54.880
And so my argument, which might very well be wrong,
link |
02:44:59.240
it's very simple really, it just goes like this.
link |
02:45:01.400
Okay, we have no clue about this.
link |
02:45:05.240
It could be the probability of getting life
link |
02:45:07.800
on a random planet, it could be 10 to the minus one
link |
02:45:11.240
a priori, or 10 to the minus five, 10 to the minus 10, 10 to the minus 20,
link |
02:45:14.680
10 to the minus 30, 10 to the minus 40.
link |
02:45:17.400
Basically every order of magnitude is about equally likely.
link |
02:45:21.400
When you then do the math and ask the question,
link |
02:45:24.120
how close is our nearest neighbor?
link |
02:45:27.400
It's again, equally likely that it's 10 to the 10 meters away,
link |
02:45:30.520
10 to the 20 meters away, 10 to the 30 meters away.
link |
02:45:33.440
We have some nerdy ways of talking about this
link |
02:45:35.640
with Bayesian statistics and a uniform log prior,
link |
02:45:38.080
but that's irrelevant.
link |
02:45:39.360
This is the simple basic argument.
link |
02:45:42.040
And now comes the data.
link |
02:45:43.320
So we can say, okay, there are all these orders
link |
02:45:46.320
of magnitude, 10 to the 26 meters away,
link |
02:45:49.280
there's the edge of our observable universe.
link |
02:45:51.960
If it's farther than that, light hasn't even reached us yet.
link |
02:45:54.840
If it's less than 10 to the 16 meters away,
link |
02:45:58.040
well, that's within about a light year of Earth,
link |
02:46:02.320
closer than even the nearest stars.
link |
02:46:03.800
We can definitely rule that out.
link |
02:46:07.200
So I think about it like this,
link |
02:46:08.520
a priori before we looked at the telescopes,
link |
02:46:11.840
it could be 10 to the 10 meters, 10 to the 20,
link |
02:46:14.320
10 to the 30, 10 to the 40, 10 to the 50, 10 to blah, blah, blah.
link |
02:46:16.520
Equally likely anywhere here.
link |
02:46:18.040
And now we've ruled out like this chunk.
link |
02:46:21.760
And here is the edge of our observable universe already.
link |
02:46:27.880
So I'm certainly not saying I don't think
link |
02:46:30.560
there's any life elsewhere in space.
link |
02:46:32.480
If space is infinite,
link |
02:46:33.680
then you're basically a hundred percent guaranteed
link |
02:46:35.640
that there is, but the probability that there is life,
link |
02:46:41.200
that the nearest neighbor,
link |
02:46:42.280
happens to be in this little region
link |
02:46:43.760
between where we would have seen it already
link |
02:46:47.120
and where we will never see it.
link |
02:46:48.680
is actually significantly less than one, I think.
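A toy numerical version of that argument, assuming purely for illustration a log-uniform prior on the nearest-neighbor distance between 10^9 and 10^40 meters, with the two boundaries mentioned above: roughly 10^16 meters already ruled out as too close, and 10^26 meters as the edge of the observable universe.

```python
# Monte Carlo version of the log-uniform prior argument (illustrative only).
import random

LOW_EXP, HIGH_EXP = 9, 40      # assumed prior: exponent uniform on [9, 40]
RULED_OUT_BELOW = 16           # closer than ~10**16 m: we'd have noticed by now
OBSERVABLE_EDGE = 26           # beyond ~10**26 m: light hasn't reached us yet

samples = 100_000
in_window = 0
for _ in range(samples):
    exponent = random.uniform(LOW_EXP, HIGH_EXP)
    if RULED_OUT_BELOW <= exponent <= OBSERVABLE_EDGE:
        in_window += 1

print(in_window / samples)     # ~ (26 - 16) / (40 - 9), about 0.32 under this prior
```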
link |
02:46:51.920
And I think there's a moral lesson from this,
link |
02:46:54.280
which is really important,
link |
02:46:55.840
which is to be good stewards of this planet
link |
02:47:00.120
and this shot we've had.
link |
02:47:01.440
It can be very dangerous to say,
link |
02:47:03.640
oh, it's fine if we nuke our planet or ruin the climate
link |
02:47:07.640
or mess it up with unaligned AI,
link |
02:47:10.280
because I know there is this nice Star Trek fleet out there.
link |
02:47:15.160
They're gonna swoop in and take over where we failed.
link |
02:47:18.040
Just like it wasn't a big deal
link |
02:47:19.840
that the Easter Island losers wiped themselves out.
link |
02:47:23.040
That's a dangerous way of lulling yourself
link |
02:47:25.200
into a false sense of security.
link |
02:47:27.760
If it's actually the case that it might be up to us
link |
02:47:32.000
and only us, the whole future of intelligent life
link |
02:47:35.000
in our observable universe,
link |
02:47:37.680
then I think it really puts a lot of responsibility
link |
02:47:42.440
on our shoulders.
link |
02:47:43.280
It's inspiring, it's a little bit terrifying,
link |
02:47:45.320
but it's also inspiring.
link |
02:47:46.480
But it's empowering, I think, most of all,
link |
02:47:48.600
because the biggest problem today is,
link |
02:47:50.240
I see this even when I teach,
link |
02:47:53.120
so many people feel that it doesn't matter what they do
link |
02:47:56.360
or we do, we feel disempowered.
link |
02:47:58.760
Oh, it makes no difference.
link |
02:48:02.560
This is about as far from that as you can come.
link |
02:48:05.080
When we realize that what we do
link |
02:48:07.760
on our little spinning ball here in our lifetime
link |
02:48:12.200
could make the difference for the entire future of life
link |
02:48:15.440
in our universe.
link |
02:48:17.080
How empowering is that?
link |
02:48:18.720
Yeah, survival of consciousness.
link |
02:48:20.280
I mean, a very similar kind of empowering aspect
link |
02:48:25.840
of the Drake equation is,
link |
02:48:27.680
say there is a huge number of intelligent civilizations
link |
02:48:31.120
that spring up everywhere,
link |
02:48:32.920
but because of the last factor in the Drake equation,
link |
02:48:34.760
which is the lifetime of a civilization,
link |
02:48:38.000
maybe many of them hit a wall.
link |
02:48:39.880
And just like you said, it's clear that,
link |
02:48:43.360
for us, the great filter,
link |
02:48:45.920
the one possible great filter seems to be coming
link |
02:48:49.040
in the next 100 years.
link |
02:48:51.240
So it's also empowering to say,
link |
02:48:53.720
okay, well, we have a chance to not,
link |
02:48:58.720
I mean, the way great filters work,
link |
02:49:00.120
they just get most of them.
link |
02:49:02.080
Exactly.
link |
02:49:02.920
Nick Bostrom has articulated this really beautifully too.
link |
02:49:06.120
Every time yet another search for life on Mars
link |
02:49:09.480
comes back negative or something,
link |
02:49:11.120
I'm like, yes, yes.
link |
02:49:14.760
Our odds of surviving just went up.
link |
02:49:17.840
You already made the argument in broad brush there, right?
link |
02:49:20.960
But just to unpack it, right?
link |
02:49:22.560
The point is we already know
link |
02:49:26.880
there is a crap ton of planets out there
link |
02:49:28.640
that are Earth-like,
link |
02:49:29.640
and we also know that most of them do not seem
link |
02:49:33.160
to have anything like our kind of life on them.
link |
02:49:35.080
So what went wrong?
link |
02:49:37.240
There's clearly one step along the evolutionary path,
link |
02:49:39.520
at least one filter or roadblock
link |
02:49:42.360
in going from no life to spacefaring life.
link |
02:49:45.600
And where is it?
link |
02:49:48.160
Is it in front of us or is it behind us, right?
link |
02:49:51.640
If there's no filter behind us,
link |
02:49:54.080
and we keep finding all sorts of little mice on Mars
link |
02:50:00.520
or whatever, right?
link |
02:50:01.880
That's actually very depressing
link |
02:50:03.120
because that makes it much more likely
link |
02:50:04.440
that the filter is in front of us.
link |
02:50:06.280
And that what actually is going on
link |
02:50:08.080
is like the ultimate dark joke
link |
02:50:11.080
that whenever a civilization
link |
02:50:13.800
invents sufficiently powerful tech,
link |
02:50:15.640
it's just, you just set your clock.
link |
02:50:17.240
And then after a little while it goes poof
link |
02:50:19.160
for one reason or other and wipes itself out.
link |
02:50:21.840
Now wouldn't that be like utterly depressing
link |
02:50:24.240
if we're actually doomed?
link |
02:50:26.120
Whereas if it turns out that there really
link |
02:50:29.720
is a great filter early on
link |
02:50:31.720
because, for whatever reason, it seems to be really hard
link |
02:50:33.960
to get to the stage of sexually reproducing organisms
link |
02:50:39.160
or even the first ribosome or whatever, right?
link |
02:50:43.320
Or maybe you have lots of planets with dinosaurs and cows,
link |
02:50:47.160
but for some reason they tend to get stuck there
link |
02:50:48.880
and never invent smartphones.
link |
02:50:50.840
All of those are huge boosts for our own odds
link |
02:50:55.200
because we've been there, done that, you know?
link |
02:50:58.840
It doesn't matter how hard or unlikely it was
link |
02:51:01.720
that we got past that roadblock
link |
02:51:03.800
because we already did.
link |
02:51:05.120
And then that makes it likely
link |
02:51:07.520
that the future is in our own hands, we're not doomed.
link |
02:51:11.440
So that's why I think
link |
02:51:14.800
that life being rare in the universe
link |
02:51:18.280
is not just something that there is some evidence for,
link |
02:51:21.440
but also something we should actually hope for.
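The "negative result is good news" logic here is essentially a Bayesian update. Below is a minimal sketch of that update; the hypotheses, priors, and likelihood numbers are purely illustrative assumptions, not anything stated in the conversation.

# Toy Bayesian sketch with made-up numbers, for illustration only.
# Hypotheses about where the "great filter" sits:
#   behind -> the hard step is in our past (e.g., abiogenesis is extremely rare)
#   ahead  -> the hard step is in our future (civilizations tend to self-destruct)
# Assumption: if the filter is behind us, simple life elsewhere (e.g., on Mars)
# is unlikely; if the filter is ahead of us, simple life should be fairly common.

def posterior_filter_behind(prior_behind: float,
                            p_mars_life_if_behind: float,
                            p_mars_life_if_ahead: float,
                            mars_life_found: bool) -> float:
    """Update P(filter is behind us) after one Mars search result, via Bayes' rule."""
    prior_ahead = 1.0 - prior_behind
    if mars_life_found:
        like_behind = p_mars_life_if_behind
        like_ahead = p_mars_life_if_ahead
    else:
        like_behind = 1.0 - p_mars_life_if_behind
        like_ahead = 1.0 - p_mars_life_if_ahead
    evidence = like_behind * prior_behind + like_ahead * prior_ahead
    return like_behind * prior_behind / evidence

# Illustrative numbers: 50/50 prior, life on Mars unlikely if the filter is behind us.
p = posterior_filter_behind(prior_behind=0.5,
                            p_mars_life_if_behind=0.05,
                            p_mars_life_if_ahead=0.6,
                            mars_life_found=False)
print(f"P(filter behind us) after a negative Mars search: {p:.2f}")  # ~0.70

Under these assumed numbers, a negative search nudges the posterior toward "the filter is behind us"; a positive finding would push the other way, which is the depressing case described above.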
link |
02:51:26.680
So that's the end, the mortality,
link |
02:51:29.920
the death of human civilization
link |
02:51:31.520
that we've been discussing, and life
link |
02:51:33.120
maybe prospering beyond any kind of great filter.
link |
02:51:36.680
Do you think about your own death?
link |
02:51:39.440
Does it make you sad that you may not witness some of the,
link |
02:51:45.760
you know, you lead a research group
link |
02:51:47.440
working on some of the biggest questions
link |
02:51:49.040
in the universe actually,
link |
02:51:51.080
both on the physics and the AI side?
link |
02:51:53.720
Does it make you sad that you may not be able
link |
02:51:55.560
to see some of these exciting things come to fruition
link |
02:51:58.840
that we've been talking about?
link |
02:52:00.640
Of course, of course it sucks, the fact that I'm gonna die.
link |
02:52:04.840
I remember once when I was much younger,
link |
02:52:07.200
my dad made this remark that life is fundamentally tragic.
link |
02:52:10.800
And I'm like, what are you talking about, daddy?
link |
02:52:13.080
And then many years later, I felt,
link |
02:52:15.640
now I feel I totally understand what he means.
link |
02:52:17.320
You know, we grow up, we're little kids
link |
02:52:19.040
and everything is infinite and it's so cool.
link |
02:52:21.920
And then suddenly we find out that actually, you know,
link |
02:52:25.800
we've only got so long,
link |
02:52:26.840
and you're gonna get game over at some point.
link |
02:52:30.280
So of course it's something that's sad.
link |
02:52:36.400
Are you afraid?
link |
02:52:42.640
No, not in the sense that I think anything terrible
link |
02:52:46.000
is gonna happen after I die or anything like that.
link |
02:52:48.240
No, I think it's really gonna be a game over,
link |
02:52:50.960
but it's more that it makes me very acutely aware
link |
02:52:56.280
of what a wonderful gift this is
link |
02:52:57.920
that I get to be alive right now.
link |
02:53:00.200
And it's a steady reminder to just live life to the fullest
link |
02:53:04.680
and really enjoy it because it is finite, you know.
link |
02:53:08.000
And I think actually we all get
link |
02:53:11.240
the regular reminders when someone near and dear to us dies
link |
02:53:14.280
that one day it's gonna be our turn.
link |
02:53:19.560
It adds this kind of focus.
link |
02:53:21.480
I wonder what it would feel like actually
link |
02:53:23.680
to be an immortal being, whether they might even enjoy
link |
02:53:26.960
some of the wonderful things of life a little bit less
link |
02:53:29.440
just because there isn't that.
link |
02:53:33.400
Finiteness?
link |
02:53:34.320
Yeah.
link |
02:53:35.160
Do you think that could be a feature, not a bug,
link |
02:53:38.040
the fact that we beings are finite?
link |
02:53:42.040
Maybe there's lessons for engineering
link |
02:53:44.320
in artificial intelligence systems as well
link |
02:53:46.940
that are conscious.
link |
02:53:48.400
Like, is it possible
link |
02:53:53.920
that the reason the pistachio ice cream is delicious
link |
02:53:56.960
is the fact that you're going to die one day
link |
02:53:59.920
and you will not have all the pistachio ice cream
link |
02:54:03.720
that you could eat because of that fact?
link |
02:54:06.200
Well, let me say two things.
link |
02:54:07.560
First of all, it's actually quite profound
link |
02:54:09.660
what you're saying.
link |
02:54:10.500
I do think I appreciate the pistachio ice cream
link |
02:54:12.300
a lot more knowing that
link |
02:54:14.400
there's only a finite number of times I get to enjoy that.
link |
02:54:17.760
And I can only remember a finite number of times
link |
02:54:19.900
in the past.
link |
02:54:21.720
And moreover, my life is not so long
link |
02:54:25.120
that it just starts to feel like things are repeating
link |
02:54:26.800
themselves in general.
link |
02:54:28.120
It's so new and fresh.
link |
02:54:30.520
I also think though that death is a little bit overrated
link |
02:54:36.400
in the sense that it comes from a sort of outdated view
link |
02:54:42.020
of physics and what life actually is.
link |
02:54:45.640
Because if you ask, okay, what is it that's gonna die
link |
02:54:49.120
exactly, what am I really?
link |
02:54:52.040
When I say I feel sad about the idea of myself dying,
link |
02:54:56.000
am I really sad that this skin cell here is gonna die?
link |
02:54:59.180
Of course not, because it's gonna die next week anyway
link |
02:55:01.600
and I'll grow a new one, right?
link |
02:55:04.020
And it's not any of my cells that I'm associating really
link |
02:55:08.440
with who I really am.
link |
02:55:11.000
Nor is it any of my atoms or quarks or electrons.
link |
02:55:15.640
In fact, basically all of my atoms get replaced
link |
02:55:19.380
on a regular basis, right?
link |
02:55:20.520
So what is it that's really me
link |
02:55:22.880
from a more modern physics perspective?
link |
02:55:24.320
It's the information processing in me.
link |
02:55:28.800
That's my memories,
link |
02:55:31.520
that's my values, my dreams, my passion, my love.
link |
02:55:40.560
That's what's really fundamentally me.
link |
02:55:43.580
And frankly, not all of that will die when my body dies.
link |
02:55:48.580
Like Richard Feynman, for example, his body died of cancer,
link |
02:55:55.100
but many of his ideas that he felt made him very him
link |
02:55:59.720
actually live on.
link |
02:56:01.400
This is my own little personal tribute to Richard Feynman.
link |
02:56:04.100
I try to keep a little bit of him alive in myself.
link |
02:56:07.500
I've even quoted him today, right?
link |
02:56:09.620
Yeah, he almost came alive for a brief moment
link |
02:56:11.740
in this conversation, yeah.
link |
02:56:13.320
Yeah, and this honestly gives me some solace.
link |
02:56:17.500
When I work as a teacher, I feel,
link |
02:56:20.780
if I can actually share a bit about myself
link |
02:56:25.820
that my students feel worthy enough to copy and adopt
link |
02:56:30.740
as some part of things that they know
link |
02:56:33.140
or they believe or aspire to,
link |
02:56:36.140
then I live on also a little bit in them, right?
link |
02:56:39.540
And so being a teacher
link |
02:56:44.540
is something also that contributes
link |
02:56:49.740
to making me a little teeny bit less mortal, right?
link |
02:56:53.740
Because I'm not, at least not all gonna die all at once,
link |
02:56:56.740
right?
link |
02:56:57.580
And I find that a beautiful tribute to people
link |
02:56:59.820
we do respect.
link |
02:57:01.020
If we can remember them and carry in us
link |
02:57:05.740
the things that we felt were the most awesome about them,
link |
02:57:10.260
right, then they live on.
link |
02:57:11.620
And I'm getting a bit emotional here,
link |
02:57:13.580
but it's a very beautiful idea you bring up there.
link |
02:57:16.140
I think we should stop this old-fashioned materialism
link |
02:57:19.620
of just equating who we are with our quarks and electrons.
link |
02:57:25.220
There's no scientific basis for that really.
link |
02:57:27.820
And it's also very uninspiring.
link |
02:57:33.180
Now, if you look a little bit towards the future, right?
link |
02:57:36.980
One thing which really sucks about humans dying is that even
link |
02:57:40.740
though some of their teachings and memories and stories
link |
02:57:43.300
and ethics and so on will be copied by those around them,
link |
02:57:47.540
hopefully, a lot of it can't be copied
link |
02:57:50.260
and just dies with them, with their brain.
link |
02:57:51.980
And that really sucks.
link |
02:57:53.140
That's the fundamental reason why we find it so tragic
link |
02:57:56.860
when someone goes from having all this information there
link |
02:57:59.660
to it just being gone, ruined, right?
link |
02:58:03.460
With more post biological intelligence,
link |
02:58:07.460
that's going to shift a lot, right?
link |
02:58:10.940
The only reason it's so hard to make a backup of your brain
link |
02:58:13.980
in its entirety is exactly
link |
02:58:15.380
because it wasn't built for that, right?
link |
02:58:17.580
If you have a future machine intelligence,
link |
02:58:21.540
there's no reason why it has to die at all.
link |
02:58:24.300
If you want to copy it, whatever it is,
link |
02:58:28.300
into some other machine intelligence,
link |
02:58:30.780
whatever it is, into some other quark blob, right?
link |
02:58:36.660
You can copy not just some of it, but all of it, right?
link |
02:58:39.540
And so in that sense,
link |
02:58:45.020
you can get immortality because all the information
link |
02:58:48.300
can be copied out of any individual entity.
link |
02:58:51.940
And it's not just mortality that will change
link |
02:58:54.220
if we get to more post biological life.
link |
02:58:56.900
It's also with that, very much the whole individualism
link |
02:59:03.180
we have now, right?
link |
02:59:04.020
The reason that we make such a big difference
link |
02:59:05.740
between me and you is exactly because
link |
02:59:09.100
we're a little bit limited in how much we can copy.
link |
02:59:10.940
Like I would just love to go like this
link |
02:59:13.300
and copy your Russian skills, Russian speaking skills.
link |
02:59:17.780
Wouldn't it be awesome?
link |
02:59:18.820
But I can't, I have to actually work for years
link |
02:59:21.980
if I want to get better at it.
link |
02:59:23.900
But if we were robots.
link |
02:59:27.940
Just copy and paste freely, then that goes away completely.
link |
02:59:31.820
It washes away the sense of what immortality is.
link |
02:59:35.140
And also individuality a little bit, right?
link |
02:59:37.460
We would start feeling much more,
link |
02:59:40.620
maybe we would feel much more collaborative with each other
link |
02:59:43.540
if we can just, hey, you know,
link |
02:59:45.620
you can give me your Russian
link |
02:59:46.540
and I'll give you whatever,
link |
02:59:47.940
and suddenly you can speak Swedish.
link |
02:59:50.220
Maybe that's a less good trade for you,
link |
02:59:52.060
but whatever else you want from my brain, right?
link |
02:59:54.620
And there've been a lot of sci-fi stories
link |
02:59:58.060
about hive minds and so on,
link |
02:59:59.540
where experiences
link |
03:00:02.140
can be more broadly shared.
link |
03:00:05.500
And I don't pretend
link |
03:00:08.540
to know what it would feel like
link |
03:00:12.140
to be a super intelligent machine,
link |
03:00:16.940
but I'm quite confident that however it feels
link |
03:00:20.420
about mortality and individuality
link |
03:00:22.420
will be very, very different from how it is for us.
link |
03:00:26.660
Well, for us, mortality and finiteness
link |
03:00:30.500
seem to be pretty important at this particular moment.
link |
03:00:34.100
And so all good things must come to an end.
link |
03:00:37.460
Just like this conversation, Max.
link |
03:00:39.100
I saw that coming.
link |
03:00:40.660
Sorry, this is the world's worst transition.
link |
03:00:44.660
I could talk to you forever.
link |
03:00:45.820
It's such a huge honor that you've spent time with me.
link |
03:00:49.100
The honor is mine.
link |
03:00:50.140
Thank you so much for getting me essentially
link |
03:00:53.380
to start this podcast by doing the first conversation,
link |
03:00:55.980
making me realize I was falling in love
link |
03:00:58.500
with conversation in itself.
link |
03:01:01.140
And thank you so much for inspiring
link |
03:01:03.220
so many people in the world with your books,
link |
03:01:05.380
with your research, with your talking,
link |
03:01:07.740
and with this ripple effect of friends,
link |
03:01:12.780
including Elon and everybody else that you inspire.
link |
03:01:15.460
So thank you so much for talking today.
link |
03:01:18.140
Thank you, I feel so fortunate
link |
03:01:21.540
that you're doing this podcast
link |
03:01:23.620
and getting so many interesting voices out there
link |
03:01:27.780
into the ether and not just the five second sound bites,
link |
03:01:30.940
but so many of the interviews I've watched you do.
link |
03:01:33.060
You really let people go into depth
link |
03:01:36.140
in a way which we sorely need in this day and age.
link |
03:01:38.660
That I got to be number one, I feel super honored.
link |
03:01:41.740
Yeah, you started it.
link |
03:01:43.500
Thank you so much, Max.
link |
03:01:45.620
Thanks for listening to this conversation
link |
03:01:47.180
with Max Tegmark, and thank you to our sponsors,
link |
03:01:50.260
the Jordan Harbinger Show, Four Sigmatic Mushroom Coffee,
link |
03:01:54.860
BetterHelp Online Therapy, and ExpressVPN.
link |
03:01:58.940
So the choice is wisdom, caffeine, sanity, or privacy.
link |
03:02:04.420
Choose wisely, my friends.
link |
03:02:05.820
And if you wish, click the sponsor links below
link |
03:02:08.740
to get a discount and to support this podcast.
link |
03:02:11.860
And now let me leave you with some words from Max Tegmark.
link |
03:02:15.100
If consciousness is the way that information feels
link |
03:02:18.860
when it's processed in certain ways,
link |
03:02:21.380
then it must be substrate independent.
link |
03:02:24.220
It's only the structure of information processing
link |
03:02:26.660
that matters, not the structure of the matter
link |
03:02:29.100
doing the information processing.
link |
03:02:31.900
Thank you for listening, and hope to see you next time.