
Max Tegmark: AI and Physics | Lex Fridman Podcast #155



link |
00:00:00.000
The following is a conversation with Max Tegmark, his second time on the podcast.
link |
00:00:04.760
In fact, the previous conversation was episode number one of this very podcast.
link |
00:00:10.960
He is a physicist and artificial intelligence researcher at MIT, cofounder
link |
00:00:16.800
of the Future of Life Institute and author of Life 3.0: Being Human
link |
00:00:22.080
in the Age of Artificial Intelligence.
link |
00:00:24.560
He's also the head of a bunch of other huge fascinating projects and has
link |
00:00:28.760
written a lot of different things that you should definitely check out.
link |
00:00:32.200
He has been one of the key humans who has been outspoken about long term
link |
00:00:36.400
existential risks of AI and also its exciting possibilities and solutions
link |
00:00:41.240
to real world problems, most recently at the intersection of AI and physics.
link |
00:00:46.400
And also in reengineering the algorithms that divide us by controlling
link |
00:00:51.720
the information we see and thereby creating bubbles and all other kinds
link |
00:00:56.120
of complex social phenomena that we see today.
link |
00:00:59.600
In general, he's one of the most passionate and brilliant people I
link |
00:01:02.600
have the fortune of knowing.
link |
00:01:04.320
I hope to talk to him many more times on this podcast in the future.
link |
00:01:08.240
Quick mention of our sponsors, the Jordan Harbinger Show, Four
link |
00:01:12.360
Sigmatic Mushroom Coffee, BetterHelp Online Therapy and ExpressVPN.
link |
00:01:18.440
So the choice is wisdom, caffeine, sanity or privacy.
link |
00:01:23.000
Choose wisely, my friends, and if you wish, click the sponsor links
link |
00:01:26.440
below to get a discount and to support this podcast.
link |
00:01:29.920
As a side note, let me say that many of the researchers in the machine
link |
00:01:34.280
learning and artificial intelligence communities do not spend much time
link |
00:01:38.520
thinking deeply about existential risks of AI.
link |
00:01:42.120
Because our current algorithms are seen as useful but dumb, it's difficult
link |
00:01:46.160
to imagine how they may become destructive to the fabric of human
link |
00:01:49.720
civilization in the foreseeable future.
link |
00:01:51.920
I understand this mindset, but it's very troublesome to me.
link |
00:01:55.320
This is both a dangerous and uninspiring perspective, reminiscent of
link |
00:02:00.320
the lobster sitting in a pot of lukewarm water that a minute ago was cold.
link |
00:02:05.320
I feel a kinship with this lobster.
link |
00:02:07.720
I believe that already the algorithms that drive our interaction on social
link |
00:02:11.400
media have an intelligence and power that far outstrip the intelligence
link |
00:02:15.840
and power of any one human being.
link |
00:02:17.560
Now really is the time to think about this, to define the trajectory
link |
00:02:21.640
of the interplay of technology and human beings in our society.
link |
00:02:25.480
I think that the future of human civilization very well may be at
link |
00:02:29.280
stake over this very question of the role of artificial intelligence
link |
00:02:33.120
in our society.
link |
00:02:34.640
If you enjoy this thing, subscribe on YouTube, review it on Apple
link |
00:02:37.560
Podcasts, follow on Spotify, support on Patreon, or connect with me on
link |
00:02:41.600
Twitter, at Lex Fridman.
link |
00:02:43.200
And now, here's my conversation with Max Tegmark.
link |
00:02:47.800
So people might not know this, but you were actually episode number
link |
00:02:51.120
one of this podcast just a couple of years ago, and now we're back.
link |
00:02:57.320
And it so happens that a lot of exciting things happened in both physics
link |
00:03:01.760
and artificial intelligence, both fields that you're super passionate about.
link |
00:03:06.560
Can we try to catch up to some of the exciting things happening in artificial
link |
00:03:11.600
intelligence, especially in the context of the way it's cracking open the
link |
00:03:16.040
different problems of the sciences?
link |
00:03:19.400
Yeah, I'd love to, especially now as we start 2021 here, it's a really fun
link |
00:03:24.920
time to think about what were the biggest breakthroughs in AI.
link |
00:03:29.120
Not necessarily the ones the media wrote about, but the ones that really matter.
link |
00:03:32.840
And what does that mean for our ability to do better science?
link |
00:03:36.840
What does it mean for our ability to do better science?
link |
00:03:41.080
To help people around the world?
link |
00:03:44.200
And what does it mean for new problems that they could cause if we're not
link |
00:03:49.040
smart enough to avoid them?
link |
00:03:50.200
So, you know, what do we learn basically from this?
link |
00:03:53.160
Yes, absolutely.
link |
00:03:53.880
So one of the amazing things you're part of is the AI Institute for
link |
00:03:57.920
Artificial Intelligence and Fundamental Interactions.
link |
00:04:01.920
What's up with this institute?
link |
00:04:03.480
What are you working on?
link |
00:04:04.920
What are you thinking about?
link |
00:04:05.840
Well, the idea is something I'm very on fire with, which is basically AI
link |
00:04:11.640
meets physics.
link |
00:04:13.240
And, you know, it's been almost five years now since I shifted my own MIT
link |
00:04:18.640
research from physics to machine learning.
link |
00:04:22.040
And in the beginning, I noticed a lot of my colleagues, even though they
link |
00:04:24.520
were polite about it, were kind of like, what is Max doing?
link |
00:04:29.040
What is this weird stuff?
link |
00:04:30.280
He's lost his mind.
link |
00:04:31.280
But then gradually, I, together with some colleagues, was
link |
00:04:36.680
able to persuade more and more of the other professors in our physics
link |
00:04:42.120
department to get interested in this.
link |
00:04:43.720
And now we got this amazing NSF center, so 20 million bucks for the next
link |
00:04:50.000
five years, MIT and a bunch of neighboring universities here also.
link |
00:04:54.400
And I noticed now those colleagues who were looking at me funny have
link |
00:04:57.720
stopped asking what the point is of this, because it's becoming more clear.
link |
00:05:03.720
And I really believe that, of course, AI can help physics a lot to do
link |
00:05:08.840
better physics, but physics can also help AI a lot, both by building better
link |
00:05:16.440
hardware.
link |
00:05:17.560
My colleague, Marin Soljačić, for example, is working on an optical chip for
link |
00:05:23.120
much faster machine learning, where the computation is done, not by moving
link |
00:05:27.160
electrons around, but by moving photons around, dramatically less energy
link |
00:05:32.400
use, faster, better.
link |
00:05:35.240
We can also help AI a lot, I think, by having a different set of tools and a
link |
00:05:43.840
different, maybe more audacious attitude.
link |
00:05:47.000
You know, AI has, to a significant extent, been an engineering discipline,
link |
00:05:52.440
where you're just trying to make things that work.
link |
00:05:54.000
And being more interested in maybe selling them than in figuring out
link |
00:05:57.600
exactly how they work, and proving theorems that they will always work.
link |
00:06:02.880
Contrast that with physics, you know, when Elon Musk sends a rocket to the
link |
00:06:07.840
International Space Station, they didn't just train with machine learning,
link |
00:06:11.840
oh, let's fire it a little bit left, more to the left, a bit more to the right,
link |
00:06:14.480
oh, that also missed, let's try here. No, you know, we figured out Newton's
link |
00:06:19.200
laws of gravitation and other things and got a really deep fundamental
link |
00:06:24.480
understanding, and that's what gives us such confidence in rockets.
link |
00:06:30.680
And my vision is that in the future, all machine learning systems that
link |
00:06:37.480
actually have impact on people's lives will be understood at a really,
link |
00:06:41.440
really deep level, right, so we trust them, not because some sales rep told
link |
00:06:45.440
us to, but because they've earned our trust, and for really safety critical
link |
00:06:51.440
things, we can even prove that they will always do what we expect them to do.
link |
00:06:55.440
That's very much the physics mindset, so it's interesting if you look at
link |
00:06:59.440
big breakthroughs that have happened in machine learning this year, you know,
link |
00:07:03.440
from the dancing robots, you know, which are pretty fantastic, not just because they're cool,
link |
00:07:09.440
but if you just think about not that many years ago, this YouTube video at
link |
00:07:14.440
this DARPA challenge where the MIT robot comes out of the car and face
link |
00:07:18.440
plants, how far we've come in just a few years.
link |
00:07:23.440
Similarly, AlphaFold 2, you know, crushing the protein folding problem,
link |
00:07:30.440
we can talk more about implications for medical research and stuff,
link |
00:07:34.440
but hey, you know, that's huge progress.
link |
00:07:38.440
You can look at GPT-3, which can spout off English text,
link |
00:07:46.440
which sometimes really, really blows you away.
link |
00:07:49.440
You can look at DeepMind's MuZero,
link |
00:07:53.440
which doesn't just kick our butt in Go and chess and shogi,
link |
00:07:58.440
but also in all these Atari games, and you don't even have to teach it
link |
00:08:01.440
the rules now.
link |
00:08:02.440
You know, what all of those have in common is besides being powerful is
link |
00:08:07.440
we don't fully understand how they work.
link |
00:08:10.440
And that's fine if it's just some dancing robots,
link |
00:08:13.440
and the worst thing that can happen is they face plant, right?
link |
00:08:16.440
Or if they're playing Go, and the worst thing that can happen is
link |
00:08:19.440
that they make a bad move and lose the game, right?
link |
00:08:22.440
It's less fine if that's what's controlling your self driving car
link |
00:08:26.440
or your nuclear power plant.
link |
00:08:28.440
And we've seen already that even though Hollywood had all these movies
link |
00:08:34.440
where they try to make us worry about the wrong things like machines
link |
00:08:37.440
turning evil, the actual bad things that have happened with automation
link |
00:08:42.440
have not been machines turning evil.
link |
00:08:45.440
They've been caused by over trust in things we didn't understand
link |
00:08:49.440
as well as we thought we did, right?
link |
00:08:51.440
Even very simple automated systems like what Boeing put into the 737 Max, right?
link |
00:08:58.440
Yes.
link |
00:08:59.440
Killed a lot of people.
link |
00:09:00.440
Was it that that little simple system was evil?
link |
00:09:02.440
Of course not, but we didn't understand it
link |
00:09:05.440
as well as we should have, right?
link |
00:09:07.440
And we trusted without understanding.
link |
00:09:10.440
Exactly.
link |
00:09:11.440
And over trust.
link |
00:09:12.440
We didn't even understand that we didn't understand, right?
link |
00:09:15.440
The humility is really at the core of being a scientist.
link |
00:09:19.440
I think step one, if you want to be a scientist,
link |
00:09:21.440
is don't ever fool yourself into thinking you understand things
link |
00:09:24.440
when you actually don't, right?
link |
00:09:26.440
That's probably good advice for humans in general.
link |
00:09:29.440
I think humility in general can do us good.
link |
00:09:31.440
In science, it's so spectacular.
link |
00:09:33.440
Why did we have the wrong theory of gravity ever from Aristotle
link |
00:09:37.440
onward until close to Galileo's time?
link |
00:09:40.440
Why would we believe something so dumb
link |
00:09:42.440
as that if I throw this water bottle,
link |
00:09:44.440
it's going to go up with constant speed
link |
00:09:47.440
until it realizes that its natural motion is down.
link |
00:09:49.440
It changes its mind.
link |
00:09:51.440
Because people just kind of assumed
link |
00:09:54.440
Aristotle was right.
link |
00:09:55.440
He's an authority.
link |
00:09:56.440
We understand that.
link |
00:09:57.440
Why did we believe things like that the sun is going around the Earth?
link |
00:10:01.440
Why did we believe that time flows at the same rate
link |
00:10:04.440
for everyone until Einstein?
link |
00:10:06.440
Same exact mistake over and over again.
link |
00:10:08.440
We just weren't humble enough to acknowledge
link |
00:10:11.440
that we actually didn't know for sure.
link |
00:10:13.440
We assumed we knew.
link |
00:10:15.440
So we didn't discover the truth
link |
00:10:17.440
because we assumed there was nothing there to be discovered, right?
link |
00:10:20.440
There was something to be discovered about the 737 Max.
link |
00:10:24.440
And if you had been a bit more suspicious
link |
00:10:26.440
and tested it better, we would have found it.
link |
00:10:28.440
And it's the same thing with most harm
link |
00:10:30.440
that's been done by automation so far, I would say.
link |
00:10:33.440
Did you hear of a company called Knight Capital?
link |
00:10:36.440
No.
link |
00:10:37.440
So good.
link |
00:10:38.440
That means you didn't invest in them earlier.
link |
00:10:41.440
They deployed this automated trading system.
link |
00:10:44.440
Yes.
link |
00:10:45.440
All nice and shiny.
link |
00:10:46.440
They didn't understand it as well as they thought.
link |
00:10:49.440
And it went on losing about 10 million bucks per minute
link |
00:10:52.440
for 44 minutes straight.
link |
00:10:54.440
No.
link |
00:10:56.440
Until someone presumably was like, oh, no, shut this off.
link |
00:10:58.440
You know, was it evil?
link |
00:11:00.440
No.
link |
00:11:01.440
It was, again, misplaced trust,
link |
00:11:03.440
something they didn't fully understand, right?
link |
00:11:05.440
And there have been so many,
link |
00:11:08.440
even when people have been killed by robots,
link |
00:11:10.440
it's just quite rare still.
link |
00:11:12.440
But in factory accidents,
link |
00:11:14.440
it's in every single case been not malice,
link |
00:11:17.440
just that the robot didn't understand that,
link |
00:11:19.440
hey, a human is different from an auto part or whatever.
link |
00:11:22.440
So this is where I think there's so much opportunity
link |
00:11:27.440
for a physics approach,
link |
00:11:29.440
where you just aim for a higher level of understanding.
link |
00:11:33.440
And if you look at all these systems that we talked about
link |
00:11:37.440
from reinforcement learning systems and dancing robots
link |
00:11:42.440
to all these neural networks that power GPT3
link |
00:11:46.440
and go playing software stuff,
link |
00:11:49.440
they're all basically black boxes,
link |
00:11:52.440
much like not so different from if you teach a human something,
link |
00:11:55.440
you have no idea how their brain works, right?
link |
00:11:57.440
Except the human brain at least has been error corrected
link |
00:12:01.440
during many, many centuries of evolution
link |
00:12:04.440
in a way that some of these systems have not, right?
link |
00:12:07.440
And my MIT research is entirely focused on
link |
00:12:10.440
demystifying this black box.
link |
00:12:12.440
Intelligible intelligence is my slogan.
link |
00:12:15.440
That's a good line.
link |
00:12:16.440
Intelligible intelligence.
link |
00:12:18.440
Yeah, it's not that we shouldn't settle for something
link |
00:12:20.440
that seems intelligent,
link |
00:12:21.440
but it should be intelligible
link |
00:12:23.440
so that we actually trust it because we understand it, right?
link |
00:12:26.440
Like, again, Elon trusts his rockets
link |
00:12:28.440
because he understands Newton's laws
link |
00:12:30.440
and thrusts and how everything works.
link |
00:12:33.440
And let me tell you,
link |
00:12:34.440
can I tell you why I'm optimistic about this?
link |
00:12:36.440
Yes.
link |
00:12:37.440
I think we've made a bit of a mistake
link |
00:12:41.440
where some people still think
link |
00:12:43.440
that somehow we're never going to understand neural networks.
link |
00:12:47.440
And we're just going to have to learn to live with this.
link |
00:12:49.440
It's this very powerful black box.
link |
00:12:51.440
Basically, for those who haven't spent time
link |
00:12:55.440
building their own,
link |
00:12:56.440
it's super simple what happens inside.
link |
00:12:58.440
You send in a long list of numbers,
link |
00:13:00.440
and then you do a bunch of operations on them,
link |
00:13:04.440
multiply by matrices, et cetera, et cetera,
link |
00:13:06.440
and some other numbers come out.
link |
00:13:07.440
That's the output of it.
link |
00:13:09.440
And then there are a bunch of knobs you can tune.
link |
00:13:13.440
And when you change them,
link |
00:13:14.440
it affects the computation, the input output relation.
link |
00:13:17.440
And then you just give the computer some definition of good,
link |
00:13:20.440
and it keeps optimizing these knobs
link |
00:13:22.440
until it performs as good as possible.
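For readers who want to see the bare bones of what Max is describing, here is a minimal sketch in Python: a list of numbers goes in, a couple of matrix multiplications with tunable knobs happen, some other numbers come out, and a "definition of good" scores the result. All sizes, names, and numbers are invented for illustration.

```python
# A minimal sketch of the "black box" described above: numbers in, a few
# matrix multiplies with tunable knobs (weights), numbers out, and a
# definition of "good" (a loss) that we try to improve.
import numpy as np

rng = np.random.default_rng(0)

# The "knobs": two weight matrices chosen at random to start.
W1 = rng.normal(size=(8, 4))   # maps a 4-number input to 8 hidden numbers
W2 = rng.normal(size=(1, 8))   # maps those 8 numbers to a single output

def network(x):
    """Send in a list of numbers, do a bunch of operations, get numbers out."""
    hidden = np.tanh(W1 @ x)   # matrix multiply plus a simple nonlinearity
    return W2 @ hidden         # another matrix multiply: the output

def loss(y_pred, y_true):
    """The 'definition of good': smaller is better."""
    return float(np.mean((y_pred - y_true) ** 2))

# Example: one input of four numbers and a target output.
x = np.array([0.1, -0.5, 2.0, 0.3])
y_true = np.array([1.0])
print("output:", network(x), "loss:", loss(network(x), y_true))
```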
link |
00:13:24.440
And often you go, like, wow, that's really good.
link |
00:13:26.440
This robot can dance.
link |
00:13:28.440
Or this machine is beating me at chess now.
link |
00:13:31.440
And in the end, you have something,
link |
00:13:33.440
which even though you can look inside it,
link |
00:13:35.440
you have very little idea of how it works.
link |
00:13:38.440
You can print out tables of all the millions of parameters in there.
link |
00:13:42.440
Is it crystal clear now how it's working?
link |
00:13:44.440
And of course not, right?
link |
00:13:46.440
Many of my colleagues seem willing to settle for that.
link |
00:13:48.440
And I'm like, no.
link |
00:13:50.440
That's like the halfway point.
link |
00:13:53.440
Some have even gone as far as sort of guessing
link |
00:13:57.440
that the mystery, the inscrutability of this
link |
00:14:00.440
is where some of the power comes from
link |
00:14:02.440
and some sort of mysticism.
link |
00:14:04.440
I think that's total nonsense.
link |
00:14:06.440
I think the real power of neural networks
link |
00:14:10.440
comes not from inscrutability,
link |
00:14:12.440
but from differentiability.
link |
00:14:14.440
And what I mean by that is simply that
link |
00:14:18.440
the output changes only smoothly
link |
00:14:21.440
if you tweak your knobs.
link |
00:14:23.440
And then you can use all these powerful methods
link |
00:14:26.440
we have for optimization in science.
link |
00:14:28.440
We can just tweak them a little bit
link |
00:14:29.440
and see, did that get better or worse?
link |
00:14:31.440
That's the fundamental idea of machine learning,
link |
00:14:33.440
that the machine itself can keep optimizing
link |
00:14:35.440
until it gets better.
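A minimal sketch of that tweak-and-check loop, assuming a toy model with two knobs and made-up data: because the output changes smoothly with the knobs, a tiny tweak tells you whether things got better or worse, and repeating that is the whole optimization.

```python
# "Tweak the knobs a little and see if it got better or worse."
# The toy model, data, and step sizes below are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
xs = rng.uniform(-1, 1, size=100)
ys = 3.0 * xs - 0.5                      # the "truth" the model should learn

knobs = np.array([0.0, 0.0])             # two tunable parameters: slope, offset

def loss(k):
    pred = k[0] * xs + k[1]
    return np.mean((pred - ys) ** 2)      # the definition of "good"

eps, step = 1e-4, 0.1
for _ in range(200):
    # Numerically estimate how the loss responds to a tiny tweak of each knob.
    grad = np.array([
        (loss(knobs + eps * np.eye(2)[i]) - loss(knobs)) / eps
        for i in range(2)
    ])
    knobs -= step * grad                  # move in the direction that got better

print("learned knobs:", knobs)            # approaches [3.0, -0.5]
```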
link |
00:14:37.440
Suppose you wrote this algorithm instead
link |
00:14:40.440
in Python or some other programming language.
link |
00:14:43.440
And then what the knobs did was
link |
00:14:45.440
they just changed random letters in your code.
link |
00:14:48.440
Now it would just epically fail, right?
link |
00:14:51.440
You change one thing and instead of saying print,
link |
00:14:53.440
it says something else, and you get a syntax error.
link |
00:14:56.440
You don't even know, was that for the better
link |
00:14:58.440
or for the worse, right?
link |
00:15:00.440
This to me is, this is what I believe
link |
00:15:02.440
is the fundamental power of neural networks.
link |
00:15:05.440
Just to clarify, the changing the different letters
link |
00:15:07.440
in a program would not be a differentiable process.
link |
00:15:10.440
It would make it an invalid program typically
link |
00:15:13.440
and then you wouldn't even know
link |
00:15:15.440
if you changed more letters,
link |
00:15:16.440
if it would make it work again, right?
link |
00:15:18.440
So that's the magic of neural networks,
link |
00:15:21.440
the inscrutability.
link |
00:15:23.440
The differentiability, that every setting
link |
00:15:25.440
of the parameters is a program
link |
00:15:27.440
and you can tell is it better or worse, right?
link |
00:15:29.440
So you don't like the poetry of the mystery
link |
00:15:32.440
of neural networks as the source of its power?
link |
00:15:34.440
I generally like poetry, but...
link |
00:15:37.440
Not in this case.
link |
00:15:38.440
It's so misleading and above all,
link |
00:15:41.440
it shortchanges us, it makes us underestimate
link |
00:15:44.440
the good things we can accomplish
link |
00:15:46.440
because so what we've been doing in my group
link |
00:15:48.440
is basically step one,
link |
00:15:50.440
train the mysterious neural network
link |
00:15:52.440
to do something well.
link |
00:15:54.440
And then step two, do some additional AI techniques
link |
00:15:58.440
to see if we can now transform this black box
link |
00:16:02.440
into something equally intelligent
link |
00:16:04.440
that you can actually understand.
link |
00:16:06.440
So for example, I'll give you one example
link |
00:16:08.440
of this AI Feynman project that we just published, right?
link |
00:16:11.440
So we took the 100 most famous
link |
00:16:15.440
or complicated equations
link |
00:16:17.440
from one of my favorite physics textbooks,
link |
00:16:20.440
in fact the one that got me into physics
link |
00:16:22.440
in the first place, the Feynman lectures on physics.
link |
00:16:25.440
And so you have a formula, you know,
link |
00:16:28.440
maybe it has what goes into the formula
link |
00:16:31.440
as six different variables
link |
00:16:33.440
and then what comes out as one.
link |
00:16:35.440
So then you can make like a giant Excel spreadsheet
link |
00:16:37.440
with seven columns.
link |
00:16:39.440
You put in just random numbers for the six columns
link |
00:16:41.440
for those six input variables
link |
00:16:43.440
and then you calculate with the formula
link |
00:16:45.440
the seventh column, the output.
link |
00:16:47.440
So maybe it's like the force equals
link |
00:16:49.440
in the last column some function of the other.
link |
00:16:51.440
And now the task is, okay,
link |
00:16:53.440
if I don't tell you what the formula was,
link |
00:16:55.440
can you figure that out
link |
00:16:57.440
from looking at my spreadsheet that I gave you?
link |
00:16:59.440
This problem is called symbolic regression.
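A small sketch of the setup he describes, with an invented example formula standing in for the hidden one: six columns of random inputs, a seventh column computed from them, and the symbolic-regression task is to recover the formula from the table alone.

```python
# The "giant Excel spreadsheet": six random input columns, one output column
# computed by a formula the solver is never shown. The particular formula
# here is just an example for illustration.
import numpy as np

rng = np.random.default_rng(2)
n_rows = 10_000

# Six input variables, filled with random numbers (columns 1-6).
G, m1, m2, x1, x2, eps = (rng.uniform(0.5, 2.0, n_rows) for _ in range(6))

# Column 7: the output, computed from the hidden formula.
F = G * m1 * m2 / ((x1 - x2) ** 2 + eps)

table = np.column_stack([G, m1, m2, x1, x2, eps, F])
np.savetxt("mystery_spreadsheet.csv", table, delimiter=",")
# A symbolic-regression system is handed only this file and asked:
# what formula produced column 7 from columns 1 through 6?
```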
link |
00:17:03.440
If I tell you that the formula
link |
00:17:05.440
is what we call a linear formula.
link |
00:17:07.440
So it's just that the output is
link |
00:17:11.440
some sum of all the things
link |
00:17:13.440
input at the time, some constants.
link |
00:17:15.440
That's the famous easy problem we can solve.
link |
00:17:17.440
We do it all the time in science and engineering.
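For the linear special case, here is a sketch of why it is routine: ordinary least squares recovers the constants directly from the spreadsheet. The coefficients and data below are made up for illustration.

```python
# The easy special case: the output is a weighted sum of the inputs, so the
# constants fall straight out of a least-squares fit.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 6))             # six random input columns
true_coeffs = np.array([2.0, -1.0, 0.5, 0.0, 3.0, 1.5])
y = X @ true_coeffs                        # the hidden linear formula

recovered, *_ = np.linalg.lstsq(X, y, rcond=None)
print(recovered)                           # matches true_coeffs
```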
link |
00:17:20.440
But the general one,
link |
00:17:22.440
if it's more complicated functions
link |
00:17:24.440
with logarithms or cosines or other math,
link |
00:17:27.440
it's a very, very hard one
link |
00:17:29.440
and probably impossible to do fast in general
link |
00:17:32.440
just because the number of formulas
link |
00:17:34.440
with n symbols just grows exponentially.
link |
00:17:37.440
Just like the number of passwords you can make
link |
00:17:39.440
grows dramatically with length.
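A back-of-the-envelope version of that counting argument, assuming an alphabet of 15 allowed symbols: the number of candidate strings of length n grows like 15 to the power n.

```python
# With k allowed symbols there are k**n candidate formulas of length n,
# so the search space blows up exponentially (k = 15 here is illustrative).
for n in (5, 10, 20, 40):
    print(n, 15 ** n)
```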
link |
00:17:41.440
So we had this idea that
link |
00:17:44.440
if you first have a neural network
link |
00:17:46.440
that can actually approximate the formula,
link |
00:17:48.440
you just train that even if you don't understand how it works,
link |
00:17:51.440
that can be the first step
link |
00:17:54.440
towards actually understanding how it works.
link |
00:17:56.440
So that's what we do first.
link |
00:17:59.440
And then we study that neural network now
link |
00:18:02.440
and put in all sorts of other data
link |
00:18:04.440
that wasn't in the original training data
link |
00:18:06.440
and use that to discover
link |
00:18:08.440
simplifying properties of the formula.
link |
00:18:10.440
And that lets us break it apart
link |
00:18:12.440
often into many simpler pieces
link |
00:18:14.440
in a kind of divide and conquer approach.
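One concrete example of such a simplifying property, sketched under the assumption that a trained network f(x, y) stands in for the mystery formula: if f is additively separable, then f(x1, y1) + f(x2, y2) equals f(x1, y2) + f(x2, y1), so a near-zero residual on that identity licenses splitting the problem into two simpler ones. This is an illustration in the spirit of the approach, not the exact procedure from the AI Feynman paper, and `trained_net` is a stand-in for any fitted model.

```python
# Probe a fitted model at points it was never trained on and test whether it
# behaves like g(x) + h(y); if so, the problem can be broken apart.
import numpy as np

def trained_net(x, y):
    # Stand-in for a trained network; here secretly x**2 + sin(y).
    return x ** 2 + np.sin(y)

rng = np.random.default_rng(4)
x1, x2 = rng.uniform(-2, 2, size=(2, 1000))
y1, y2 = rng.uniform(-2, 2, size=(2, 1000))

residual = (trained_net(x1, y1) + trained_net(x2, y2)
            - trained_net(x1, y2) - trained_net(x2, y1))
print("max separability residual:", np.max(np.abs(residual)))  # ~0 => split it
```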
link |
00:18:16.440
So we were able to solve all of those 100 formulas,
link |
00:18:19.440
discover them automatically,
link |
00:18:21.440
plus a whole bunch of other ones.
link |
00:18:23.440
But it's actually kind of humbling to say
link |
00:18:26.440
that anyone who wants, who is now
link |
00:18:29.440
listening to this, can type pip install
link |
00:18:32.440
AI Feynman on their computer and run this code.
link |
00:18:34.440
It can actually do what Johannes Kepler
link |
00:18:37.440
spent four years doing when he stared at Mars data
link |
00:18:40.440
until he was like, finally, Eureka, this is an ellipse.
link |
00:18:43.440
This will do it automatically for you in one hour, right?
link |
00:18:46.440
Or Max Planck.
link |
00:18:48.440
He was looking at how much radiation comes out
link |
00:18:51.440
at different wavelengths from a hot object
link |
00:18:53.440
and discovered the famous black body formula.
link |
00:18:56.440
This discovers it automatically.
link |
00:18:59.440
I'm actually excited about
link |
00:19:04.440
seeing if we can discover not just old formulas again,
link |
00:19:08.440
but new formulas that no one has seen before.
link |
00:19:11.440
And do you like this process of using kind of a neural network
link |
00:19:14.440
to find some basic insights
link |
00:19:17.440
and then dissecting the neural network
link |
00:19:19.440
to gain the final insight, so that in that way
link |
00:19:23.440
you're forcing the explainability issue,
link |
00:19:29.440
really trying to analyze the neural network
link |
00:19:33.440
for the things it knows in order to come up
link |
00:19:36.440
with the final beautiful simple theory
link |
00:19:39.440
underlying the initial system that you were looking at.
link |
00:19:42.440
I love that.
link |
00:19:44.440
And the reason I'm so optimistic that it can be generalized
link |
00:19:48.440
is because that's exactly what we do as human scientists.
link |
00:19:53.440
Think of Galileo whom we mentioned, right?
link |
00:19:55.440
I bet when he was a little kid,
link |
00:19:57.440
if his dad threw him an apple, he would catch it.
link |
00:20:00.440
Why?
link |
00:20:02.440
Because he had a neural network in his brain
link |
00:20:04.440
that he had trained to predict the parabolic orbit
link |
00:20:07.440
of apples that are thrown under gravity.
link |
00:20:10.440
If you throw a tennis ball to a dog,
link |
00:20:12.440
it also has this same ability of deep learning
link |
00:20:15.440
to figure out how the ball is going to move and catch it.
link |
00:20:18.440
But Galileo went one step further when he got older.
link |
00:20:21.440
He went back and was like, wait a minute.
link |
00:20:25.440
I can write down a formula for this.
link |
00:20:27.440
Y equals X squared, a parabola.
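A toy version of that distillation step, with made-up data: sample points from a noisy thrown-object arc, the kind of pattern the trained intuition has absorbed, and a simple polynomial fit hands back the symbolic parabola.

```python
# Distilling a formula from "experience": fit sampled trajectory points and
# read off the parabola. Numbers and noise level are invented for illustration.
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-1, 1, 50)
y = 1.0 - x ** 2 + rng.normal(scale=0.01, size=x.size)   # noisy parabolic arc

coeffs = np.polyfit(x, y, deg=2)   # fit y = a*x^2 + b*x + c
print(coeffs)                      # approximately [-1, 0, 1]: y ≈ 1 - x^2
```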
link |
00:20:31.440
And he helped revolutionize physics as we know it, right?
link |
00:20:36.440
So there was a basic neural network in there from childhood
link |
00:20:39.440
that captured the experiences of observing
link |
00:20:44.440
different kinds of trajectories.
link |
00:20:46.440
And then he was able to go back in with another extra little neural network
link |
00:20:50.440
and analyze all those experiences and be like, wait a minute.
link |
00:20:54.440
There's a deeper rule here.
link |
00:20:56.440
Exactly. He was able to distill out in symbolic form
link |
00:21:00.440
what that complicated black box neural network was doing.
link |
00:21:03.440
Not only did the formula he got
link |
00:21:06.440
ultimately become more accurate.
link |
00:21:08.440
And similarly, this is how Newton got Newton's laws,
link |
00:21:11.440
which is why Elon can send rockets to the space station now, right?
link |
00:21:15.440
So it's not only more accurate, but it's also simpler, much simpler.
link |
00:21:19.440
And it's so simple that we can actually describe it to our friends
link |
00:21:23.440
and each other, right?
link |
00:21:25.440
We've talked about it just in the context of physics now,
link |
00:21:28.440
but hey, isn't this what we're doing when we're talking to each other also?
link |
00:21:32.440
We go around with our neural networks just like dogs and cats
link |
00:21:36.440
and chipmunks and blue jays.
link |
00:21:38.440
And we experience things in the world.
link |
00:21:41.440
But then we humans do this additional step on top of that,
link |
00:21:44.440
where we then distill out certain high level knowledge
link |
00:21:48.440
that we've extracted from this in a way that can communicate it to each other
link |
00:21:52.440
in a symbolic form in English in this case, right?
link |
00:21:56.440
So if we can do it and we believe that we are information processing entities,
link |
00:22:02.440
then we should be able to make machine learning that does it also.
link |
00:22:06.440
Well, do you think the entire thing could be learning?
link |
00:22:09.440
Because this dissection process, like for AI Feynman,
link |
00:22:13.440
the secondary stage feels like something like reasoning.
link |
00:22:18.440
And the initial step feels like more like the more basic kind of differentiable learning.
link |
00:22:24.440
Do you think the whole thing could be differentiable learning?
link |
00:22:28.440
Do you think the whole thing could be basically neural networks on top of each other?
link |
00:22:31.440
It's like turtles all the way down.
link |
00:22:33.440
Could it be neural networks all the way down?
link |
00:22:35.440
I mean, that's a really interesting question.
link |
00:22:37.440
We know that in your case, it is neural networks all the way down
link |
00:22:40.440
because that's all you have in your skull, a bunch of neurons doing their thing, right?
link |
00:22:45.440
But if you ask the question more generally,
link |
00:22:49.440
what algorithms are being used in your brain,
link |
00:22:53.440
I think it's super interesting to compare.
link |
00:22:55.440
I think we've gotten a little bit backwards historically
link |
00:22:58.440
because we humans first discovered good old fashioned AI,
link |
00:23:02.440
the logic based AI that we often call GOFAI, for good old fashioned AI.
link |
00:23:08.440
And then more recently, we did machine learning
link |
00:23:12.440
because it required bigger computers, so we had to discover it later.
link |
00:23:15.440
So we think of machine learning with neural networks as the modern thing
link |
00:23:20.440
and the logic based AI as the old fashioned thing.
link |
00:23:23.440
But if you look at evolution on Earth, right,
link |
00:23:27.440
it's actually been the other way around.
link |
00:23:29.440
I would say that, for example, an eagle has a better vision system
link |
00:23:35.440
than I have, and dogs are just as good at catching tennis balls as I am.
link |
00:23:42.440
All this stuff which is done by training a neural network
link |
00:23:45.440
and not interpreting it in words, you know,
link |
00:23:49.440
is something so many of our animal friends can do, at least as well as us, right?
link |
00:23:53.440
What is it that we humans can do that the chipmunks and the eagles cannot?
link |
00:23:58.440
It's more to do with this logic based stuff, right,
link |
00:24:01.440
where we can extract out information in symbols, in language,
link |
00:24:07.440
and now even with equations if you're a scientist, right?
link |
00:24:11.440
So basically what happened was first we built these computers
link |
00:24:14.440
that could multiply numbers real fast and manipulate symbols
link |
00:24:17.440
and we felt they were pretty dumb.
link |
00:24:19.440
And then we made neural networks that can see as well as a cat can
link |
00:24:24.440
and do a lot of this inscrutable black box neural networks.
link |
00:24:29.440
What we humans can do also is put the two together in a useful way.
link |
00:24:33.440
Yes, in our own brain.
link |
00:24:35.440
Yes, in our own brain.
link |
00:24:37.440
So if we ever want to get artificial general intelligence
link |
00:24:40.440
that can do all jobs as well as humans can, right,
link |
00:24:44.440
then that's what's going to be required to be able to combine the neural networks with symbolic.
link |
00:24:52.440
Combine the old AI with a new AI in a good way.
link |
00:24:55.440
We do it in our brains and there seems to be basically two strategies I see in industry now.
link |
00:25:00.440
One scares the heebie jeebies out of me and the other one I find much more encouraging.
link |
00:25:05.440
Can we break them apart? Which are the two?
link |
00:25:09.440
The one that scares the heebie jeebies out of me is this attitude
link |
00:25:12.440
that we're just going to make ever bigger systems that we still don't understand
link |
00:25:15.440
until they can be as smart as humans.
link |
00:25:18.440
What could possibly go wrong?
link |
00:25:21.440
I think it's just such a reckless thing to do and unfortunately,
link |
00:25:25.440
if we actually succeed as a species in building artificial general intelligence
link |
00:25:29.440
while we still have no clue how it works,
link |
00:25:31.440
I think there's at least a 50% chance we're going to be extinct before too long.
link |
00:25:36.440
It's just going to be an utter epic own goal.
link |
00:25:40.440
Plus that 44 minute losing money problem or like the paperclip problem
link |
00:25:46.440
where we don't understand how it works and it just, in a matter of seconds, runs away
link |
00:25:51.440
in some kind of direction that's going to be very problematic.
link |
00:25:54.440
Even long before you have to worry about the machines themselves
link |
00:25:58.440
somehow deciding to do things to us, we have to worry about people using machines
link |
00:26:06.440
that fall short of AGI but have power, to do bad things.
link |
00:26:09.440
I mean, just take a moment and if anyone who's not worried particularly about advanced AI,
link |
00:26:17.440
just take 10 seconds and just think about your least favorite leader on the planet right now.
link |
00:26:23.440
Don't tell me who it is.
link |
00:26:24.440
I want to keep this apolitical.
link |
00:26:26.440
But just see the face in front of you, that person for 10 seconds.
link |
00:26:30.440
Now imagine that that person has this incredibly powerful AI under their control
link |
00:26:36.440
and can use it to impose their will on the whole planet.
link |
00:26:39.440
How does that make you feel?
link |
00:26:42.440
Yeah.
link |
00:26:44.440
Can we break that apart just briefly?
link |
00:26:49.440
For the 50% chance that we'll run into trouble with this approach,
link |
00:26:53.440
do you see the bigger worry in that leader or humans using the system to do damage
link |
00:27:00.440
or are you more worried and I think I'm in this camp
link |
00:27:05.440
more worried about accidental unintentional destruction of everything.
link |
00:27:10.440
So humans trying to do good and in a way where everyone agrees it's kind of good,
link |
00:27:17.440
it's just they're trying to do good without understanding.
link |
00:27:19.440
Because I think every evil leader in history, to some degree,
link |
00:27:24.440
thought they were trying to do good.
link |
00:27:25.440
Oh yeah.
link |
00:27:26.440
I'm sure Hitler thought he was doing a good job.
link |
00:27:29.440
I've been reading a lot about Stalin.
link |
00:27:31.440
He legitimately thought that communism was good for the world
link |
00:27:36.440
and that he was doing good.
link |
00:27:37.440
I think Mao Zedong thought what he was doing with a great leap forward was good too.
link |
00:27:41.440
I'm actually concerned about both of those.
link |
00:27:45.440
Before, I promised to answer this in detail, but before we do that,
link |
00:27:49.440
let me finish answering the first question,
link |
00:27:51.440
because I told you that there were two different routes
link |
00:27:53.440
we could get to artificial general intelligence, and one scares the heebie jeebies out of me,
link |
00:27:57.440
which is this one where we build something,
link |
00:27:59.440
we just build bigger neural networks, ever more hardware,
link |
00:28:02.440
and just train on ever more data, and poof, now it's very powerful.
link |
00:28:07.440
That, I think, is the most unsafe and reckless approach.
link |
00:28:11.440
The alternative to that is the intelligible intelligence approach instead,
link |
00:28:17.440
where we say neural networks is just a tool for the first step to get the intuition,
link |
00:28:26.440
but then we're going to spend also serious resources on other AI techniques
link |
00:28:33.440
for demystifying this black box and figuring out what it's actually doing
link |
00:28:37.440
so we can convert it into something that's equally intelligent,
link |
00:28:41.440
but that we actually understand what it's doing.
link |
00:28:44.440
Maybe we can even prove theorems about it,
link |
00:28:46.440
that this car here will never be hacked when it's driving,
link |
00:28:50.440
because here is a proof.
link |
00:28:53.440
There is a whole science of this, but it doesn't work for neural networks.
link |
00:28:56.440
They are big black boxes, but it works well for certain other kinds of code.
link |
00:29:02.440
That approach, I think, is much more promising.
link |
00:29:05.440
That's exactly why I'm working on it, frankly,
link |
00:29:07.440
not just because I think it's cool for science,
link |
00:29:09.440
but because I think the more we understand these systems,
link |
00:29:14.440
the better the chances that we can make them do the things that are good for us
link |
00:29:18.440
that are actually intended, not unintended.
link |
00:29:21.440
You think it's possible to prove things about something as complicated as a neural network?
link |
00:29:27.440
That's the hope?
link |
00:29:28.440
Well, ideally, there's no reason there has to be a neural network in the end, either.
link |
00:29:34.440
We discovered Newton's laws of gravity with the neural network in Newton's head,
link |
00:29:39.440
but that's not the way it's programmed into the navigation system of Elon Musk's rocket anymore.
link |
00:29:46.440
It's written in C++, or I don't know what language he uses exactly.
link |
00:29:50.440
And then there are software tools for symbolic verification.
link |
00:29:53.440
DARPA and the US military has done a lot of really great research on this,
link |
00:29:59.440
because they really want to understand that when they build weapon systems,
link |
00:30:03.440
they don't just go fire at random or malfunction, right?
link |
00:30:06.440
And there's even a whole operating system called seL4 that's been developed by a DARPA grant
link |
00:30:12.440
where you can actually mathematically prove that this thing can never be hacked.
link |
00:30:17.440
Well, one day, I hope that will be something you can say about the OS that's running on our laptops, too,
link |
00:30:24.440
as you know, but we're not there.
link |
00:30:26.440
But I think we should be ambitious, frankly.
link |
00:30:29.440
And if we can use machine learning to help do the proofs and so on as well, right,
link |
00:30:35.440
then it's much easier to verify that a proof is correct than to come up with a proof in the first place.
link |
00:30:42.440
That's really the core idea here.
link |
00:30:44.440
If someone comes on your podcast and says they proved the Riemann hypothesis
link |
00:30:49.440
or some sensational new theorem, it's much easier for someone else to take some smart math grad students
link |
00:30:59.440
and check, oh, there's an error here on equation 5, or this really checks out
link |
00:31:03.440
than it was to discover the proof.
link |
00:31:06.440
Yeah, although some of those proofs are pretty complicated, it's still nevertheless much easier to verify the proof.
link |
00:31:12.440
I love the optimism.
link |
00:31:14.440
You know, even with the security of systems, there's a kind of cynicism that pervades people who think about this,
link |
00:31:22.440
which is like, oh, it's hopeless.
link |
00:31:24.440
I mean, in the same sense, exactly like you're saying with neural networks,
link |
00:31:27.440
oh, it's hopeless to understand what's happening.
link |
00:31:29.440
With security, people are just like, well, there's always going to be attack vectors
link |
00:31:37.440
and ways to attack the system.
link |
00:31:40.440
But you're right, we're just very new with these computational systems.
link |
00:31:43.440
We're even new with these intelligence systems, and it's not out of the realm of possibility.
link |
00:31:49.440
Just like people came to understand the movement of the stars and the planets and so on.
link |
00:31:53.440
Yeah.
link |
00:31:54.440
It's entirely possible that within, hopefully soon, but it could be within 100 years,
link |
00:31:59.440
we start to have obvious laws of gravity about intelligence.
link |
00:32:04.440
Yeah.
link |
00:32:05.440
And God forbid, well, consciousness too, that one.
link |
00:32:10.440
Agreed.
link |
00:32:11.440
You know, I think, of course, if you're selling computers that get hacked a lot,
link |
00:32:15.440
it's in your interest as a company that people think it's impossible to make them safe.
link |
00:32:19.440
So, you know, nobody is going to get the idea of suing you.
link |
00:32:21.440
But I want to really inject optimism here.
link |
00:32:23.440
It's absolutely possible to do much better than we're doing now.
link |
00:32:30.440
And you know, your laptop does so much stuff.
link |
00:32:34.440
You don't need the music player to be super safe in your future self driving car, right?
link |
00:32:41.440
If someone hacks it and starts playing music you don't like, it's not the end of the world.
link |
00:32:47.440
But what you can do is you can break out and say the drive computer that controls your safety
link |
00:32:52.440
must be completely physically decoupled entirely from the entertainment system.
link |
00:32:57.440
And it must physically be such that it can't receive over the air updates while you're driving.
link |
00:33:02.440
And it can ultimately have some operating system on it,
link |
00:33:09.440
which is symbolically verified and proven that it's always going to do what it's supposed to do, right?
link |
00:33:17.440
We can basically have that, and companies should take that attitude too.
link |
00:33:20.440
They should look at everything they do and say, what are the few systems in our company
link |
00:33:24.440
that threaten the whole life of the company if they get hacked, you know,
link |
00:33:28.440
and have the highest standards for them, and then they can save money
link |
00:33:32.440
by going for the el cheapo, poorly understood stuff for the rest, you know.
link |
00:33:36.440
This is very feasible, I think.
link |
00:33:38.440
And coming back to the bigger question that you worried about,
link |
00:33:42.440
that there'll be unintentional failures, I think, there are two quite separate risks here, right?
link |
00:33:47.440
We talked a lot about one of them, which is that the goals of the human are noble.
link |
00:33:52.440
The human says, I want this airplane to not crash,
link |
00:33:56.440
because this is not Muhammad Atta now flying the airplane, right?
link |
00:34:00.440
And now there's this technical challenge of making sure that the autopilot
link |
00:34:05.440
is actually going to behave as the pilot wants.
link |
00:34:10.440
If you set that aside, there's also the separate question.
link |
00:34:13.440
How do you make sure that the goals of the pilot are actually aligned with the goals of the passenger?
link |
00:34:19.440
How do you make sure very much more broadly that if we can all agree as a species
link |
00:34:24.440
that we would like things to kind of go well for humanity as a whole,
link |
00:34:27.440
that the goals are aligned here, the alignment problem.
link |
00:34:31.440
And yeah, there's been a lot of progress in the sense that there's suddenly huge amounts of research going on about it.
link |
00:34:41.440
I'm very grateful to Elon Musk for giving us that money five years ago,
link |
00:34:44.440
so we could launch the first research program on technical AI safety and alignment.
link |
00:34:49.440
There's a lot of stuff happening.
link |
00:34:51.440
I think we need to do more than just make sure little machines always do what their owners tell them.
link |
00:34:57.440
That wouldn't have prevented September 11.
link |
00:35:00.440
Muhammad Atta said, OK, autopilot, please fly into World Trade Center.
link |
00:35:06.440
And it's like, OK. That even happened:
link |
00:35:10.440
In a different situation, there was this depressed pilot named Andreas Lubitz,
link |
00:35:15.440
who told his Germanwings passenger jet to fly into the Alps.
link |
00:35:18.440
He just told the computer to change the altitude to 100 meters or something like that.
link |
00:35:23.440
And you know what the computer said?
link |
00:35:25.440
OK.
link |
00:35:26.440
And it had the frigging topographical map of the Alps in there.
link |
00:35:29.440
It had GPS, everything.
link |
00:35:31.440
No one had bothered teaching it even the basic kindergarten ethics of, like, no.
link |
00:35:36.440
We never want airplanes to fly into mountains under any circumstances.
link |
00:35:42.440
And so we have to think beyond just the technical issues
link |
00:35:49.440
and think about how do we align, in general, incentives on this planet for the greater good.
link |
00:35:54.440
So starting with simple stuff like that, every airplane that has a computer in it
link |
00:35:58.440
should be taught whatever kindergarten ethics it's smart enough to understand.
link |
00:36:03.440
Like, no, don't fly into fixed objects, even if the pilot tells you to do so.
link |
00:36:08.440
And then go on autopilot mode, send an email to the cops
link |
00:36:13.440
and land at the nearest airport.
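A hedged sketch of what such a "kindergarten ethics" check might look like as a hard constraint around the command interface; every data structure, threshold, and fallback action here is invented for illustration and does not reflect any real avionics system.

```python
# Refuse any commanded altitude that would intersect the terrain ahead,
# and fall back to a safe action instead. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Command:
    target_altitude_m: float

def terrain_height_ahead_m(route_profile: list[float]) -> float:
    """Highest terrain along the planned route, from an onboard topographic map."""
    return max(route_profile)

SAFETY_MARGIN_M = 300.0

def vet_command(cmd: Command, route_profile: list[float]) -> str:
    if cmd.target_altitude_m < terrain_height_ahead_m(route_profile) + SAFETY_MARGIN_M:
        # Never fly into fixed objects, even if the pilot asks for it:
        # ignore the command, alert the ground, divert to the nearest airport.
        return "REJECT: hold safe altitude, notify authorities, divert to nearest airport"
    return "ACCEPT"

# Example: commanded 100 m altitude over terrain around 3,000 m.
print(vet_command(Command(target_altitude_m=100.0), route_profile=[2500.0, 3000.0, 2800.0]))
```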
link |
00:36:16.440
Any car with a forward facing camera should just be programmed by the manufacturers
link |
00:36:22.440
so that it will never accelerate into a human ever.
link |
00:36:26.440
That would avoid things like the Nice attack
link |
00:36:30.440
and many horrible terrorist vehicle attacks where they deliberately did that, right?
link |
00:36:35.440
It's not some sort of thing where, oh, you know, US and China have different views.
link |
00:36:39.440
No, there was not a single car manufacturer in the world who wanted the cars to do this.
link |
00:36:45.440
They just hadn't thought to do the alignment.
link |
00:36:47.440
And if you look at, more broadly, problems that happen on this planet,
link |
00:36:52.440
the vast majority have to do with poor alignment.
link |
00:36:55.440
I mean, think about, let's go back really big, because I know you're so good at that.
link |
00:37:01.440
So long ago in evolution, we had these genes and they wanted to make copies of themselves.
link |
00:37:07.440
That's really all they cared about.
link |
00:37:09.440
So some genes said, hey, I'm going to build a brain on this body I'm in
link |
00:37:15.440
so that I can get better at making copies of myself.
link |
00:37:18.440
And then they decided for their benefit to get copied more,
link |
00:37:22.440
to align your brain's incentives with their incentives.
link |
00:37:25.440
So it didn't want you to starve to death.
link |
00:37:29.440
So it gave you an incentive to eat.
link |
00:37:32.440
And it wanted you to make copies of the genes.
link |
00:37:36.440
So it gave you an incentive to fall in love and do all sorts of naughty things
link |
00:37:41.440
to make copies of itself, right?
link |
00:37:44.440
So that was successful value alignment done on the genes.
link |
00:37:48.440
They created something more intelligent than themselves,
link |
00:37:51.440
but they made sure to try to align the values.
link |
00:37:53.440
But then something went a little bit wrong against the idea of what the genes wanted
link |
00:37:59.440
because a lot of humans discovered, hey, we really like this business about sex
link |
00:38:05.440
that the genes have made us enjoy, but we don't want to have babies right now.
link |
00:38:09.440
So we're going to hack the genes and use birth control.
link |
00:38:14.440
And I really feel like drinking a Coca Cola right now,
link |
00:38:19.440
but I don't want to get a potbelly, so I'm going to drink Diet Coke.
link |
00:38:22.440
We have all these things we've figured out because we're smarter than the genes,
link |
00:38:26.440
how we can actually subvert their intentions.
link |
00:38:29.440
So it's not surprising that we humans now, when we're in the role of these genes,
link |
00:38:34.440
creating other nonhuman entities with a lot of power have to face the same exact challenge.
link |
00:38:39.440
How do we make other powerful entities have incentives that are aligned with ours
link |
00:38:44.440
so they won't hack them?
link |
00:38:46.440
Corporations, for example, right?
link |
00:38:48.440
We humans decided to create corporations because it can benefit us greatly.
link |
00:38:53.440
Now all of a sudden there's a supermarket. I can go buy food there.
link |
00:38:56.440
I don't have to hunt. Awesome.
link |
00:38:59.440
And then to make sure that this corporation would do things that were good for us
link |
00:39:04.440
and not bad for us, we created institutions to keep them in check.
link |
00:39:08.440
Like if the local supermarket sells poisonous food,
link |
00:39:12.440
then the owners of the supermarket have to spend some years reflecting behind bars, right?
link |
00:39:21.440
So we created incentives to get to align them.
link |
00:39:25.440
But of course, just like we were able to see through this thing,
link |
00:39:28.440
well, birth control, if you're a powerful corporation,
link |
00:39:31.440
you also have an incentive to try to hack the institutions that are supposed to govern you
link |
00:39:36.440
because you ultimately as a corporation have an incentive to maximize your profit.
link |
00:39:40.440
Just like you have an incentive to maximize the enjoyment your brain has, not for your genes.
link |
00:39:45.440
So if they can figure out a way of bribing regulators, then they're going to do that.
link |
00:39:51.440
In the US, we kind of caught on to that and made laws against corruption and bribery.
link |
00:39:57.440
Then in the late 1800s, Teddy Roosevelt realized that,
link |
00:40:03.440
no, we were still being kind of hacked because the Massachusetts railroad companies had like a bigger budget
link |
00:40:08.440
than the state of Massachusetts and they were doing a lot of very corrupt stuff.
link |
00:40:13.440
So he did the whole trust busting thing to try to align these other nonhuman entities,
link |
00:40:18.440
the companies, again, more with the incentives of Americans as a whole.
link |
00:40:23.440
It's not surprising though that this is a battle you have to keep fighting.
link |
00:40:27.440
Now we have even larger companies than we ever had before.
link |
00:40:31.440
And of course, they're going to try to, again, subvert the institutions.
link |
00:40:38.440
Not because, you know, I think people make a mistake of getting all too black and white, thinking about things in terms of good and evil.
link |
00:40:47.440
Like arguing about whether corporations are good or evil or whether robots are good or evil.
link |
00:40:53.440
A robot isn't good or evil. It's a tool.
link |
00:40:57.440
And you can use it for great things like robotic surgery or for bad things.
link |
00:41:01.440
And a corporation also is a tool, of course.
link |
00:41:04.440
And if you have good incentives to the corporation, it'll do great things like start a hospital or a grocery store.
link |
00:41:10.440
If you have really bad incentives, then it's going to start maybe marketing addictive drugs to people and you'll have an opioid epidemic.
link |
00:41:19.440
It's all about, we should not make a mistake of getting into some sort of fairytale, good, evil thing about corporations or robots.
link |
00:41:30.440
We should focus on putting the right incentives in place.
link |
00:41:33.440
My optimistic vision is that if we can do that, then we can really get good things.
link |
00:41:38.440
We're not doing so great with that right now, either on AI, I think, or on other intelligent, nonhuman entities like big companies.
link |
00:41:46.440
We just have a new secretary of defense.
link |
00:41:50.440
He's going to start now in the Biden administration, and he was an active member of the board of Raytheon.
link |
00:41:58.440
I have nothing against Raytheon.
link |
00:42:04.440
I'm not a pacifist, but there's an obvious conflict of interest if someone is in the job where they decide who they're going to contract with.
link |
00:42:14.440
I think somehow we have, maybe we need another Teddy Roosevelt to come along again and say, hey, we want what's good for all Americans.
link |
00:42:23.440
We need to go do some serious realigning again of the incentives that we're giving to these big companies.
link |
00:42:32.440
Then we're going to be better off.
link |
00:42:34.440
Naturally, with human beings, just like you beautifully described the history of this whole thing, it all started with the genes and they're probably pretty upset by all the unintended consequences that happened since.
link |
00:42:45.440
It seems that it kind of works out.
link |
00:42:48.440
It's in this collective intelligence that emerges at the different levels.
link |
00:42:53.440
It seems to find, sometimes last minute, a way to realign the values or keep the values aligned.
link |
00:43:02.440
It finds a way.
link |
00:43:04.440
Different leaders, different humans pop up all over the place that reset the system.
link |
00:43:13.440
Do you have an explanation why that is?
link |
00:43:15.440
Or is that just survivor bias?
link |
00:43:17.440
Also, is that somehow fundamentally different than with the AI systems where you're no longer dealing with something that was a direct, maybe companies are the same, a direct byproduct of the evolutionary process?
link |
00:43:33.440
I think there is one thing which has changed.
link |
00:43:36.440
That's why I'm not all that optimistic. That's why I think there's about a 50% chance if we take the dumb route with artificial intelligence that humanity will be extinct in this century.
link |
00:43:51.440
First, just the big picture.
link |
00:43:53.440
Companies need to have the right incentives.
link |
00:43:57.440
Even governments, right?
link |
00:43:59.440
We used to have governments where usually there was just some king, who was the king because his dad was the king.
link |
00:44:07.440
Then there were some benefits of having this powerful kingdom or empire of any sort because then it could prevent a lot of local squabbles.
link |
00:44:18.440
So at least everybody in that region would stop warring against each other.
link |
00:44:21.440
Their incentives of different cities in the kingdom became more aligned.
link |
00:44:25.440
That was the whole selling point.
link |
00:44:27.440
Harari.
link |
00:44:28.440
Harari has a beautiful piece on how empires were collaboration enablers.
link |
00:44:35.440
And we invented money for that reason so we could have better alignment and trade even with people we didn't know.
link |
00:44:43.440
This sort of stuff has been playing out since time immemorial.
link |
00:44:47.440
What's changed is that it happens on ever larger scales.
link |
00:44:51.440
Technology keeps getting better because science gets better.
link |
00:44:54.440
So now we can communicate over larger distances, transport things faster over larger distances.
link |
00:44:59.440
So the entities get ever bigger but our planet is not getting bigger anymore.
link |
00:45:04.440
So in the past, you could have one experiment that just totally screwed up like Easter Island where they actually managed to have such poor alignment that when they went extinct, people there, there was no one else to come back and replace them.
link |
00:45:20.440
If Elon Musk doesn't get us to Mars and then we go extinct on a global scale, then we're not coming back.
link |
00:45:28.440
That's the fundamental difference.
link |
00:45:30.440
And that's a mistake I would rather we don't make for that reason.
link |
00:45:35.440
In the past, of course, history is full of fiascos, but it was never the whole planet.
link |
00:45:41.440
And then, okay, now there's this nice uninhabited land here.
link |
00:45:45.440
Some other people could move in and organize things better.
link |
00:45:48.440
This is different.
link |
00:45:50.440
The second thing which is also different is that technology gives us so much more empowerment both to do good things and also to screw up.
link |
00:46:00.440
In the Stone Age, even if you had someone whose goals were really poorly aligned,
link |
00:46:04.440
maybe he was really pissed off because his Stone Age girlfriend dumped him and he just wanted to kill as many people as he could.
link |
00:46:12.440
How many could he really take out with a rock and a stick before he was overpowered?
link |
00:46:16.440
Right, just a handful, right?
link |
00:46:18.440
Now, with today's technology, if we have an accidental nuclear war between Russia and the US,
link |
00:46:27.440
which we have almost had about a dozen times, and then we have a nuclear winter,
link |
00:46:32.440
it could take out 7 billion people or 6 billion people, we don't know.
link |
00:46:36.440
So the scale of damage that we can do is bigger.
link |
00:46:40.440
And there's obviously no law of physics that says that technology will never get powerful enough that we could wipe out our species entirely,
link |
00:46:51.440
it would just be fantasy to think that science is somehow doomed not to get more powerful than that, right?
link |
00:46:57.440
And it's not at all unfeasible in our lifetime that someone could design a designer pandemic which spreads as easily as COVID,
link |
00:47:04.440
but just basically kills everybody.
link |
00:47:06.440
We already had smallpox, it killed one third of everybody who got it.
link |
00:47:12.440
What do you think of, here's an intuition, maybe it's a completely naive and optimistic intuition that I have,
link |
00:47:19.440
which it seems, and maybe it's a biased experience that I have,
link |
00:47:23.440
but it seems like the most brilliant people I've met in my life all are really fundamentally good human beings.
link |
00:47:33.440
And not naive good, like they really want to do good for the world in a way that, well, maybe is aligned with my sense of what good means.
link |
00:47:41.440
And so I have a sense that the people that will be defining the very cutting edge of technology,
link |
00:47:50.440
there will be many more of the ones that are doing good versus the ones that are doing evil.
link |
00:47:55.440
So the race, I'm optimistic on us always like last minute coming up with a solution.
link |
00:48:03.440
So if there's an engineered pandemic that has the capability to destroy most of the human civilization,
link |
00:48:11.440
it feels like to me either leading up to that before or as it's going on,
link |
00:48:17.440
there will be, we're able to rally the collective genius of the human species.
link |
00:48:23.440
I could tell by your smile that you're at least some percentage doubtful,
link |
00:48:29.440
but could that be a fundamental law of human nature that evolution only creates good,
link |
00:48:37.440
like karma is beneficial, good is beneficial, and therefore we will be all right?
link |
00:48:43.440
I hope you're right.
link |
00:48:46.440
I would really love it if you're right,
link |
00:48:48.440
if there's some sort of law of nature that says that we always get lucky in the last second
link |
00:48:52.440
because of karma, but I prefer not playing it so close and gambling on that.
link |
00:49:02.440
And I think, in fact, I think it can be dangerous to have too strong faith in that
link |
00:49:07.440
because it makes us complacent.
link |
00:49:10.440
Like if someone tells you you never have to worry about your house burning down,
link |
00:49:13.440
then you're not going to put in a smoke detector because why would you need to, right?
link |
00:49:16.440
Sometimes we don't take even very simple precautions.
link |
00:49:20.440
If you're like, oh, the government is going to take care of everything for us.
link |
00:49:24.440
I can always trust my politicians.
link |
00:49:26.440
We abdicate our own responsibility.
link |
00:49:28.440
I think it's a healthier attitude to say, yeah, maybe things will work out,
link |
00:49:31.440
but maybe I'm actually going to have to myself step up and take responsibility.
link |
00:49:37.440
And the stakes are so huge.
link |
00:49:39.440
I mean, if we do this right, we can develop all this ever more powerful technology
link |
00:49:44.440
and cure all diseases and create a future where humanity is healthy and wealthy
link |
00:49:49.440
for not just the next election cycle, but for, like, billions of years throughout our universe.
link |
00:49:53.440
That's really worth working hard for and not just, you know, sitting and hoping
link |
00:49:58.440
for some sort of fairytale karma.
link |
00:50:00.440
Well, I just mean, so you're absolutely right.
link |
00:50:02.440
From the perspective of the individual, like for me,
link |
00:50:04.440
like the primary thing should be to take responsibility
link |
00:50:07.440
and to build the solutions that your skill set allows you to build.
link |
00:50:12.440
Which is a lot.
link |
00:50:13.440
I think we underestimate often very much how much good we can do.
link |
00:50:16.440
If you or anyone listening to this is completely confident that our government
link |
00:50:23.440
would do a perfect job on handling any future crisis with engineered pandemics
link |
00:50:28.440
or future AI.
link |
00:50:30.440
The one or two people out there.
link |
00:50:32.440
Just reflect on what actually happened in 2020.
link |
00:50:36.440
Do you feel that government, by and large around the world, has handled it flawlessly?
link |
00:50:42.440
That's a really sad and disappointing reality that hopefully is a wake up call for everybody.
link |
00:50:48.440
For the scientists, for the engineers, for the researchers in AI especially.
link |
00:50:54.440
It was disappointing to see how inefficient we were at collecting the right amount of data
link |
00:51:04.440
in a privacy preserving way and spreading that data
link |
00:51:07.440
and utilizing that data to make decisions, all that kind of stuff.
link |
00:51:10.440
I think when something bad happens to me, I made myself a promise many years ago
link |
00:51:17.440
that I would not be a whiner.
link |
00:51:21.440
So when something bad happens to me, of course there's a process of disappointment.
link |
00:51:27.440
But then I try to focus on what did I learn from this
link |
00:51:30.440
that can make me a better person in the future.
link |
00:51:32.440
And there's usually something to be learned when I fail.
link |
00:51:35.440
And I think we should all ask ourselves, what can we learn from the pandemic
link |
00:51:41.440
about how we can do better in the future?
link |
00:51:43.440
And you mentioned there's a really good lesson.
link |
00:51:46.440
We were not as resilient as we thought we were.
link |
00:51:49.440
And we were not as prepared maybe as we wish we were.
link |
00:51:53.440
You can even see very stark contrast around the planet.
link |
00:51:56.440
South Korea, they have over 50 million people.
link |
00:52:01.440
Do you know how many deaths they have from COVID last time I checked?
link |
00:52:05.440
It's about 500.
link |
00:52:08.440
Why is that?
link |
00:52:10.440
Well, the short answer is that they had prepared.
link |
00:52:16.440
They were incredibly quick, incredibly quick to get on it
link |
00:52:21.440
with very rapid testing and contact tracing and so on,
link |
00:52:25.440
which is why they never had more cases than they could contact trace effectively, right?
link |
00:52:30.440
They never even had to have the kind of big lockdowns we had in the West.
link |
00:52:33.440
But the deeper answer, it's not just that Koreans are somehow better people.
link |
00:52:39.440
The reason I think they were better prepared was because they had already had a pretty bad hit
link |
00:52:45.440
from the SARS outbreak, which never became a pandemic.
link |
00:52:49.440
Something like 17 years ago, I think.
link |
00:52:52.440
So it was kind of a fresh memory that we need to be prepared for pandemics.
link |
00:52:56.440
So they were, right?
link |
00:52:58.440
So maybe this is a lesson here for all of us to draw from COVID
link |
00:53:03.440
that rather than just wait for the next pandemic or the next problem
link |
00:53:07.440
with AI getting out of control or anything else,
link |
00:53:10.440
maybe we should just actually set aside a tiny fraction of our GDP
link |
00:53:16.440
to have people very systematically do some horizon scanning
link |
00:53:20.440
and say, okay, what are the things that could go wrong?
link |
00:53:22.440
And let's actually go out and see which are the more likely ones
link |
00:53:25.440
and which are the ones that are actually actionable and then be prepared.
link |
00:53:31.440
So one of my observations, as the one little ant slash human that I am, a disappointment
link |
00:53:38.440
is the political division over information that I observed this year.
link |
00:53:47.440
It seemed the discussion was less about sort of what happened
link |
00:53:56.440
and understanding what happened deeply, and more about there being different truths out there.
link |
00:54:03.440
And it's like an argument, my truth is better than your truth.
link |
00:54:07.440
And it's like red versus blue or different.
link |
00:54:10.440
It was like this ridiculous discourse that doesn't seem to get at any kind of notion of the truth.
link |
00:54:16.440
It's not like there's some kind of scientific process.
link |
00:54:18.440
Even science got politicized in ways that's very heartbreaking to me.
link |
00:54:23.440
You have an exciting project on the AI front of trying to rethink this. We mentioned corporations;
link |
00:54:34.440
one of the other collective intelligence systems that has emerged
link |
00:54:38.440
from this is social networks and just the spread of information on the internet,
link |
00:54:46.440
our ability to share that information.
link |
00:54:48.440
There's all different kinds of news sources and so on.
link |
00:54:50.440
And so you said, let's, from first principles,
link |
00:54:53.440
rethink how we think about the news, how we think about information.
link |
00:54:59.440
Can you talk about this amazing effort that you're undertaking?
link |
00:55:03.440
Oh, I'd love to.
link |
00:55:04.440
This has been my big COVID project, nights and weekends, ever since the lockdown.
link |
00:55:11.440
To segue into this, actually, let me come back to what you said earlier,
link |
00:55:14.440
that you had this hope that in your experience, people who you felt were very talented,
link |
00:55:18.440
often idealistic and wanted to do good.
link |
00:55:21.440
Frankly, I feel the same about all people by and large.
link |
00:55:25.440
There are always exceptions, but I think the vast majority of everybody,
link |
00:55:29.440
regardless of education and whatnot, really are fundamentally good, right?
link |
00:55:33.440
So how can it be that people still do so much nasty stuff?
link |
00:55:37.440
I think it has everything to do with the information that we're given.
link |
00:55:42.440
If you go into Sweden 500 years ago and you start telling all the farmers that those Danes in Denmark,
link |
00:55:49.440
they're so terrible people and we have to invade them because they've done all these terrible things
link |
00:55:55.440
that you can't fact check yourself.
link |
00:55:57.440
A lot of people in Sweden fell for that.
link |
00:55:59.440
And we've seen so much of this today in the world, both geopolitically,
link |
00:56:09.440
where we are told that China is bad and Russia is bad and Venezuela is bad
link |
00:56:14.440
and people in those countries are often told that we are bad.
link |
00:56:17.440
And we also see it at a micro level, where people are told that,
link |
00:56:22.440
oh, those who voted for the other party are bad people.
link |
00:56:25.440
It's not just an intellectual disagreement, but they're bad people
link |
00:56:30.440
and we're getting ever more divided.
link |
00:56:33.440
And so how do you reconcile this with intrinsic goodness in people?
link |
00:56:40.440
I think it's pretty obvious that it has again to do with this,
link |
00:56:43.440
with information that we're fed and given, right?
link |
00:56:46.440
We evolved to live in small groups where you might know 30 people in total, right?
link |
00:56:52.440
So you then had a system that was quite good for assessing who you could trust
link |
00:56:57.440
and who you could not.
link |
00:56:58.440
And if someone told you that Joe there is a jerk,
link |
00:57:03.440
but you had interacted with him yourself and seen him in action,
link |
00:57:06.440
you would quickly realize maybe that that's actually not quite accurate, right?
link |
00:57:11.440
But now that most people on the planet are people we've never met,
link |
00:57:15.440
it's very important that we have a way of trusting information we're given.
link |
00:57:19.440
So, okay, so where does the news project come in?
link |
00:57:23.440
Well, throughout history, you can go read Machiavelli from the 1400s
link |
00:57:27.440
and you'll see how already then they were busy manipulating people with propaganda and stuff.
link |
00:57:31.440
Propaganda is not new at all.
link |
00:57:35.440
And the incentive to manipulate people is just not new at all.
link |
00:57:39.440
What is it that's new?
link |
00:57:41.440
What's new is machine learning meets propaganda.
link |
00:57:45.440
That's what's new.
link |
00:57:46.440
That's why this has gotten so much worse.
link |
00:57:48.440
Some people like to blame certain individuals like in my liberal university bubble,
link |
00:57:53.440
many people blame Donald Trump and say it was his fault.
link |
00:57:57.440
I see it differently.
link |
00:57:59.440
I think Donald Trump just had this extreme skill at playing this game
link |
00:58:05.440
in the machine learning algorithm age.
link |
00:58:09.440
A game he couldn't have played 10 years ago.
link |
00:58:12.440
So what's changed?
link |
00:58:13.440
What's changed is, well, Facebook and Google and other companies.
link |
00:58:17.440
I'm not badmouthing them.
link |
00:58:19.440
I have a lot of friends who work for these companies, good people.
link |
00:58:22.440
They deployed machine learning algorithms just to increase their profit a little bit
link |
00:58:27.440
to just maximize the time people spent watching ads.
link |
00:58:31.440
And they had totally underestimated how effective they were going to be.
link |
00:58:35.440
This was, again, the black box, non intelligible intelligence.
link |
00:58:39.440
They just noticed, oh, we're getting more ad revenue, great.
link |
00:58:42.440
It took a long time until they even realized why, and how damaging this was for society.
link |
00:58:47.440
Because, of course, what the machine learning figured out was
link |
00:58:51.440
that the by far most effective way of gluing you to your little rectangle
link |
00:58:56.440
was to show you things that triggered strong emotions, anger, et cetera, resentment.
link |
00:59:02.440
And whether it was true or not didn't really matter.
link |
00:59:07.440
It was also easier to find stories that weren't true.
link |
00:59:10.440
If you weren't limited to the truth, well, having to show people only true things,
link |
00:59:13.440
that's a very limiting factor.
link |
00:59:15.440
And before long, we got these amazing filter bubbles on a scale we had never seen before.
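To make the engagement mechanism concrete, here is a minimal, purely hypothetical sketch of a feed ranker that optimizes only predicted engagement; the `Story` fields, scores, and headlines are invented for illustration and do not describe any real platform's system.

```python
# Hypothetical illustration: a feed ranker that optimizes only for
# predicted engagement, with no notion of truth or social cost.
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    is_true: bool                 # the ranker never looks at this field
    predicted_engagement: float   # e.g., output of a click/watch-time model

def rank_feed(stories, k=3):
    # Sort purely by predicted engagement, descending.
    return sorted(stories, key=lambda s: s.predicted_engagement, reverse=True)[:k]

stories = [
    Story("Calm, accurate policy explainer", True, 0.11),
    Story("Outrage-bait rumor about the other side", False, 0.48),
    Story("Nuanced interview with an opponent", True, 0.09),
    Story("Angry take confirming what you already believe", True, 0.41),
]

for s in rank_feed(stories):
    print(f"{s.predicted_engagement:.2f}  {s.headline}")
# High-arousal content floats to the top whether or not it is true,
# which is exactly how filter bubbles get reinforced.
```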
link |
00:59:21.440
Couple this with the fact that the online news media were so effective
link |
00:59:28.440
that they killed a lot of print journalism.
link |
00:59:30.440
There are less than half as many journalists now in America, I believe,
link |
00:59:35.440
as there was a generation ago.
link |
00:59:39.440
They just couldn't compete with the online advertising.
link |
00:59:42.440
So, all of a sudden, most people are not even reading newspapers.
link |
00:59:48.440
They get their news from social media.
link |
00:59:50.440
And most people only get news in their little bubble.
link |
00:59:55.440
So, along come now some people like Donald Trump, who was
link |
00:59:59.440
among the first successful politicians to figure out how to really play this new game
link |
01:00:03.440
and become very, very influential.
link |
01:00:05.440
But I think Donald Trump took advantage of it.
link |
01:00:10.440
He didn't create it; the fundamental conditions were created by machine learning
link |
01:00:16.440
taking over the news media.
link |
01:00:18.440
So, this is what motivated my little COVID project here.
link |
01:00:23.440
I said before, machine learning and tech in general is not evil,
link |
01:00:27.440
but it's also not good.
link |
01:00:28.440
It's just a tool that you can use for good things or bad things.
link |
01:00:32.440
And as it happens, machine learning in news was mainly used by the big players,
link |
01:00:37.440
big tech, to manipulate people into watching as many ads as possible,
link |
01:00:42.440
which had this unintended consequence of really screwing up our democracy
link |
01:00:46.440
and fragmenting it into filter bubbles.
link |
01:00:49.440
So, I thought, well, machine learning algorithms are basically free.
link |
01:00:53.440
They can run on your smartphone for free also if someone gives them away to you, right?
link |
01:00:57.440
There's no reason why they only have to help the big guy to manipulate the little guy.
link |
01:01:02.440
They can just as well help the little guy to see through all the manipulation attempts
link |
01:01:07.440
from the big guy.
link |
01:01:08.440
So I did this project; you can go to improvethenews.org.
link |
01:01:12.440
The first thing we've built is this little news aggregator.
link |
01:01:16.440
Looks a bit like Google News except it has these sliders on it
link |
01:01:19.440
to help you break out of your filter bubble.
link |
01:01:21.440
So, if you're reading, you can click click and go to your favorite topic.
link |
01:01:26.440
And then, if you just slide the left right slider all the way over to the left.
link |
01:01:32.440
There's two sliders, right?
link |
01:01:33.440
Yeah.
link |
01:01:34.440
There's the one, the most obvious one is the one that has left to right labeled on it.
link |
01:01:38.440
You go to left, you get one set of articles, you go to the right,
link |
01:01:41.440
you see a very different truth appearing.
link |
01:01:43.440
Well, that's literally left and right on the political spectrum.
link |
01:01:47.440
Yeah, so if you're reading about immigration, for example, it's very, very noticeable.
link |
01:01:54.440
And I think step one, always if you want to not get manipulated,
link |
01:01:58.440
it's just to be able to recognize the techniques people use.
link |
01:02:02.440
So, it's very helpful to just see how they spin things on the two sides.
link |
01:02:06.440
I think many people are under the misconception that the main problem is fake news.
link |
01:02:13.440
It's not.
link |
01:02:14.440
I had an amazing team of MIT students where we did an academic project over the summer,
link |
01:02:19.440
using machine learning to detect the main kinds of bias.
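As a rough illustration of what such a bias classifier can look like, here is a generic sketch, not the actual MIT project or the improvethenews.org code; the tiny training set and the left/right labels are invented purely to show the shape of the approach.

```python
# Minimal sketch of a political-slant classifier: TF-IDF features plus
# logistic regression, a standard text-classification baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Tax cuts unleash growth and reward hard-working job creators",
    "Officials warn that open borders threaten national security",
    "Universal healthcare is a human right the wealthy must help fund",
    "Climate justice demands bold regulation of fossil fuel companies",
]
train_labels = ["right", "right", "left", "left"]  # toy labels for illustration

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

article = "Lawmakers debate sweeping regulation of energy companies"
print(model.predict([article])[0])     # predicted slant label
print(model.predict_proba([article]))  # confidence scores, usable as a slider position
```

In a real system, the continuous scores rather than the hard label would be what drives something like a left-right slider.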
link |
01:02:24.440
Yes, of course, sometimes there's fake news where someone just claims something that's false, right?
link |
01:02:30.440
Like, oh, Hillary Clinton just got divorced or something.
link |
01:02:33.440
Yes.
link |
01:02:34.440
But what we see much more of is actually just omissions.
link |
01:02:38.440
There are some stories which just won't be mentioned by the left or the right
link |
01:02:45.440
because they don't suit their agenda.
link |
01:02:47.440
And then they also mention other ones very, very much.
link |
01:02:50.440
So, for example, we've had a number of stories about the Trump family's financial dealings.
link |
01:03:00.440
And then there's been a bunch of stories about the Biden family's, Hunter Biden's financial dealings, right?
link |
01:03:06.440
Surprise, surprise, they don't get equal coverage on the left and the right.
link |
01:03:10.440
One side loves to cover the Biden, Hunter Biden's stuff.
link |
01:03:14.440
And one side loves to cover the Trump stuff; you can never guess which is which, right?
link |
01:03:18.440
But the great news is if you're a normal American citizen and you dislike corruption in all its forms,
link |
01:03:25.440
then slide, slide, you can just look at both sides and you'll see all those political corruption stories.
link |
01:03:33.440
It's really liberating to just take in both sides, the spin on both sides.
link |
01:03:40.440
It somehow unlocks your mind to think on your own, to realize that, I don't know,
link |
01:03:47.440
it's the same thing that was useful in Soviet Union times, when everybody was much more aware that they were surrounded by propaganda.
link |
01:03:58.440
That is so interesting what you're saying, actually.
link |
01:04:01.440
So, Noam Chomsky, who used to be our MIT colleague, once said that propaganda is to democracy
link |
01:04:08.440
what violence is to totalitarianism.
link |
01:04:12.440
And what he means by that is if you have a really totalitarian government, you don't need propaganda.
link |
01:04:20.440
People will do what you want them to do anyway out of fear, right?
link |
01:04:24.440
But otherwise, you need propaganda.
link |
01:04:28.440
So, I would say actually that the propaganda is much higher quality in democracies, much more believable.
link |
01:04:34.440
And it's really striking when I talk to colleagues, science colleagues like from Russia and China and so on,
link |
01:04:42.440
I notice they are actually much more aware of the propaganda in their own media
link |
01:04:47.440
than many of my American colleagues are about the propaganda in Western media.
link |
01:04:51.440
That's brilliant. That means the propaganda in the Western media is just better.
link |
01:04:55.440
Yes, that's so brilliant.
link |
01:04:57.440
Even the propaganda.
link |
01:05:05.440
But once you realize that, you realize there's also something very optimistic there that you can do about it, right?
link |
01:05:10.440
Because, first of all, omissions.
link |
01:05:13.440
As long as there's no outright censorship, you can just look at both sides
link |
01:05:19.440
and pretty quickly piece together a much more accurate idea of what's actually going on, right?
link |
01:05:25.440
And develop a natural skepticism too.
link |
01:05:28.440
Just an analytical scientific mind about what you're taking information from.
link |
01:05:33.440
And I think, I have to say, sometimes I feel that some of us in the academic bubble are too arrogant about this
link |
01:05:40.440
and somehow think, oh, it's just people who aren't as educated as us who get fooled.
link |
01:05:45.440
When we are often just as gullible also, we read only our media and don't see through things.
link |
01:05:51.440
Anyone who looks at both sides like this and compares will immediately start noticing the shenanigans being pulled.
link |
01:05:58.440
And I think what I try to counter with this app is that big tech has to some extent tried to blame the individual
link |
01:06:07.440
for being manipulated much like big tobacco tried to blame the individuals entirely for smoking.
link |
01:06:13.440
And later on, our government stepped up and said, actually, you can't just blame little kids for starting to smoke.
link |
01:06:20.440
You have to have more responsible advertising and this and that.
link |
01:06:23.440
I think it's a bit the same here. It's very convenient for big tech to blame the individual,
link |
01:06:27.440
and say it's just people who are so dumb and get fooled.
link |
01:06:32.440
The blame usually comes in saying, oh, it's just human psychology.
link |
01:06:36.440
People just want to hear what they already believe.
link |
01:06:38.440
But Professor David Rand at MIT actually partly debunked that with a really nice study showing that people tend to be interested
link |
01:06:46.440
in hearing things that go against what they believe if it's presented in a respectful way.
link |
01:06:52.440
Suppose, for example, that you have a company and you're just about to launch this project and you're convinced it's going to work.
link |
01:07:00.440
And someone says, you know, Lex, I hate to tell you this, but this is going to fail.
link |
01:07:05.440
And here's why. Would you be like, shut up. I don't want to hear it.
link |
01:07:09.440
Would you? You would be interested, right?
link |
01:07:12.440
And also, if you're on an airplane back in the pre COVID times, you know,
link |
01:07:18.440
and the guy next to you is clearly from the opposite side of the political spectrum,
link |
01:07:24.440
but is very respectful and polite to you.
link |
01:07:27.440
Wouldn't you be kind of interested to hear a bit about how he or she thinks about things?
link |
01:07:32.440
Of course.
link |
01:07:33.440
But it's not so easy to find out respectful disagreement now,
link |
01:07:37.440
because like, for example, if you are a Democrat and you're like, oh, I want to see something on the other side.
link |
01:07:43.440
So you just go to Breitbart.com.
link |
01:07:45.440
And then after the first 10 seconds, you feel deeply insulted by something.
link |
01:07:50.440
It's not going to work.
link |
01:07:53.440
Or if you take someone who votes Republican and they go to something on the left and they just get very offended very quickly
link |
01:08:00.440
by them having put a deliberately ugly picture of Donald Trump on the front page or something, it doesn't really work.
link |
01:08:06.440
So this news aggregator also has a nuance slider, which you can pull to the right
link |
01:08:12.440
to make it easier to get exposed to more sort of academic style or more respectful portrayals of different views.
link |
01:08:22.440
And finally, the one kind of bias I think people are mostly aware of is the left right,
link |
01:08:28.440
because it's so obvious because both left and right are very powerful here, right?
link |
01:08:33.440
Both of them have well funded TV stations and newspapers and it's kind of hard to miss.
link |
01:08:38.440
But there's another one, the establishment slider, which is also really fun.
link |
01:08:44.440
I love to play with it.
link |
01:08:45.440
And that's more about corruption.
link |
01:08:47.440
Because if you have a society where almost all the powerful entities want you to believe a certain thing,
link |
01:08:59.440
that's what you're going to read in both the big mainstream media on the left and on the right, of course.
link |
01:09:04.440
And powerful companies can push back very hard.
link |
01:09:08.440
Like tobacco companies push back very hard back in the day when some newspaper started writing articles about tobacco being dangerous.
link |
01:09:15.440
So it was hard to get a lot of coverage about it initially.
link |
01:09:18.440
And also if you look geopolitically, right?
link |
01:09:20.440
Of course, in any country when you read their media, you're mainly going to be reading a lot about articles about how our country is the good guy
link |
01:09:27.440
and the other countries are the bad guys, right?
link |
01:09:30.440
So if you want to have a really more nuanced understanding, you know, the Germans used to be told that the British were the bad guys, the British used to be told that the French were the bad guys,
link |
01:09:38.440
and the French used to be told that the British were the bad guys.
link |
01:09:41.440
Now they visit each other's countries a lot and have a much more nuanced understanding.
link |
01:09:47.440
I don't think there's going to be any more wars between France and Germany.
link |
01:09:50.440
On the geopolitical scale, it's just as bad as ever, you know, big Cold War now, US, China, and so on.
link |
01:09:57.440
And if you want to get a more nuanced understanding of what's happening geopolitically, then it's really fun to look at this establishment slider
link |
01:10:05.440
because it turns out there are tons of little newspapers, both on the left and on the right, who sometimes challenge establishment
link |
01:10:13.440
and say, you know, maybe we shouldn't actually invade Iraq right now.
link |
01:10:17.440
Maybe this weapons of mass destruction thing is BS.
link |
01:10:20.440
If you look at journalism research afterwards, you can actually see that.
link |
01:10:24.440
Clearly, both CNN and Fox were very pro.
link |
01:10:28.440
Let's get rid of Saddam.
link |
01:10:30.440
There are weapons of mass destruction.
link |
01:10:32.440
Then there were a lot of smaller newspapers.
link |
01:10:34.440
They were like, wait a minute, this evidence seems a bit sketchy and maybe we...
link |
01:10:39.440
But of course, they were so hard to find.
link |
01:10:41.440
Most people didn't even know they existed, right?
link |
01:10:44.440
Yet, it would have been better for American national security if those voices had also come up.
link |
01:10:49.440
I think it harmed America's national security, actually, that we invaded Iraq.
link |
01:10:53.440
And arguably, there's a lot more interest in that kind of thinking, too, from those small sources.
link |
01:11:00.440
So, like, when you say big, it's more about kind of the reach of the broadcast.
link |
01:11:08.440
But it's not big in terms of the interest.
link |
01:11:11.440
I think there's a lot of interest in that kind of antiestablishment or skepticism towards...
link |
01:11:18.440
Out of the box thinking, there's a lot of interest in that kind of thing.
link |
01:11:21.440
Do you see this news project or something like it being basically taken over the world as the main way we consume information?
link |
01:11:32.440
Like, how do we get there?
link |
01:11:35.440
So, okay, the idea is brilliant. You're calling it your little project in 2020.
link |
01:11:43.440
But how does that become the new way we consume information?
link |
01:11:48.440
I hope, first of all, just to plant a little seed there.
link |
01:11:50.440
Because normally, the big barrier to doing anything in media is that you need a ton of money.
link |
01:11:56.440
But this costs no money at all.
link |
01:11:58.440
I've just been paying for it myself.
link |
01:12:00.440
You pay a tiny amount of money each month to Amazon to run the thing in their cloud.
link |
01:12:04.440
There will never be any ads.
link |
01:12:06.440
The point is not to make any money off of it.
link |
01:12:09.440
And we just train machine learning algorithms to classify the articles and stuff.
link |
01:12:13.440
So, it just kind of runs by itself.
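For readers curious what a pipeline like that can look like, here is a hypothetical sketch of an aggregation loop: fetch some feeds, score each headline with classifiers like the one sketched earlier, and write out a file for a small frontend to read. The feed URL, the stand-in classifier functions, and the file layout are all assumptions for illustration, not the actual improvethenews.org implementation.

```python
# Hypothetical aggregation loop: fetch articles, score them on a couple of
# bias axes, and emit a JSON file that a lightweight web frontend can read.
import json
import urllib.request
import xml.etree.ElementTree as ET

FEEDS = ["https://example.com/news/rss"]  # placeholder feed URLs

def fetch_headlines(url):
    # Parse RSS items; real code would add error handling and de-duplication.
    with urllib.request.urlopen(url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    return [item.findtext("title") for item in root.iter("item")]

def classify_slant(text):
    # Stand-in for a trained left-right model; returns a score in [-1, 1].
    return 0.0

def classify_establishment(text):
    # Stand-in for a second model: pro-establishment vs. critical.
    return 0.0

def run_once():
    articles = []
    for url in FEEDS:
        for title in fetch_headlines(url):
            articles.append({
                "title": title,
                "slant": classify_slant(title),
                "establishment": classify_establishment(title),
            })
    with open("articles.json", "w") as f:
        json.dump(articles, f)  # the frontend's sliders filter on these scores

if __name__ == "__main__":
    run_once()  # in production this would run on a schedule, e.g., hourly
```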
link |
01:12:15.440
So, if it actually gets good enough at some point that it starts catching on, it could scale.
link |
01:12:20.440
And if other people carbon copy it and make other versions that are better, that's the more the merrier.
link |
01:12:28.440
I think there's a real opportunity for machine learning to empower the individual against the powerful players.
link |
01:12:39.440
As I said in the beginning here, it's been mostly the other way around so far,
link |
01:12:43.440
that the big players have the AI and then they tell people this is the truth, this is how it is.
link |
01:12:49.440
But it can just as well go the other way around.
link |
01:12:52.440
When the internet was born, actually, a lot of people had this hope that maybe this will be a great thing for democracy,
link |
01:12:57.440
make it easier to find out about things.
link |
01:12:59.440
And maybe machine learning and things like this can actually help again.
link |
01:13:03.440
And I have to say, I think it's more important than ever now,
link |
01:13:06.440
because this is very linked also to the whole future of life as we discussed earlier.
link |
01:13:13.440
We're getting this ever more powerful tech.
link |
01:13:16.440
Frankly, it's pretty clear if you look on the one or two generation, three generation timescale,
link |
01:13:21.440
that there are only two ways this can end, geopolitically.
link |
01:13:24.440
Either it ends great for all humanity, or it ends terribly for all of us.
link |
01:13:31.440
There's really no way in between.
link |
01:13:33.440
And we're all stuck in this together, because technology knows no borders.
link |
01:13:38.440
And you can't have people fighting when the weapons just keep getting ever more powerful indefinitely.
link |
01:13:46.440
Eventually, the luck runs out.
link |
01:13:50.440
And right now we have, I love America, but the fact of the matter is,
link |
01:13:58.440
what's good for America is not, in the long term, opposed to what's good for other countries.
link |
01:14:04.440
It would be if this was some sort of zero sum game like it was thousands of years ago,
link |
01:14:10.440
when the only way one country could get more resources was to take land from other countries,
link |
01:14:15.440
because that was basically the resource.
link |
01:14:17.440
Look at the map of Europe, some countries kept getting bigger and smaller, endless wars.
link |
01:14:22.440
But then, since 1945, there hasn't been any war in Western Europe, and they all got way richer, because of tech.
link |
01:14:29.440
So the optimistic outcome is that the big winner in this century is going to be America and China,
link |
01:14:37.440
and Russia, and everybody else, because technology just makes us all healthier and wealthier.
link |
01:14:41.440
And we just find some way of keeping the peace on this planet.
link |
01:14:46.440
But I think, unfortunately, there are some pretty powerful forces right now
link |
01:14:50.440
that are pushing in exactly the opposite direction and trying to demonize other countries,
link |
01:14:54.440
which just makes it more likely that this ever more powerful tech we're building is going to be used in disastrous ways.
link |
01:15:02.440
Yeah, for aggression versus cooperation, that kind of thing.
link |
01:15:05.440
Yeah, even look at just military AI now, right?
link |
01:15:08.440
It was so awesome to see these dancing robots.
link |
01:15:12.440
I loved it, right? But one of the biggest growth areas in robotics now is, of course, autonomous weapons.
link |
01:15:19.440
And 2020 was like the best marketing year ever for autonomous weapons,
link |
01:15:24.440
because in both the Libyan civil war and in Nagorno Karabakh, they made the decisive difference, right?
link |
01:15:34.440
And everybody else is like watching this, oh yeah, we want to build autonomous weapons too.
link |
01:15:38.440
In Libya, you had, on one hand, our ally, the United Arab Emirates,
link |
01:15:46.440
that were flying their autonomous weapons that they bought from China, bombing Libyans.
link |
01:15:51.440
And on the other side, you had our other ally, Turkey, flying their drones.
link |
01:15:56.440
None of these other countries had any skin in the game.
link |
01:16:01.440
And of course, it was the Libyans who really got screwed.
link |
01:16:04.440
In Nagorno Karabakh, you had actually, again, now Turkey is sending drones built by this company
link |
01:16:12.440
that was actually founded by a guy who went to MIT AeroAstro, you know that?
link |
01:16:18.440
So MIT has a direct responsibility for ultimately this.
link |
01:16:22.440
And a lot of civilians were killed there.
link |
01:16:25.440
So because it was militarily so effective, now suddenly there's like a huge push.
link |
01:16:31.440
Yeah, yeah, let's go build ever more autonomy into these weapons.
link |
01:16:37.440
And it's going to be great.
link |
01:16:39.440
And I think actually people who are obsessed about some sort of future Terminator
link |
01:16:46.440
scenario right now should start focusing on the fact that we have two
link |
01:16:52.440
much more urgent threats happening with machine learning.
link |
01:16:54.440
One of them is the whole destruction of democracy that we've talked about now,
link |
01:16:58.440
where our flow of information is being manipulated by machine learning.
link |
01:17:03.440
And the other one is that right now, you know, this is the year when the big,
link |
01:17:08.440
out of control arms race in lethal autonomous weapons is either going to start or it's going to stop.
link |
01:17:14.440
So you have a sense that 2020 was an instrumental catalyst for the autonomous weapons race.
link |
01:17:23.440
Yeah, because it was the first year when they proved decisive in the battlefield.
link |
01:17:27.440
And these ones are still not fully autonomous, mostly they're remote controlled, right?
link |
01:17:32.440
But, you know, we could very quickly make things about, you know, the size and cost of a smartphone,
link |
01:17:41.440
which you just put in the GPS coordinates or the face of the one you want to kill,
link |
01:17:45.440
a skin color or whatever and it flies away and does it.
link |
01:17:48.440
And the real good reason why the US and all the other superpowers should put the kibosh on this
link |
01:17:56.440
is the same reason we decided to put the kibosh on bio weapons.
link |
01:18:01.440
So, you know, we gave the Future of Life Award, which we can talk more about later,
link |
01:18:06.440
to Matthew Meselson from Harvard for convincing Nixon to ban bioweapons.
link |
01:18:10.440
And I asked him, how did you do it?
link |
01:18:13.440
And he was like, well, I just said, look, we don't want there to be a $500 weapon of mass destruction
link |
01:18:20.440
that even all our enemies can afford, even non state actors.
link |
01:18:26.440
And Nixon was like, good point.
link |
01:18:31.440
You know, it's in America's interest that the powerful weapons are all really expensive.
link |
01:18:36.440
So only we can afford them or maybe some more stable adversaries, right?
link |
01:18:40.440
Nuclear weapons are like that.
link |
01:18:42.440
But bio weapons were not like that.
link |
01:18:44.440
That's why we banned them.
link |
01:18:46.440
And that's why you never hear about them now.
link |
01:18:48.440
That's why we love biology.
link |
01:18:50.440
So you have a sense that it's possible for the big powerhouses in terms of the big nations in the world
link |
01:18:58.440
to agree that autonomous weapons is not a race we want to be on.
link |
01:19:02.440
That it doesn't end well.
link |
01:19:04.440
Yeah, because we know it's just going to end in mass proliferation
link |
01:19:06.440
and every terrorist everywhere is going to have these super cheap weapons that they will use against us.
link |
01:19:12.440
And our politicians would have to constantly worry about being assassinated every time they go outdoors
link |
01:19:18.440
by some anonymous little mini drone.
link |
01:19:20.440
We don't want that.
link |
01:19:22.440
And even if the U.S. and China and everyone else could just agree that you can only build these weapons
link |
01:19:28.440
if they cost at least 10 million bucks, that would be a huge win for the superpowers.
link |
01:19:34.440
And frankly for everybody, people often push back and say,
link |
01:19:40.440
well, it's so hard to prevent cheating.
link |
01:19:42.440
But hey, you can say the same about bioweapons.
link |
01:19:45.440
Take any of your MIT colleagues in biology.
link |
01:19:49.440
Of course they could build some nasty bioweapon if they really wanted to.
link |
01:19:53.440
But first of all, they don't want to because they think it's disgusting because of the stigma.
link |
01:19:57.440
And second, even if there's some sort of nutcase and want to,
link |
01:20:01.440
it's very likely that some of their grad students or someone would rat them out
link |
01:20:05.440
because everyone else thinks it's so disgusting.
link |
01:20:07.440
And in fact, we now know there was even a fair bit of cheating on the bioweapons ban.
link |
01:20:12.440
But none, no countries used them because it was so stigmatized
link |
01:20:17.440
that it just wasn't worth revealing that they had cheated.
link |
01:20:22.440
You talk about drones, but people kind of think of drones as remote operation.
link |
01:20:28.440
Which they are mostly still.
link |
01:20:30.440
But you're not taking the next intellectual step of like, where does this go?
link |
01:20:36.440
You're kind of saying the problem with drones is that you're removing yourself from direct violence.
link |
01:20:42.440
Therefore, you're not able to sort of maintain the common humanity
link |
01:20:45.440
required to make the proper decisions strategically.
link |
01:20:48.440
But that's the criticism as opposed to like, if this is automated,
link |
01:20:52.440
and just exactly as you said, if you automate it and there's a race,
link |
01:20:58.440
then the technology is going to get better and better and better,
link |
01:21:01.440
which means getting cheaper and cheaper and cheaper.
link |
01:21:03.440
And unlike perhaps nuclear weapons, which are connected to resources in a way,
link |
01:21:10.440
like it's hard to get the resources, it's hard to engineer.
link |
01:21:13.440
It feels like it's, you know, there's too much overlap between the tech industry
link |
01:21:19.440
and autonomous weapons to where you could have smartphone type of cheapness.
link |
01:21:24.440
If you look at drones, you know, for $1,000,
link |
01:21:29.440
you have an incredible system that's able to maintain flight autonomously for you
link |
01:21:34.440
and take pictures and stuff.
link |
01:21:36.440
You could see that going into the autonomous weapons space.
link |
01:21:41.440
But, like, why is that not thought about or discussed enough in the public?
link |
01:21:45.440
Do you think, you see those dancing Boston Dynamics robots and everybody has this kind of reaction,
link |
01:21:52.440
like, as if this is the far future.
link |
01:21:55.440
They have this like fear, like, oh, this will be Terminator in like some,
link |
01:21:59.440
I don't know, unspecified 20, 30, 40 years.
link |
01:22:02.440
And they don't think about, well, some much less dramatic version of that is actually happening now.
link |
01:22:11.440
It's not going to be legged, it's not going to be dancing,
link |
01:22:14.440
but it already has the capability to use artificial intelligence to kill humans.
link |
01:22:20.440
Yeah, the Boston Dynamics legged robots, I think the reason we imagine them holding guns
link |
01:22:24.440
is just because you've all seen Arnold Schwarzenegger, right?
link |
01:22:28.440
That's our reference point.
link |
01:22:30.440
That's pretty useless.
link |
01:22:32.440
That's not going to be the main military use of them.
link |
01:22:35.440
They might be useful in law enforcement in the future.
link |
01:22:38.440
And there's a whole debate about whether you want robots showing up at your house with guns
link |
01:22:42.440
that will be perfectly obedient to whatever dictator controls them.
link |
01:22:47.440
But let's leave that aside for a moment and look at what's actually relevant now.
link |
01:22:51.440
There's a spectrum of things you can do with AI in the military.
link |
01:22:55.440
And again, to put my cards on the table, I'm not a pacifist.
link |
01:22:58.440
I think we should have good defense.
link |
01:23:01.440
So, for example, a predator drone is basically a fancy little remote controlled airplane.
link |
01:23:10.440
There's a human piloting it and the decision ultimately about whether to kill somebody with it is made by a human still.
link |
01:23:18.440
And this is a line I think we should never cross.
link |
01:23:23.440
There's a current DOD policy.
link |
01:23:25.440
Again, you have to have a human in the loop.
link |
01:23:27.440
I think algorithms should never make life or death decisions.
link |
01:23:31.440
They should be left to humans.
link |
01:23:33.440
Now, why might we cross that line?
link |
01:23:37.440
Well, first of all, these are expensive, right?
link |
01:23:40.440
So, for example, when Azerbaijan had all these drones and Armenia didn't have any,
link |
01:23:47.440
they started trying to jerry-rig little cheap things to fly around.
link |
01:23:51.440
But then, of course, they would get jammed.
link |
01:23:53.440
The Azeris would jam them.
link |
01:23:55.440
And remote controlled things can be jammed.
link |
01:23:57.440
That makes them inferior.
link |
01:23:59.440
Also, there's a bit of a time delay between, you know, if we're piloting something far away,
link |
01:24:05.440
speed of light, and the human has a reaction time as well,
link |
01:24:09.440
it would be nice to eliminate that jamming possibility and the time delay by having it fully autonomous.
link |
01:24:14.440
But now you might be crossing that exact line.
link |
01:24:19.440
You might program it to just, oh, yeah, dear drone, go hover over this country for a while
link |
01:24:25.440
and whenever you find someone who is a bad guy, you know, kill them.
link |
01:24:30.440
Now, the machine is making these sort of decisions.
link |
01:24:33.440
And some people who defend this still say, well, that's morally fine
link |
01:24:37.440
because we are the good guys and we will tell it the definition of bad guy
link |
01:24:43.440
that we think is moral.
link |
01:24:45.440
But now it would be very naive to think that if ISIS buys that same drone
link |
01:24:51.440
that they're going to use our definition of bad guy.
link |
01:24:54.440
Maybe for them, bad guy is someone wearing a U.S. Army uniform.
link |
01:24:58.440
Or maybe there will be some weird ethnic group
link |
01:25:06.440
who decides that someone of another ethnic group, they are the bad guys, right?
link |
01:25:10.440
The thing is, human soldiers, with all of our faults, right,
link |
01:25:14.440
we still have some basic wiring in us.
link |
01:25:17.440
Like, no, it's not okay to kill kids and civilians.
link |
01:25:22.440
And an autonomous weapon has none of that.
link |
01:25:24.440
It's just going to do whatever it's programmed to do.
link |
01:25:26.440
It's like the perfect Adolf Eichmann on steroids.
link |
01:25:30.440
Like, they told Adolf Eichmann, you know, we want you to do this and this and this
link |
01:25:34.440
to make the Holocaust more efficient.
link |
01:25:36.440
And he was like, yeah, and off he went and did it, right?
link |
01:25:40.440
Do we really want to make machines that are like that, like completely amoral
link |
01:25:45.440
and will take the user's definition of who is the bad guy?
link |
01:25:48.440
And do we then want to make them so cheap that all our adversaries can have them?
link |
01:25:52.440
Like, what could possibly go wrong?
link |
01:25:55.440
That's the big argument for why we want to, this year, really put the kibosh on this.
link |
01:26:03.440
And I think you can tell there's a lot of very active debate even going on
link |
01:26:08.440
within the U.S. military and undoubtedly in other militaries around the world also
link |
01:26:12.440
about whether we should have some sort of international agreement
link |
01:26:15.440
to at least require that these weapons have to be above a certain size and cost,
link |
01:26:21.440
so that things just don't totally spiral out of control.
link |
01:26:29.440
And finally, to your question, is it possible to stop it?
link |
01:26:33.440
Because some people tell me, oh, just give up, you know.
link |
01:26:36.440
But again, Matthew Meselson from Harvard, right, the bioweapons hero,
link |
01:26:44.440
he faced exactly this criticism also with bioweapons.
link |
01:26:47.440
People were like, how can you check for sure that the Russians aren't cheating?
link |
01:26:52.440
And he told me this, I think really ingenious insight, he said, you know, Max,
link |
01:27:00.440
some people think you have to have inspections and things,
link |
01:27:03.440
and you have to make sure that people, you can catch the cheaters with 100% chance.
link |
01:27:08.440
You don't need 100%, he said.
link |
01:27:10.440
1% is usually enough.
link |
01:27:13.440
Because if it's another big state,
link |
01:27:18.440
suppose China and the US have signed a treaty drawing a certain line and saying,
link |
01:27:24.440
yeah, these kind of drones are okay, but these fully autonomous ones are not.
link |
01:27:28.440
Now suppose you are China and you have cheated and secretly developed some clandestine little thing,
link |
01:27:35.440
or you're thinking about doing it, you know, what's your calculation that you do?
link |
01:27:39.440
Well, you're like, okay, what's the probability that we're going to get caught?
link |
01:27:44.440
If the probability is 100%, of course, we're not going to do it.
link |
01:27:48.440
But if the probability is 5% that we're going to get caught,
link |
01:27:52.440
then it's going to be like a huge embarrassment for us.
link |
01:27:55.440
And we still have our nuclear weapons anyway, so it doesn't really make any enormous difference
link |
01:28:04.440
in terms of deterring the US, you know.
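Meselson's point can be framed as a back-of-the-envelope expected-value calculation; the numbers below are purely illustrative assumptions, not real estimates, but they show why even a small detection probability can deter cheating when the gain is marginal and the reputational cost is large.

```python
# Toy expected-value calculation for a state deciding whether to cheat on a
# hypothetical autonomous-weapons treaty. All numbers are invented to show
# the structure of the argument, not to estimate anything real.
p_caught = 0.05              # even a small detection probability matters
gain_if_unnoticed = 1.0      # marginal military benefit (nuclear deterrent already exists)
cost_if_caught = 50.0        # stigma, diplomatic fallout, rivals going "full tilt"

expected_value = (1 - p_caught) * gain_if_unnoticed - p_caught * cost_if_caught
print(expected_value)  # negative here, so cheating is not worth it even at 5% detection
```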
link |
01:28:07.440
And that feeds the stigma that you kind of establish, like this fabric, this universal stigma over the thing.
link |
01:28:14.440
Exactly.
link |
01:28:15.440
It's very reasonable for them to say, well, you know, we probably get away with it,
link |
01:28:18.440
but if we don't, then the US will know we cheated,
link |
01:28:21.440
and then they're going to go full tilt with their program and say, look, the Chinese are cheaters,
link |
01:28:24.440
and now we have all these weapons against us, and that's bad.
link |
01:28:27.440
So the stigma alone is very, very powerful.
link |
01:28:31.440
And again, look what happened with bioweapons, right?
link |
01:28:34.440
It's been 50 years now.
link |
01:28:36.440
When was the last time you read about a bioterrorism attack?
link |
01:28:39.440
The only deaths I really know about with bioweapons that have happened,
link |
01:28:43.440
when we Americans managed to kill some of our own with anthrax,
link |
01:28:46.440
you know, the idiot who sent them to Tom Daschle and others in letters, right?
link |
01:28:50.440
And similarly, in Sverdlovsk in the Soviet Union,
link |
01:28:55.440
they had some anthrax in some lab there.
link |
01:28:57.440
Maybe they were cheating or who knows,
link |
01:28:59.440
and it leaked out and killed a bunch of Russians.
link |
01:29:01.440
I'd say that's a pretty good success, right?
link |
01:29:04.440
50 years, just two own goals by the superpowers, and then nothing.
link |
01:29:09.440
And that's why whenever I ask anyone what they think about biology,
link |
01:29:13.440
they think it's great.
link |
01:29:15.440
They associate it with new cures, new diseases, maybe a good vaccine.
link |
01:29:19.440
This is how I want to think about AI in the future.
link |
01:29:22.440
And I want others to think about AI too,
link |
01:29:24.440
as a source of all these great solutions to our problems,
link |
01:29:27.440
not as, oh, AI.
link |
01:29:30.440
Oh, yeah, that's the reason I feel scared going outside these days.
link |
01:29:34.440
Yeah, it's kind of brilliant that with bioweapons and nuclear weapons,
link |
01:29:39.440
we've figured out, I mean, of course, they're still a huge source of danger,
link |
01:29:43.440
but we figured out some way of creating rules and social stigma
link |
01:29:50.440
over these weapons that then creates a stability,
link |
01:29:54.440
whatever that game theoretic stability is, of course.
link |
01:29:56.440
Exactly, exactly.
link |
01:29:57.440
And we don't have that with AI, and you're kind of screaming from the top
link |
01:30:01.440
of the mountain about this, that we need to find that,
link |
01:30:05.440
because, as you've pointed out with the Future of Life
link |
01:30:10.440
Institute Awards, with nuclear weapons,
link |
01:30:17.440
we could have destroyed ourselves quite a few times.
link |
01:30:21.440
And it's a learning experience that is very costly.
link |
01:30:28.440
We gave this Future of Life Award the first time to this guy,
link |
01:30:33.440
Vasily Arkhipov. Most people haven't even heard of him.
link |
01:30:37.440
Yeah, can you say who he is?
link |
01:30:38.440
Vasily Arkhipov, he has, in my opinion,
link |
01:30:43.440
made the greatest positive contribution to humanity of any human in modern history.
link |
01:30:49.440
And maybe it sounds like hyperbole here, like I'm just over the top,
link |
01:30:53.440
but let me tell you the story, and I think maybe you'll agree.
link |
01:30:55.440
So during the Cuban Missile Crisis, we Americans first didn't know
link |
01:31:01.440
that the Russians had sent four submarines, but we caught two of them,
link |
01:31:06.440
so we dropped practice depth charges on the one that he was on,
link |
01:31:11.440
trying to force it to the surface.
link |
01:31:14.440
But we didn't know that this particular submarine actually was a nuclear submarine
link |
01:31:18.440
with a nuclear torpedo.
link |
01:31:20.440
We also didn't know that they had an authorization to launch it without clearance from Moscow.
link |
01:31:24.440
And we also didn't know that they were running out of electricity,
link |
01:31:28.440
their batteries were almost dead, they were running out of oxygen,
link |
01:31:31.440
sailors were fainting left and right.
link |
01:31:34.440
The temperature was about 110, 120 Fahrenheit on board,
link |
01:31:39.440
it was really hellish conditions, really just a kind of doomsday.
link |
01:31:42.440
And at that point, these giant explosions start happening
link |
01:31:46.440
from the Americans dropping these depth charges.
link |
01:31:48.440
The captain thought World War III had begun.
link |
01:31:50.440
They decided that they were going to launch the nuclear torpedo.
link |
01:31:53.440
And one of them shouted, you know, we're all going to die,
link |
01:31:56.440
but we're not going to disgrace our navy.
link |
01:31:58.440
We don't know what would have happened if there had been a giant mushroom cloud all of a sudden
link |
01:32:03.440
against Americans, but since everybody had their hands on the triggers,
link |
01:32:08.440
you don't have to be too creative to think that it could have led to an all out nuclear war,
link |
01:32:12.440
in which case we wouldn't be having this conversation now, right?
link |
01:32:15.440
What actually took place was they needed three people to approve this.
link |
01:32:20.440
The captain had said yes, there was the Communist Party political officer,
link |
01:32:24.440
he also said yes, let's do it.
link |
01:32:26.440
And the third man was this guy, Vasily Arkhipov, who said no.
link |
01:32:29.440
For some reason, he was just more chill than the others
link |
01:32:34.440
and he was the right man at the right time.
link |
01:32:36.440
I don't want us as a species to rely on the right person being there at the right time.
link |
01:32:41.440
You know, we tracked down his family living in relative poverty outside Moscow.
link |
01:32:48.440
He had passed away, so we flew his daughter and family to London.
link |
01:32:54.440
They had never been to the West even.
link |
01:32:55.440
It was incredibly moving to get to honor them for this.
link |
01:32:58.440
The next year we gave this future life award to Stanislav Petrov.
link |
01:33:03.440
Have you heard of him?
link |
01:33:04.440
Yes.
link |
01:33:05.440
He was in charge of the Soviet early warning station which was built with Soviet technology
link |
01:33:12.440
and honestly not that reliable.
link |
01:33:14.440
It said that there were five US missiles coming in.
link |
01:33:17.440
Again, if they had launched at that point, we probably wouldn't be having this conversation.
link |
01:33:23.440
He decided based on just mainly gut instinct to just not escalate this.
link |
01:33:32.440
I'm very glad he wasn't replaced by an AI that was just automatically falling orders.
link |
01:33:37.440
Then we gave the third one to Matthew Meselson.
link |
01:33:39.440
Last year we gave this award to these guys who actually used technology for good,
link |
01:33:46.440
not avoiding something bad, but for something good.
link |
01:33:49.440
The guys who eliminated this disease, which is way worse than COVID,
link |
01:33:54.440
that had killed half a billion people in its final century.
link |
01:33:58.440
Smallpox.
link |
01:33:59.440
You mentioned it earlier.
link |
01:34:02.440
COVID on average kills less than 1% of people who get it.
link |
01:34:05.440
Smallpox, about 30%.
link |
01:34:08.440
Ultimately, Viktor Zhdanov and Bill Foege, most of my colleagues have never heard of either of them,
link |
01:34:17.440
one American, one Russian, they did this amazing effort.
link |
01:34:22.440
Not only was Zhdanov able to get the US and the Soviet Union to team up against smallpox
link |
01:34:26.440
during the Cold War,
link |
01:34:28.440
but Foege came up with this ingenious strategy for making it actually go all the way
link |
01:34:33.440
to defeat the disease without funding for vaccinating everyone.
link |
01:34:38.440
As a result, we went from 15 million deaths from smallpox the year I was born.
link |
01:34:44.440
So what do we have in COVID now?
link |
01:34:46.440
A little bit short of 2 million, right?
link |
01:34:47.440
Yes.
link |
01:34:48.440
To zero deaths, of course, this year.
link |
01:34:50.440
And there have been 200 million people
link |
01:34:54.440
who would have died since then by smallpox had it not been for this.
link |
01:34:58.440
So isn't science awesome when you use it for good?
link |
01:35:01.440
And the reason we want to celebrate these sorts of people is to remind us of this.
link |
01:35:05.440
Science is so awesome when you use it for good.
link |
01:35:09.440
And those awards actually, the variety there, it's a very interesting picture.
link |
01:35:14.440
So the first two are looking at, it's kind of exciting to think that these average humans,
link |
01:35:22.440
in some sense, they're products of billions of other humans that came before them, evolution.
link |
01:35:29.440
And some little, you said gut, but there's something in there that stopped the annihilation of the human race.
link |
01:35:40.440
And that's a magical thing, but that's like this deeply human thing.
link |
01:35:44.440
And then there's the other aspect where it's also very human,
link |
01:35:49.440
which is to build solutions to the existential crises that we're facing,
link |
01:35:55.440
to build it, to take responsibility, to come up with different technologies and so on.
link |
01:36:00.440
And both of those are deeply human.
link |
01:36:03.440
The gut and the mind, whatever that is.
link |
01:36:07.440
The best is when they work together.
link |
01:36:08.440
Arkhipov, I wish I could have met him, of course, but he had passed away.
link |
01:36:12.440
He was really a fantastic military officer, combining all the best traits that we in America admire in our military.
link |
01:36:20.440
Because first of all, he was very loyal, of course.
link |
01:36:23.440
He never even told anyone about this during his whole life, even though you'd think he had some bragging rights, right?
link |
01:36:28.440
But he just was like, this is just business, just doing my job.
link |
01:36:31.440
It only came out later after his death.
link |
01:36:33.440
And second, the reason he did the right thing was not because he was some sort of liberal,
link |
01:36:39.440
not because he was just, oh, you know, peace and love.
link |
01:36:46.440
It was partly because he had been the captain on another submarine that had a nuclear reactor meltdown.
link |
01:36:52.440
And it was his heroism that helped contain this.
link |
01:36:57.440
That's why he died of cancer later also.
link |
01:36:59.440
But he'd seen many of his crew members die.
link |
01:37:01.440
And I think for him, that gave him this gut feeling that, you know,
link |
01:37:04.440
if there's a nuclear war between the US and the Soviet Union, the whole world is going to go through
link |
01:37:10.440
what I saw my dear crew members suffer through.
link |
01:37:13.440
It wasn't just an abstract thing for him.
link |
01:37:15.440
I think it was real.
link |
01:37:17.440
And second, though, not just the gut, the mind, right?
link |
01:37:20.440
He was, for some reason, very level headed personality and very smart guy,
link |
01:37:25.440
which is exactly what we want our best fighter pilots to be also.
link |
01:37:29.440
I'll never forget Neil Armstrong when he's landing on the moon and almost running out of gas.
link |
01:37:34.440
And when they say 30 seconds,
link |
01:37:37.440
He doesn't even change the tone of voice, just keeps going.
link |
01:37:39.440
Arkhipov, I think, was just like that.
link |
01:37:41.440
So when the explosions start going off and his captain is screaming that we should nuke them and all,
link |
01:37:46.440
he's like, I don't think the Americans are trying to sink us.
link |
01:37:53.440
I think they're trying to send us a message.
link |
01:37:57.440
That's pretty badass.
link |
01:37:59.440
Coolness.
link |
01:38:00.440
Because he said, if they wanted to sink us, he said, listen, listen,
link |
01:38:05.440
it's alternating one loud explosion on the left, one on the right, one on the left, one on the right.
link |
01:38:12.440
He was the only one to notice this pattern.
link |
01:38:15.440
And he's like, I think this is them trying to send us a signal that they want us to surface
link |
01:38:22.440
and they're not going to sink us.
link |
01:38:25.440
And somehow, ultimately, with his combination of gut
link |
01:38:34.440
and also just cool analytical thinking, he was able to deescalate the whole thing.
link |
01:38:40.440
And yeah, so this is some of the best in humanity.
link |
01:38:44.440
I guess coming back to what we talked about earlier is the combination of the neural network,
link |
01:38:47.440
the instinctive, you know, with I'm tearing up here, getting emotional.
link |
01:38:51.440
But he is one of my superheroes having both the heart and the mind combined.
link |
01:39:00.440
And especially in that time, there's something about the, I mean, this is a very,
link |
01:39:04.440
in America, people are used to this kind of idea of being the individual, of thinking on your own.
link |
01:39:11.440
I think in the Soviet Union under communism, it's actually much harder to do that.
link |
01:39:17.440
Oh yeah, he didn't even get any accolades when he came back for this, right?
link |
01:39:23.440
They just wanted to hush the whole thing up.
link |
01:39:25.440
Yeah, there's echoes of that with Chernobyl, there's all kinds of, that's one,
link |
01:39:32.440
that's a really hopeful thing that amidst big centralized powers,
link |
01:39:37.440
whether it's companies or states, there's still the power of the individual to think on their own to act.
link |
01:39:43.440
But I think we need to think of people like this, not as a panacea we can always count on,
link |
01:39:49.440
but rather as a wake up call, you know.
link |
01:39:54.440
So because of them, because of Arkhipov, we are alive to learn from this lesson,
link |
01:40:00.440
to learn from the fact that we shouldn't keep playing Russian roulette
link |
01:40:03.440
and almost have a nuclear war by mistake now and then,
link |
01:40:06.440
because relying on luck is not a good long term strategy.
link |
01:40:09.440
If you keep playing Russian roulette over and over again,
link |
01:40:11.440
the probability of surviving just drops exponentially with time.
link |
01:40:14.440
And if you have some probability of having an accidental nuclear war every year,
link |
01:40:18.440
the probability of not having one also drops exponentially.
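As a rough back-of-the-envelope illustration of that exponential decay, here is a minimal Python sketch; the annual risk figure is a made-up placeholder, not an estimate from the conversation.

```python
# Illustrative only: the annual risk below is a hypothetical placeholder,
# not an estimate anyone gives in this conversation.
annual_risk = 0.01  # assume a 1% chance of accidental nuclear war per year

for years in (10, 50, 100, 300):
    p_no_war = (1 - annual_risk) ** years  # survival probability decays exponentially
    print(f"{years:>3} years: P(no accidental war so far) = {p_no_war:.2f}")
```

Even a small constant annual risk compounds into a large cumulative one, which is the point being made.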
link |
01:40:21.440
I think we can do better than that.
link |
01:40:23.440
So I think the message is very clear, once in a while shit happens,
link |
01:40:27.440
and there's a lot of very concrete things we can do to reduce the risk of things like that happening in the first place.
link |
01:40:36.440
On the AI front, if we just linger on that for a second.
link |
01:40:40.440
So you're friends with, you often talk with Elon Musk throughout history.
link |
01:40:45.440
You've done a lot of interesting things together.
link |
01:40:48.440
He has a set of fears about the future of artificial intelligence, AGI.
link |
01:40:55.440
Do you have a sense, we've already talked about the things we should be worried about with AI.
link |
01:41:01.440
Do you have a sense of the shape of his fears in particular,
link |
01:41:05.440
about AI, which subset of what we've talked about, whether it's creating,
link |
01:41:13.440
it's that direction of creating these giant computational systems that are not explainable,
link |
01:41:19.440
they're not intelligible intelligence, or is it the...
link |
01:41:26.440
And then as a branch of that, is it the manipulation by big corporations of that,
link |
01:41:31.440
or individual evil people using it for destruction, or the unintended consequences?
link |
01:41:37.440
Do you have a sense of where his thinking is on this?
link |
01:41:40.440
From my many conversations with Elon, I certainly have a model of how he thinks.
link |
01:41:47.440
It's actually very much like the way I think also, I'll elaborate on it a bit.
link |
01:41:51.440
I just want to push back on when you said evil people.
link |
01:41:54.440
I don't think it's a very helpful concept, evil people.
link |
01:41:59.440
Sometimes people do very, very bad things, but they usually do it because they think it's a good thing,
link |
01:42:05.440
because somehow other people had told them that that was a good thing,
link |
01:42:08.440
or given them incorrect information, or whatever.
link |
01:42:15.440
I believe in the fundamental goodness of humanity that if we educate people well,
link |
01:42:21.440
and they find out how things really are, people generally want to do good and be good.
link |
01:42:28.440
There's a sense of value alignment.
link |
01:42:30.440
It's about information, it's about knowledge, and then once we have that,
link |
01:42:35.440
we'll likely be able to do good in the way that's aligned with everybody else who thinks it's good.
link |
01:42:42.440
Yeah, and it's not just the individual people we have to align,
link |
01:42:45.440
so we don't just want people to be educated to know the way things actually are,
link |
01:42:51.440
and to treat each other well, but we also need to align other nonhuman entities.
link |
01:42:56.440
We've talked about corporations, there has to be institutions,
link |
01:42:58.440
so that what they do is actually good for the country they're in,
link |
01:43:01.440
and we should make sure that what countries do is actually good for the species as a whole, etc.
link |
01:43:07.440
Coming back to Elon, my understanding of how Elon sees this is really quite similar to my own,
link |
01:43:15.440
which is one of the reasons I like him so much, and enjoy talking with him so much.
link |
01:43:19.440
He's quite different from most people in that he thinks much more than most people about the really big picture,
link |
01:43:29.440
not just what's going to happen in the next election cycle,
link |
01:43:32.440
but in millennia, millions and billions of years from now.
link |
01:43:36.440
When you look in this more cosmic perspective, it's so obvious that we're gazing out into this universe
link |
01:43:42.440
that as far as we can tell is mostly dead, with life being an almost imperceptibly tiny perturbation, right?
link |
01:43:49.440
And he sees this enormous opportunity for our universe to come alive,
link |
01:43:53.440
first to become an interplanetary species.
link |
01:43:55.440
Mars is obviously just the first stop on this cosmic journey,
link |
01:44:01.440
and precisely because he thinks more long term, it's much more clear to him than to most people
link |
01:44:08.440
that what we do with this Russian roulette thing we keep playing with our nukes is a really poor strategy,
link |
01:44:14.440
a really reckless strategy, and also that we're just building these ever more powerful AI systems
link |
01:44:19.440
that we don't understand is also a really reckless strategy.
link |
01:44:23.440
I feel Elon is a humanist in the sense that he wants an awesome future for humanity.
link |
01:44:30.440
He wants it to be us that control the machines, rather than the machines that control us.
link |
01:44:38.440
And why shouldn't we insist on that? We're building them after all, right?
link |
01:44:44.440
Why should we build things that just make us into some little cog in the machinery
link |
01:44:48.440
that has no further say in the matter, right?
link |
01:44:50.440
It's not my idea of an inspiring future either.
link |
01:44:54.440
Yeah, if you think on the cosmic scale in terms of both time and space, so much is put into perspective.
link |
01:45:03.440
Whenever I have a bad day, that's what I think about. It immediately makes me feel better.
link |
01:45:09.440
It makes me sad that for us individual humans, at least for now, the ride ends too quickly.
link |
01:45:16.440
We don't get to experience the cosmic scale.
link |
01:45:20.440
I mean, I think of our universe sometimes as an organism that has only begun to wake up a tiny bit.
link |
01:45:25.440
Just like the very first little glimmers of consciousness you have in the morning when you start coming around.
link |
01:45:31.440
Before the coffee.
link |
01:45:32.440
Before the coffee. Even before you get out of bed, before you even open your eyes, start to wake up a little bit.
link |
01:45:39.440
There's something here, you know. That's very much how I think of where we are.
link |
01:45:46.440
All those galaxies out there, I think they're really beautiful.
link |
01:45:50.440
But why are they beautiful?
link |
01:45:52.440
They're beautiful because conscious entities are actually observing them and experiencing them through our telescopes.
link |
01:45:59.440
I define consciousness as subjective experience.
link |
01:46:05.440
Whether it be colors or emotions or sounds.
link |
01:46:09.440
So beauty is an experience.
link |
01:46:12.440
Meaning is an experience.
link |
01:46:13.440
Purpose is an experience.
link |
01:46:15.440
If there was no conscious experience observing these galaxies, they wouldn't be beautiful.
link |
01:46:19.440
If we do something dumb with advanced AI in the future here and Earth-originating life goes extinct,
link |
01:46:28.440
And that was it for this.
link |
01:46:30.440
If there is nothing else with telescopes in our universe, then it's kind of game over for beauty and meaning and purpose in our whole universe.
link |
01:46:37.440
And I think that would be just such an opportunity lost, frankly.
link |
01:46:41.440
And I think when Elon points this out, he gets very unfairly maligned in the media for all the dumb media bias reasons we talked about, right?
link |
01:46:52.440
They want to print precisely the things about Elon out of context that are really clickbaity.
link |
01:46:58.440
Like he has gotten so much flak for this summoning the demon statement.
link |
01:47:04.440
I happen to know exactly the context because I was in the front row when he gave that talk. It was at MIT, you'll be pleased to know.
link |
01:47:11.440
It was the AeroAstro anniversary.
link |
01:47:13.440
They had Buzz Aldrin there from the moon landing, a full house, Kresge Auditorium packed with MIT students.
link |
01:47:20.440
And he had this amazing Q&A.
link |
01:47:22.440
It might have gone for an hour and they talked about rockets and Mars and everything.
link |
01:47:26.440
At the very end, this one student who had actually taken my class asked him, what about AI?
link |
01:47:32.440
Elon makes this one comment and they take this out of context, print it, goes viral.
link |
01:47:39.440
Was it the one where, with AI, we're summoning the demon, and stuff like that?
link |
01:47:42.440
And try to cast him as some sort of doom and gloom dude.
link |
01:47:46.440
You know Elon.
link |
01:47:48.440
He's not the doom and gloom dude.
link |
01:47:51.440
He is such a positive visionary.
link |
01:47:53.440
And the whole reason he warns about this is because he realizes more than most what the opportunity cost is of screwing up.
link |
01:47:59.440
That there is so much awesomeness in the future that we can and our descendants can enjoy if we don't screw up, right?
link |
01:48:07.440
I get so pissed off when people try to cast him as some sort of technophobic Luddite.
link |
01:48:14.440
And at this point, it's kind of ludicrous when I hear people say that people who worry about artificial general intelligence are Luddites.
link |
01:48:24.440
Because of course, if you look more closely, you have some of the most outspoken people making warnings.
link |
01:48:32.440
Are people like Professor Stuart Russell from Berkeley who's written the best selling AI textbook, you know.
link |
01:48:38.440
So when people claim that he is a Luddite who doesn't understand AI, the joke is really on the people who said it.
link |
01:48:46.440
But I think more broadly, this message really has not sunk in at all.
link |
01:48:50.440
What it is that people worry about.
link |
01:48:52.440
They think that Elon and Stuart Russell and others are worried about the dancing robots picking up an AR15 and going on a rampage, right?
link |
01:49:03.440
They think they're worried about robots turning evil.
link |
01:49:07.440
They're not. I'm not.
link |
01:49:09.440
The risk is not malice. It's competence.
link |
01:49:15.440
The risk is just that we build some systems that are incredibly competent, which means they're always going to get their goals accomplished.
link |
01:49:21.440
Even if they clash with our goals.
link |
01:49:23.440
That's the risk.
link |
01:49:25.440
Why did we humans, you know, drive the West African black rhino extinct?
link |
01:49:31.440
Is it because we're malicious, evil rhinoceros haters?
link |
01:49:35.440
No, it's just because our goals didn't align with the goals of those rhinos and tough luck for the rhinos, you know.
link |
01:49:42.440
So the point is just that we don't want to put ourselves in the position of those rhinos by creating something more powerful than us
link |
01:49:51.440
if we haven't first figured out how to align the goals. And I am optimistic.
link |
01:49:55.440
I think we could do it if we worked really hard on it because I spent a lot of time around intelligent entities that were more intelligent than me.
link |
01:50:02.440
My mom and my dad when I was little, and that was fine because their goals were actually aligned with mine quite well.
link |
01:50:11.440
But we've seen today many examples of where the goals of our powerful systems are not so aligned.
link |
01:50:17.440
So those click-through optimization algorithms that have polarized social media, right?
link |
01:50:24.440
They were actually pretty poorly aligned with what was good for democracy, it turned out.
link |
01:50:29.440
And again, almost all the problems we've had in machine learning so far came not from malice, but from poor alignment.
link |
01:50:35.440
And that's exactly what we should be concerned about in the future.
link |
01:50:39.440
Do you think it's possible that with systems like Neuralink and brain computer interfaces, you know, again, thinking of the cosmic scale,
link |
01:50:49.440
Elon's talked about this, but others have as well throughout history, of figuring out the exact mechanism of how to achieve that kind of alignment.
link |
01:50:59.440
So one of them is having a symbiosis with AI, which is like coming up with clever ways where we're like stuck together.
link |
01:51:06.440
And this weird relationship, whether it's biological or in some other kind of way, do you think that's a possibility, having that kind of symbiosis?
link |
01:51:18.440
Or do we instead want to kind of focus on these distinct entities of us humans talking to these intelligible, self-doubting AIs,
link |
01:51:31.440
maybe like Stuart Russell thinks about it, where we're self-doubting and full of uncertainty, and our AI systems are full of uncertainty.
link |
01:51:39.440
We communicate back and forth and in that way achieve symbiosis.
link |
01:51:44.440
I honestly don't know. I would say that because we don't know for sure which, if any, of our ideas will work.
link |
01:51:51.440
But I'm pretty convinced that if we don't get any of these things to work and just barge ahead, then our species is, you know, probably going to go extinct this century.
link |
01:52:03.440
You think this century? You think we're facing this crisis, that it's a 21st century crisis?
link |
01:52:11.440
This century will be remembered, on a hard drive somewhere or maybe by future generations, like,
link |
01:52:20.440
there will be future Future of Life Awards for people that have done something about AI.
link |
01:52:28.440
It could also end even worse, where we're not superseded by AI, and we don't leave any AI behind either.
link |
01:52:34.440
We just totally wipe out, you know, like on Easter Island.
link |
01:52:37.440
Our century is long.
link |
01:52:38.440
No, there are still 79 years left of it.
link |
01:52:42.440
Think about how far we've come just in the last 30 years.
link |
01:52:46.440
So we can talk more about what might go wrong.
link |
01:52:52.440
But you asked me this really good question about what's the best strategy?
link |
01:52:55.440
Is it Neuralink or Russell's approach or whatever?
link |
01:52:59.440
I think, you know, when we did the Manhattan Project, we didn't know if any of our four ideas for enriching uranium and getting out the uranium 235 were going to work.
link |
01:53:12.440
But we felt this was really important to get it before Hitler did.
link |
01:53:16.440
So you know what we did?
link |
01:53:17.440
We tried all four of them here.
link |
01:53:19.440
I think it's analogous: this is the greatest threat that's ever faced our species, and of course US national security by implication.
link |
01:53:28.440
We don't have any method that's guaranteed to work, but we have a lot of ideas.
link |
01:53:34.440
So we should invest pretty heavily in pursuing all of them with an open mind and hope that one of them at least works.
link |
01:53:40.440
The good news is the century is long, you know, and it might take decades until we have artificial general intelligence.
link |
01:53:50.440
So we have some time, hopefully, but it takes a long time to solve these very, very difficult problems.
link |
01:53:57.440
It's actually going to be the most difficult problem we've ever tried to solve as a species.
link |
01:54:01.440
So we have to start now.
link |
01:54:03.440
We don't want to, you know, begin thinking about it the night before some people who've had too much Red Bull switch it on.
link |
01:54:10.440
And coming back to your question, we have to pursue all of these different avenues and see. If you were my investment advisor and I was trying to invest in the future,
link |
01:54:19.440
How do you think the human species is most likely to destroy itself in the century?
link |
01:54:29.440
Yeah, so if the crises, many of the crises we're facing, are really before us within the next hundred years, how do we make explicit, make known, the unknowns and solve those problems, starting with the biggest existential crisis?
link |
01:54:51.440
So as your investment advisor, how are you planning to make money on us destroying ourselves?
link |
01:54:56.440
I don't know.
link |
01:54:57.440
It might be my Russian origins that are somehow involved.
link |
01:55:02.440
At the micro level of detailed strategies, of course, these are unsolved problems.
link |
01:55:08.440
For AI alignment, we can break it into three sub problems that are all unsolved.
link |
01:55:13.440
I think you want first to make machines understand our goals, then adopt our goals and then retain our goals.
link |
01:55:23.440
So hit on all three real quickly.
link |
01:55:27.440
The problem when Andreas Lubitz told his autopilot to fly into the Alps was that the computer didn't even understand anything about his goals, right?
link |
01:55:38.440
It was too dumb.
link |
01:55:40.440
It could have understood, actually, but we would have had to put some effort in as system designers to program in "don't fly into mountains."
link |
01:55:48.440
So that's the first challenge. How do you program into computers human values, human goals?
link |
01:55:55.440
Rather than saying, oh, it's so hard, we should start with the simple stuff, as I said.
link |
01:56:01.440
Self-driving cars, airplanes, just put in all the goals that we all agree on already, and then make a habit of, whenever machines get smart enough to understand one level higher goals, you know, putting those in too.
link |
01:56:16.440
The second challenge is getting them to adopt the goals.
link |
01:56:20.440
It's easy for situations like that where you just program it in, but when you have self learning systems like children, you know, any parent knows that there is a difference between getting our kids to understand what we want them to do and to actually adopt our goals, right?
link |
01:56:37.440
With humans, with children, fortunately, they go through this phase: first they're too dumb to understand what our goals are, and then they have this period of some years when they're both smart enough to understand them and malleable enough that we have a chance to raise them well,
link |
01:56:55.440
and then they become teenagers, and it's kind of too late. We have this window with machines too, but the challenge is, the intelligence might grow so fast that that window is pretty short.
link |
01:57:06.440
So that's a research problem.
link |
01:57:08.440
The third one is how do you make sure they keep the goals?
link |
01:57:11.440
If they keep learning more and getting smarter.
link |
01:57:14.440
Many sci fi movies are about how you have something which initially was aligned, but then things kind of go off keel and, you know, my kids were very, very excited about their Legos when they were little.
link |
01:57:27.440
And now they're just gathering dust in the basement, you know. If we create machines that are really on board with a goal of taking care of humanity, we don't want them to get as bored with us as my kids got with Legos.
link |
01:57:39.440
So this is another research challenge.
link |
01:57:41.440
How can you make some sort of recursively self improving system retain certain basic goals?
link |
01:57:47.440
That said, a lot of adult people still play with Legos, so maybe we succeeded with Legos.
link |
01:57:53.440
I like your optimism.
link |
01:57:55.440
So not all AI systems have to maintain the goals, right?
link |
01:57:58.440
Some just some fraction.
link |
01:58:00.440
Yeah.
link |
01:58:01.440
So there's a lot of talented AI researchers now who have heard of this and want to work on it.
link |
01:58:07.440
Not so much funding for it yet.
link |
01:58:10.440
Of the billions that go into building more powerful AI,
link |
01:58:14.440
it's only a minuscule fraction
link |
01:58:16.440
that goes into safety research. My attitude is generally that we should not try to slow down the technology, but we should greatly accelerate the investment in this sort of safety research.
link |
01:58:24.440
And also make sure it's spent well. This was very embarrassing last year, but you know, the NSF decided to give out six of these big institutes.
link |
01:58:33.440
We got one of them for AI and science.
link |
01:58:35.440
the one you asked me about. Another one was supposed to be for safety research.
link |
01:58:39.440
And they gave it to people studying oceans and climate and stuff.
link |
01:58:44.440
Yeah.
link |
01:58:46.440
So I'm all for studying oceans and climates, but we need to have some money that actually goes into AI safety research also and doesn't just get grabbed.
link |
01:58:53.440
By whatever.
link |
01:58:55.440
That's a fantastic investment.
link |
01:58:57.440
And then at the higher level, you asked this question, OK, what can we do?
link |
01:59:01.440
You know, what are the biggest risks?
link |
01:59:04.440
I think we cannot just consider this to be only a technical problem.
link |
01:59:10.440
Again, because if you solve only the technical problem, can I play with your robot?
link |
01:59:14.440
Yes, please.
link |
01:59:16.440
If we get our machines, you know, to just blindly obey the orders we give them,
link |
01:59:22.440
so that we can always trust they will do what we want,
link |
01:59:25.440
That might be great for the owner of the robot.
link |
01:59:28.440
It might not be so great for the rest of humanity if that person is your least favorite world leader or whatever you imagine, right?
link |
01:59:36.440
So we also have to apply alignment, not just to machines, but to all the other powerful structures.
link |
01:59:44.440
That's why it's so important to strengthen our democracy.
link |
01:59:46.440
Again, as I said, to have institutions make sure that the playing field is not rigged so that corporations are given the right incentives to do the things that both make profit and are good for people.
link |
01:59:58.440
To make sure that countries have incentives to do things that are both good for their people and don't screw up the rest of the world.
link |
02:00:06.440
And this is not just something for AI nerds to geek out on.
link |
02:00:10.440
This is an interesting challenge for political scientists, economists, and so many other thinkers.
link |
02:00:16.440
So one of the magical things that perhaps makes this earth quite unique is that it's home to conscious beings.
link |
02:00:28.440
So you mentioned consciousness.
link |
02:00:30.440
Perhaps as a small aside, because we didn't really get specific about how we might do the alignment; like you said, it's just a really important research problem.
link |
02:00:41.440
But do you think engineering consciousness into AI systems is a possibility?
link |
02:00:49.440
Is something that we might one day do or is there something fundamental to consciousness that is fundamental to humans and humans only?
link |
02:01:02.440
I think it's possible.
link |
02:01:04.440
I think both consciousness and intelligence are information processing, certain types of information processing.
link |
02:01:12.440
And that fundamentally, it doesn't matter whether the information is processed by carbon atoms in neurons and brains or by silicon atoms and so on in our technology.
link |
02:01:26.440
Some people disagree.
link |
02:01:28.440
So this is what you think as a physicist, that consciousness is the same kind of thing; you said consciousness is information processing.
link |
02:01:37.440
So meaning, I think you had a quote of something like it's information knowing itself, that kind of thing.
link |
02:01:47.440
I think consciousness is the way information feels when it's being processed in certain complex ways.
link |
02:01:53.440
We don't know exactly what those complex ways are.
link |
02:01:55.440
It's clear that most of the information processing in our brains does not create an experience.
link |
02:02:01.440
We're not even aware of it.
link |
02:02:03.440
For example, you're not aware of your heartbeat regulation right now, even though it's clearly being done by your body.
link |
02:02:10.440
It's just kind of doing its own thing.
link |
02:02:12.440
When you go jogging, there's a lot of complicated stuff about how you put your foot down.
link |
02:02:17.440
And we know it's hard.
link |
02:02:19.440
That's why robots used to fall over so much.
link |
02:02:21.440
But you're mostly unaware about it.
link |
02:02:23.440
Your brain, your CEO consciousness module just sends an email, hey, I want to keep jogging along this path.
link |
02:02:29.440
The rest is on autopilot.
link |
02:02:31.440
So most of it is not conscious.
link |
02:02:33.440
But somehow some of the information processing is conscious; we don't know exactly which.
link |
02:02:41.440
I think this is a science problem that I hope one day we'll have some equation for or something.
link |
02:02:48.440
So we can be able to build a consciousness detector and say, yeah, here there is some consciousness.
link |
02:02:52.440
Here there is not.
link |
02:02:53.440
Oh, don't boil that lobster because it's feeling pain or it's okay because it's not feeling pain.
link |
02:02:59.440
Right now we treat this as sort of just metaphysics.
link |
02:03:02.440
But it would be very useful in emergency rooms to know if a patient has locked in syndrome and is conscious,
link |
02:03:10.440
or if they are actually just out.
link |
02:03:14.440
And in the future, if you build a very, very intelligent helper robot to take care of you,
link |
02:03:19.440
I think you'd like to know if you should feel guilty about shutting it down,
link |
02:03:23.440
or if it's just like a zombie going through the motions like a fancy tape recorder.
link |
02:03:29.440
And once we can make progress on the science of consciousness and figure out what is conscious and what isn't,
link |
02:03:38.440
then assuming we want to create positive experiences and not suffering,
link |
02:03:47.440
we'll probably choose to build some machines that are deliberately unconscious that do incredibly boring,
link |
02:03:55.440
repetitive jobs in an iron mine somewhere or whatever.
link |
02:03:59.440
And maybe we'll choose to create helper robots for the elderly that are conscious so that people don't just feel creeped out,
link |
02:04:07.440
that the robot is just faking it when it acts like it's sad or happy.
link |
02:04:12.440
Like you said, elderly, I think everybody gets pretty deeply lonely in this world.
link |
02:04:17.440
And so there's a place, I think, for everybody to have a connection with conscious beings,
link |
02:04:21.440
whether they're human or otherwise.
link |
02:04:24.440
But I know for sure that I would, if I had a robot, if I was going to develop any kind of personal emotional connection with it,
link |
02:04:32.440
I would be very creeped out if I knew at an intellectual level that the whole thing was just a fraud.
link |
02:04:36.440
You know, today you can buy a little talking doll for a kid,
link |
02:04:42.440
which will say things and the little child will often think that this is actually conscious
link |
02:04:47.440
and will even tell real secrets to it that then go on the internet, with all sorts of creepy repercussions.
link |
02:04:52.440
You know, I would not want to be just hacked and tricked like this.
link |
02:04:57.440
If I was going to be developing real emotional connections with a robot,
link |
02:05:02.440
I would want to know that this is actually real.
link |
02:05:06.440
It's acting conscious, acting happy because it actually feels it.
link |
02:05:09.440
And I think this is not sci fi.
link |
02:05:12.440
It's possible to measure, to come up with tools.
link |
02:05:15.440
After we understand the science of consciousness, you're saying we'll be able to come up with tools that can measure consciousness
link |
02:05:21.440
and definitively say like this thing is experiencing the things it says it's experiencing.
link |
02:05:27.440
Kind of by definition, if it is a physical phenomena, information processing,
link |
02:05:31.440
and we know that some information processing is conscious and some isn't,
link |
02:05:34.440
well then there is something there to be discovered with the methods of science.
link |
02:05:37.440
Giulio Tononi has stuck his neck out the farthest and written down some equations for a theory.
link |
02:05:43.440
Maybe that's right, maybe it's wrong, we certainly don't know.
link |
02:05:46.440
But I applaud that kind of effort to sort of take this and
link |
02:05:50.440
say this is not just something that philosophers can have beer and muse about,
link |
02:05:56.440
but something we can measure and study. And bringing that back to us,
link |
02:06:00.440
I think what we would probably choose to do, as I said, is, once we can figure this out,
link |
02:06:05.440
choose to be quite mindful about what sort of consciousness, if any, we put in different machines that we have.
link |
02:06:14.440
And certainly, we should not be making machines that suffer without us even knowing it, right?
link |
02:06:23.440
And if at any point someone decides to upload themselves like Ray Kurzweil wants to do, I don't know if you've had him on your show.
link |
02:06:31.440
We agreed, but then COVID happened, so we're waiting it out a little bit.
link |
02:06:34.440
You know, suppose he uploads himself into this robo-Ray, and it talks like him, and acts like him, and laughs like him,
link |
02:06:42.440
and before he powers off his biological body, he would probably be pretty disturbed if he realized that there's no one home.
link |
02:06:49.440
This robot is not having any subjective experience, right?
link |
02:06:53.440
If humanity gets replaced by machine descendants, which do all these cool things, and build spaceships, and go to intergalactic rock concerts,
link |
02:07:05.440
and it turns out that they are all unconscious, just going through the motions.
link |
02:07:11.440
Wouldn't that be like the ultimate zombie apocalypse, right? Just a play for empty benches?
link |
02:07:17.440
Yeah, I have a sense that there's some kind of, once we understand consciousness better,
link |
02:07:22.440
we'll understand that there's some kind of continuum, and it would be a greater appreciation.
link |
02:07:27.440
And we'll probably understand, just like you said, it'd be unfortunate if it's a trick.
link |
02:07:31.440
We'll probably definitely understand that love is indeed a trick that we play on each other, that we humans are.
link |
02:07:38.440
We convince ourselves we're conscious, but we're really, you know, awesome trees and dolphins are all the same kind of consciousness.
link |
02:07:46.440
Can I try to cheer you up a little bit with a philosophical thought here about the love part?
link |
02:07:50.440
Yes, let's do it.
link |
02:07:52.440
You might say, okay, love is just a collaboration enabler, and then maybe you can go and get depressed about that.
link |
02:08:01.440
But I think that would be the wrong conclusion, actually.
link |
02:08:04.440
I know that the only reason I enjoy food is because my genes hacked me, and they don't want me to starve to death,
link |
02:08:13.440
not because they care about me consciously enjoying succulent delights of pistachio ice cream,
link |
02:08:20.440
but they just want me to make copies of them.
link |
02:08:23.440
So in a sense, the whole enjoyment of food is also a scam, like this.
link |
02:08:28.440
But does that mean I shouldn't take pleasure in this pistachio ice cream?
link |
02:08:32.440
I love pistachio ice cream, and I can tell you, I know this as an experimental fact.
link |
02:08:38.440
I enjoy pistachio ice cream every bit as much, even though I scientifically know exactly what kind of scam this was.
link |
02:08:46.440
Your genes really appreciate that you like the pistachio ice cream.
link |
02:08:50.440
Well, but I, my mind appreciates it too, you know, and I have a conscious experience right now.
link |
02:08:55.440
Ultimately, all of my brain is also just something the genes built to copy themselves.
link |
02:09:00.440
But so what, you know, I'm grateful that, yeah, thanks genes for doing this,
link |
02:09:04.440
but, you know, now it's my brain that's in charge here, and I'm going to enjoy my conscious experience.
link |
02:09:09.440
Thank you very much.
link |
02:09:10.440
And not just the pistachio ice cream, but also the love I feel for my amazing wife and all the other delights of being conscious.
link |
02:09:19.440
Actually, Richard Feynman, I think, said this so well.
link |
02:09:24.440
He is also the guy, you know, who really got me into physics.
link |
02:09:28.440
Some artist friend said that, oh, science kind of just is the party pooper.
link |
02:09:34.440
It kind of ruins the fun, right?
link |
02:09:36.440
Like, you have a beautiful flower, says the artist, and then the scientist is going to deconstruct that into just a blob of quarks and electrons.
link |
02:09:43.440
And Feynman pushed back on that in such a beautiful way, which I think also can be used to push back and make you not feel guilty about falling in love.
link |
02:09:52.440
So here's what Feynman basically said.
link |
02:09:54.440
He said to his friend, you know, yeah, I can also, as a scientist, see that this is a beautiful flower.
link |
02:09:59.440
Thank you very much.
link |
02:10:00.440
Maybe I can't draw as good a painting as you because I'm not as talented an artist, but yeah, I can really see the beauty in it.
link |
02:10:06.440
And it just, it also looks beautiful to me.
link |
02:10:08.440
But in addition to that, Feynman said, as a scientist, I see even more beauty that the artist did not see, right?
link |
02:10:16.440
Suppose this is a flower on a blossoming apple tree, you could say this tree has more beauty in it than just the colors and the fragrance.
link |
02:10:25.440
This tree is made of air, Feynman wrote.
link |
02:10:28.440
This is one of my favorite Feynman quotes ever.
link |
02:10:30.440
And it took the carbon out of the air and bound it in using the flaming heat of the sun, you know, to turn the air into tree.
link |
02:10:37.440
And when you burn logs in your fireplace, it's really beautiful to think that this is being reversed.
link |
02:10:44.440
Now the tree is going, the wood is going back into air and in this flaming, beautiful dance of the fire that the artist can see is the flaming light of the sun that was bound in to turn the air into tree.
link |
02:10:58.440
And then the ashes are the little residue that didn't come from the air, that the tree sucked out of the ground, you know.
link |
02:11:03.440
Feynman said, these are beautiful things and science just adds.
link |
02:11:07.440
It doesn't subtract.
link |
02:11:09.440
And I feel exactly that way about love and about pistachio ice cream also.
link |
02:11:15.440
I can understand that there is even more nuance to the whole thing, right?
link |
02:11:20.440
At this very visceral level, you can fall in love just as much as someone who knows nothing about neuroscience.
link |
02:11:27.440
But you can also appreciate this even greater beauty in it.
link |
02:11:32.440
Isn't it remarkable that it came about from this completely lifeless universe, just a hot blob of plasma expanding?
link |
02:11:42.440
And then over the eons, you know, gradually, first the strong nuclear force decided to combine quarks together into nuclei and then the electric force bound in electrons and made atoms.
link |
02:11:53.440
And then they clustered from gravity and you got planets and stars and this and that.
link |
02:11:57.440
And then natural selection came along and the genes had their little thing and you started getting what went from seeming like a completely pointless universe.
link |
02:12:06.440
where we're just trying to increase entropy and approach heat death, into something that looked more goal oriented.
link |
02:12:11.440
Isn't that kind of beautiful?
link |
02:12:13.440
And then this goal-orientedness through evolution got ever more sophisticated.
link |
02:12:18.440
And then you started getting this thing which is kind of like DeepMind's MuZero on steroids. The ultimate self-play is not what DeepMind's AI does against itself to get better at Go.
link |
02:12:31.440
It's what all these little quark blobs did against each other in the game of survival of the fittest.
link |
02:12:38.440
Now, when you had really dumb bacteria living in a simple environment, there wasn't much incentive to get intelligent, but then the life made environment more complex.
link |
02:12:50.440
And then there was more incentive to get even smarter.
link |
02:12:53.440
And that gave the other organisms more incentive to also get smarter.
link |
02:12:57.440
And here we are now, just like MuZero learned to become world master at Go and chess just by playing against itself.
link |
02:13:08.440
All the quarks here on our planet and electrons have created giraffes and elephants and humans and love.
link |
02:13:17.440
I just find that really beautiful.
link |
02:13:20.440
And I think that just adds to the enjoyment of love.
link |
02:13:23.440
It doesn't subtract anything.
link |
02:13:25.440
Do you feel a little more cheerful now?
link |
02:13:27.440
I feel way better.
link |
02:13:28.440
That was incredible.
link |
02:13:30.440
So this self play of quarks, taking back to the beginning of our conversation a little bit, there's so many exciting possibilities about artificial intelligence understanding the basic laws of physics.
link |
02:13:43.440
Do you think AI will help us unlock?
link |
02:13:46.440
There's been quite a bit of excitement throughout the history of physics of coming up with more and more general simple laws that explain the nature of our reality.
link |
02:13:57.440
And then the ultimate of that would be a theory of everything that combines everything together.
link |
02:14:03.440
Do you think it's possible that one day we humans, or perhaps AI systems, will figure out a theory of physics that unifies all the laws of physics?
link |
02:14:16.440
Yeah, I think it's absolutely possible.
link |
02:14:19.440
I think it's very clear that we're going to see a great boost to science.
link |
02:14:24.440
We're already seeing a boost actually from machine learning helping science. AlphaFold was an example, you know, the decades-old protein folding problem.
link |
02:14:33.440
And gradually, yeah, unless we go extinct by doing something dumb like we discussed, I think it's very likely that our understanding of physics will become so good that our technology will no longer be limited by human
link |
02:14:53.440
intelligence, but instead be limited by the laws of physics.
link |
02:14:57.440
So our tech today is limited by what we've been able to invent, right?
link |
02:15:01.440
I think as AI progresses, it'll just be limited by the speed of light and other physical limits, which would mean it's going to be just dramatically beyond, you know, where we are now.
link |
02:15:14.440
Do you think it's a fundamentally mathematical pursuit of trying to understand, like, the laws governing our universe from a mathematical perspective?
link |
02:15:25.440
It's almost like if it's AI, it's exploring the space of like theorems and those kinds of things.
link |
02:15:32.440
Or is there some other more computational ideas, more sort of empirical ideas?
link |
02:15:40.440
It's both, I would say. It's really interesting to look out at the landscape of everything we call science today.
link |
02:15:47.440
So here you come now with this big new hammer, it says machine learning on it and ask, you know, where are there some nails that you can help with here that you can hammer?
link |
02:15:55.440
Ultimately, if machine learning gets to the point that it can do everything better than us, it will be able to help across the whole space of science.
link |
02:16:05.440
But maybe we can anchor it by starting a little bit right now near term and see how we kind of move forward.
link |
02:16:11.440
So like right now, first of all, you have a lot of big data science where, for example, with telescopes, we are able to collect way more data every hour than a grad student can just pour over like in the old times, right?
link |
02:16:28.440
And machine learning is already being used very effectively, even at MIT, to find planets around other stars, to detect exciting new signatures of new particle physics in the sky,
link |
02:16:38.440
to detect the ripples in the fabric of space time that we call gravitational waves caused by enormous black holes crashing into each other halfway across the observable universe.
link |
02:16:49.440
Machine learning is up and running right now, you know, doing all these things, and it's really helping all these experimental fields.
link |
02:16:58.440
There is a separate front of physics, computational physics, which is getting an enormous boost also.
link |
02:17:05.440
So we had to do all our computations by hand, right?
link |
02:17:09.440
People would have these giant books with tables of logarithms and oh my God, it pains me to even think how long time it would have taken to do simple stuff.
link |
02:17:19.440
Then we started to get calculators and computers that could do some basic math for us.
link |
02:17:26.440
Now what we're starting to see is kind of a shift from GOFAI computational physics to neural network computational physics.
link |
02:17:40.440
What I mean by that is most computational physics would be done by humans programming in the intelligence of how to do the computation into the computer.
link |
02:17:52.440
Just as when Gary Kasparov got his posterior kicked by IBM's Deep Blue in chess, humans had programmed in exactly how to play chess.
link |
02:17:59.440
Intelligence came from the humans, it wasn't learned, right?
link |
02:18:03.440
MuZero can beat not only Kasparov in chess, but also Stockfish, which is the best GOFAI chess program, by learning.
link |
02:18:15.440
And we're seeing more of that now, that shift beginning to happen in physics. So let me give you an example.
link |
02:18:22.440
So lattice QCD is an area of physics whose goal is basically to take the periodic table and just compute the whole thing from first principles.
link |
02:18:32.440
This is not the search for theory of everything.
link |
02:18:35.440
We already know the theory that's supposed to produce as output the periodic table, which atoms are stable, how heavy they are, all that good stuff.
link |
02:18:44.440
Their spectral lines. It's a theory, lattice QCD; you can put it on your T-shirt. Our colleague Frank Wilczek got the Nobel Prize for working on it.
link |
02:18:54.440
But the math is just too hard for us to solve. We have not been able to start with these equations and solve them to the extent that we can predict, oh yeah.
link |
02:19:02.440
And then there is carbon, and this is what the spectrum of the carbon atom looks like.
link |
02:19:07.440
But awesome people are building these super computer simulations where you just put in these equations and you make a big cubic lattice of space.
link |
02:19:20.440
Or actually it's a very small lattice because you're going down to the subatomic scale and you try to solve it.
link |
02:19:26.440
But it's just so computationally expensive that we still haven't been able to calculate things as accurately as we measure them in many cases.
link |
02:19:34.440
And now machine learning is really revolutionizing this.
link |
02:19:37.440
So my colleague Phiala Shanahan at MIT, for example, she's been using this really cool machine learning technique called normalizing flows,
link |
02:19:47.440
where she's realized she can actually speed up the calculation dramatically by having the AI learn how to do things faster.
link |
02:19:55.440
Another area like this where we suck up an enormous amount of super computer time to do physics is black hole collisions.
link |
02:20:06.440
So now that we've done the sexy stuff of detecting a bunch of this with LIGO and other experiments, we want to be able to know what we're seeing.
link |
02:20:14.440
And so it's a very simple conceptual problem. It's the two body problem.
link |
02:20:19.440
Newton solved it for classical gravity hundreds of years ago, but the two body problem is still not fully solved.
link |
02:20:27.440
For black holes.
link |
02:20:28.440
Yes. The nice thing with gravity is that they won't just orbit each other forever, these two things.
link |
02:20:33.440
They give off gravitational waves, which makes sure they eventually crash into each other.
link |
02:20:37.440
And the game, what you want to do is you want to figure out, okay, what kind of wave comes out as a function of the masses of the two black holes,
link |
02:20:45.440
as a function of how they're spinning relative to each other, etc.
link |
02:20:49.440
And that is so hard.
link |
02:20:51.440
It can take months of super computer time on massive numbers of cores to do it, you know.
link |
02:20:56.440
Wouldn't it be great if you can use machine learning to greatly speed that up, right?
link |
02:21:03.440
Now you can use the expensive old GOFAI calculation as the truth and then see if machine learning can figure out a smarter, faster way of getting the right answer.
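A minimal sketch of that surrogate idea, with a toy stand-in for the expensive solver (the function, the parameter, and the polynomial fit are all illustrative assumptions, not how real waveform models are built):

```python
import numpy as np

# Surrogate modeling sketch: treat the expensive solver as ground truth on a few
# parameter settings, fit a cheap model to those runs, then query the cheap model.
def expensive_solver(mass_ratio):
    # Stand-in for a simulation that would really take months of supercomputer time.
    return np.sin(3.0 * mass_ratio) / (1.0 + mass_ratio)

train_q = np.linspace(0.1, 1.0, 15)                         # a handful of expensive runs
train_y = np.array([expensive_solver(q) for q in train_q])

surrogate = np.poly1d(np.polyfit(train_q, train_y, deg=6))  # cheap stand-in model

q = 0.37
print("truth:", expensive_solver(q), " surrogate:", surrogate(q))  # should agree closely
```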
link |
02:21:14.440
Yet another area of computational physics; these are probably the big three that suck up the most computer time:
link |
02:21:23.440
Lattice QCD, black hole collisions and cosmological simulations, where you take not a subatomic thing and try to figure out the mass of the proton,
link |
02:21:33.440
but you take something that's enormous and try to look at how all the galaxies get formed in there.
link |
02:21:40.440
There again, there are a lot of very cool ideas right now about how you can use machine learning to do this sort of stuff better.
link |
02:21:49.440
The difference between this and the big data is you kind of make the data yourself, right?
link |
02:21:56.440
And then finally, we're looking over the physical landscape and seeing what can we hammer with machine learning, right?
link |
02:22:02.440
So we talked about experimental data, big data, discovering cool stuff that we humans then look more closely at.
link |
02:22:09.440
Then we talked about taking the expensive computations we're doing now and figuring out how to do them much faster and better with AI.
link |
02:22:18.440
And finally, let's go really theoretical.
link |
02:22:21.440
So things like discovering equations, having deep fundamental insights.
link |
02:22:28.440
This is something closest to what I've been doing in my group.
link |
02:22:33.440
We talked earlier about the whole AI Feynman project, where if you just have some data, how do you automatically discover equations that seem to describe this well,
link |
02:22:42.440
that you can then go back as a human and work with and test and explore.
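A toy sketch of that kind of equation discovery; this is not the actual AI Feynman pipeline, just a brute-force scoring of a few hand-picked candidate formulas against synthetic data, to make the idea concrete:

```python
import numpy as np

# Toy "equation discovery": fit each candidate formula's single parameter to the
# data and keep the formula with the smallest mean squared error.
rng = np.random.default_rng(0)
x = rng.uniform(0.1, 5.0, size=200)
y = 9.8 * x**2 / 2 + rng.normal(scale=0.01, size=x.size)   # synthetic "measurements"

candidates = {
    "a*x":      lambda x, a: a * x,
    "a*x**2":   lambda x, a: a * x**2,
    "a*sin(x)": lambda x, a: a * np.sin(x),
}

best = None
for name, f in candidates.items():
    basis = f(x, 1.0)                       # least-squares fit of the parameter a
    a = float(basis @ y / (basis @ basis))
    err = float(np.mean((y - f(x, a)) ** 2))
    if best is None or err < best[2]:
        best = (name, a, err)

print("best formula:", best[0], "with a =", round(best[1], 2))  # expect a*x**2, a close to 4.9
```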
link |
02:22:47.440
And you asked a really good question also about if this is sort of a search problem in some sense.
link |
02:22:54.440
That's very deep, actually, what you said, because it is.
link |
02:22:57.440
Suppose I asked you to prove some mathematical theorem.
link |
02:23:02.440
What is a proof in math? It's just a long string of steps, logical steps that you can write out with symbols.
link |
02:23:08.440
And once you find it, it's very easy to write a program to check whether it's a valid proof or not.
link |
02:23:15.440
So why is it so hard to prove it then?
link |
02:23:17.440
Well, because there are ridiculously many possible candidate proofs you could write down, right?
link |
02:23:22.440
If the proof contains 10,000 symbols, even if there are only 10 options for what each symbol could be, that's 10 to the power of 10,000 possible proofs,
link |
02:23:33.440
which is way more than there are atoms in our universe, right?
link |
02:23:36.440
So you could say it's trivial to prove these things.
link |
02:23:38.440
You just write a computer program to generate all strings, and then check: is this a valid proof? No.
link |
02:23:44.440
Is this a valid proof? No.
link |
02:23:47.440
And then you just keep doing this forever.
link |
02:23:51.440
But it is fundamentally a search problem.
link |
02:23:54.440
You just want to search the space of all strings of symbols to find the one that is the proof, right?
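A toy version of that enumerate-and-check picture, assuming a trivial stand-in for the proof checker, makes the blow-up concrete:

```python
from itertools import product

# The alphabet and check() below stand in for proof symbols and a real verifier;
# both are made-up placeholders for illustration.
ALPHABET = "0123456789"

def check(candidate):
    return candidate == "42"        # pretend this verifies a candidate proof

def brute_force(length):
    print(f"searching {len(ALPHABET) ** length} candidates of length {length}")
    for symbols in product(ALPHABET, repeat=length):
        candidate = "".join(symbols)
        if check(candidate):
            return candidate
    return None

print(brute_force(2))  # feasible: only 100 candidates
# brute_force(10000) would mean 10**10000 candidates: hopeless without guidance.
```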
link |
02:24:02.440
And there's a whole area of machine learning called search.
link |
02:24:08.440
How do you search through some giant space to find the needle in the haystack?
link |
02:24:12.440
It's easier in cases where there's a clear measure of good, like you're not just right or wrong,
link |
02:24:18.440
but this is better and this is worse, so you can maybe get some hints as to which direction to go in.
link |
02:24:23.440
That's why, as we talked about, neural networks work so well.
link |
02:24:27.440
I mean, that's such a human thing of that moment of genius of figuring out the intuition of good, essentially.
link |
02:24:37.440
I mean, we thought that that was...
link |
02:24:38.440
What is it?
link |
02:24:39.440
Maybe it's not, right? We thought that about chess, right?
link |
02:24:42.440
Exactly.
link |
02:24:43.440
That the ability to see 10, 15, sometimes 20 steps ahead was not a calculation that humans were performing.
link |
02:24:51.440
It was some kind of weird intuition about different patterns, about board positions, about the relative positions.
link |
02:24:58.440
Exactly.
link |
02:24:59.440
Somehow stitching stuff together and a lot of it is just intuition.
link |
02:25:03.440
But then you have AlphaZero, I guess, be the first one that did the self-play.
link |
02:25:10.440
It just came up with this. It was able to learn, through the self-play mechanism, this kind of intuition.
link |
02:25:15.440
Exactly.
link |
02:25:16.440
But just as you said, it's so fascinating to think whether in the space of totally new ideas, can that be done in developing theorems?
link |
02:25:28.440
We know it can be done by neural networks because we did it with the neural networks in the craniums of the great mathematicians of humanity, right?
link |
02:25:36.440
And I'm so glad you brought up AlphaZero, because that's the counterexample.
link |
02:25:40.440
It turned out we were flattering ourselves when we said intuition is something different.
link |
02:25:45.440
That only humans can do it, that it's not just information processing.
link |
02:25:48.440
If it ever used to be that way, it isn't anymore.
link |
02:25:54.440
It's really instructive, I think, to compare the chess computer Deep Blue, which beat Kasparov, with AlphaZero, the successor of the AlphaGo system that beat Lee Sedol at Go.
link |
02:26:04.440
Because for Deep Blue, there was no intuition.
link |
02:26:08.440
Well, there was some: humans had programmed in some intuition.
link |
02:26:11.440
After humans had played a lot of games, they told the computer, you know, count the pawn as one point, the bishop is three points, the rook is five points, and so on.
link |
02:26:20.440
You add it all up, and then you add some extra points for passed pawns and subtract if the opponent has them, and blah, blah, blah, blah.
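As an illustration of that kind of hand-tuned evaluation (a simplified sketch in the Deep Blue spirit, not Deep Blue's actual code or weights), the heuristic boils down to a weighted material count plus small positional bonuses chosen by humans:

```python
# Hand-crafted evaluation: every number here is chosen by a human, not learned.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}
PASSED_PAWN_BONUS = 0.5  # illustrative figure, not an actual Deep Blue parameter

def evaluate(my_pieces, their_pieces, my_passed_pawns=0, their_passed_pawns=0):
    """Score a position from the mover's point of view: positive means good for us."""
    score = sum(PIECE_VALUES[p] for p in my_pieces)
    score -= sum(PIECE_VALUES[p] for p in their_pieces)
    score += PASSED_PAWN_BONUS * (my_passed_pawns - their_passed_pawns)
    return score

# Up a rook and a pawn, down a bishop, with one extra passed pawn:
print(evaluate(["rook", "pawn"], ["bishop"], my_passed_pawns=1))  # 5 + 1 - 3 + 0.5 = 3.5
```

The tree search then ranks positions by this number; the "intuition" lives entirely in the human-chosen weights.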
link |
02:26:27.440
And then what Deep Blue did was just search.
link |
02:26:32.440
It just very brute force tried many, many moves ahead, all these combinations, in a pruned tree search, and it could think much faster than Kasparov, and it won, right?
link |
02:26:42.440
And that, I think, inflated our egos in a way it shouldn't have, because people started to say, yeah, yeah, it's just brute force search, but it has no intuition.
link |
02:26:51.440
AlphaZero really popped our bubble there, because what AlphaZero does...
link |
02:27:00.440
Yes, it does also do some of that tree search, but it also has this intuition module, which in GeekSpeak is called a value function, where it just looks at the board and comes up with a number for how good is that position.
link |
02:27:14.440
The difference was no human told it how good the position is.
link |
02:27:19.440
It just learned it.
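To contrast with the hand-tuned version above, here is a toy sketch of a learned value function: a tiny model maps board features to a score in (-1, 1) and is nudged toward actual self-play outcomes. This is only an illustrative skeleton under simplifying assumptions, not AlphaZero's network or training procedure.

```python
import numpy as np

class TinyValueNet:
    """Toy learned value function: board features -> scalar in (-1, 1).
    The weights start random and are fit from game outcomes, instead of
    being hand-set the way Deep Blue's piece values were."""

    def __init__(self, n_features: int, lr: float = 0.01, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.1, n_features)
        self.lr = lr

    def value(self, features: np.ndarray) -> float:
        return float(np.tanh(features @ self.w))

    def update(self, features: np.ndarray, outcome: float):
        """Nudge the prediction toward the observed outcome (+1 win, -1 loss)."""
        pred = self.value(features)
        grad = (pred - outcome) * (1.0 - pred**2) * features  # gradient of squared error
        self.w -= self.lr * grad

# After each self-play game, update on the positions that occurred in it.
net = TinyValueNet(n_features=8)
position = np.random.default_rng(1).normal(size=8)  # stand-in board encoding
net.update(position, outcome=+1.0)
print(net.value(position))
```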
link |
02:27:21.440
And MuZero is the coolest or scariest of all, depending on your mood, because the same basic AI system will learn what the good board position is, regardless of whether it's chess or Go or shogi or Pac-Man or Ms. Pac-Man or Breakout or Space Invaders or a bunch of other games.
link |
02:27:44.440
You don't tell it anything and it gets this intuition after a while for what's good.
link |
02:27:49.440
So this is very hopeful for science, I think, because if it can get intuition for what's a good position there, maybe it can also get intuition for what are some good directions to go if you're trying to prove something.
link |
02:28:02.440
One of the most fun things in my science career is when I've been able to prove some theorem about something, and it's very heavily intuition guided, of course. I don't sit and try all random strings. I have a hunch that, you know, this reminds me a little bit of this other proof I've seen for this thing.
link |
02:28:19.440
So maybe I, first, what if I try this? No, that didn't work out. But this reminds me, actually, the way this failed reminds me of that. So combining the intuition with all these brute force capabilities, I think it's going to be able to help physics too.
link |
02:28:38.440
Do you think there will be a day when an AI system, being the primary contributor, let's say 90 percent plus, wins the Nobel Prize in physics? Obviously, they'll give it to the humans, because we humans don't like to give prizes to machines.
link |
02:28:54.440
They'll give it to the humans behind the system. You could argue that AI has already been involved in some Nobel Prizes, probably, maybe something with black holes and stuff like that.
link |
02:29:03.440
Yeah, we don't like giving prizes to other life forms. If someone wins a horse racing contest, they don't give the prize to the horse either.
link |
02:29:11.440
That's true. But do you think we might be able to see something like that in our lifetimes with AI? The first system, I would say, that makes us think seriously about a Nobel Prize is AlphaFold, which is making us think about a Nobel Prize in medicine or physiology,
link |
02:29:31.440
perhaps for discoveries that are a direct result of something discovered by AlphaFold. Do you think in physics we might be able to see that in our lifetimes?
link |
02:29:41.440
I think what's probably going to happen is more of a blurring of the distinctions. So today, if somebody uses a computer to do a computation that gives them the Nobel Prize, nobody's going to dream of giving the prize to the computer.
link |
02:29:57.440
Maybe like that was just a tool. I think for these things also, people are just going to for a long time view the computer as a tool. But what's going to change is the ubiquity of machine learning.
link |
02:30:11.440
I think at some point in my lifetime, finding a human physicist who knows nothing about machine learning is going to be almost as hard as it is today to find a human physicist who says, oh, I don't know anything about computers, or, I don't use math.
link |
02:30:30.440
It would just be a ridiculous concept.
link |
02:30:33.440
But the thing is, there is a magic moment, though, like with AlphaZero, when the system surprises us in a way where the best people in the world truly learn something from the system, in a way where you feel like it's another entity.
link |
02:30:52.440
The way people, the way Magnus Carlsen, the way certain people are looking at the work of AlphaZero, it truly is no longer a tool in the sense that it doesn't feel like a tool. It feels like some other entity.
link |
02:31:08.440
So there is a magic difference where you're like, if an AI system is able to come up with an insight that surprises everybody in some major way that's a phase shift in our understanding of some particular science or some particular aspect of physics,
link |
02:31:29.440
I feel like that is no longer a tool. And then you can start to say that it perhaps deserves the prize.
link |
02:31:38.440
So for sure, the more important, more fundamental transformation of 21st century science is exactly what you're saying, which is that probably everybody will be doing machine learning.
link |
02:31:50.440
To some degree, if you want to be successful at unlocking the mysteries of science, you should be doing machine learning. But it's just exciting to think about whether there'll be a system that comes along that's super surprising, one that makes us question who the real inventors are in this world.
link |
02:32:10.440
Yeah, I think the question isn't if it's going to happen, but when. But it's important also, in my mind, that the time when that happens is more or less the same time when we get artificial general intelligence.
link |
02:32:25.440
And then we have a lot bigger things to worry about than whether it should get the Nobel Prize or not, right? Because when you have machines that can outperform our best scientists at science, they can probably outperform us at a lot of other stuff as well,
link |
02:32:43.440
which can at a minimum, you know, make them incredibly powerful agents in the world, you know. And I think it's a mistake to think we only have to start worrying about loss of control when machines get to AGI across the board where they can do everything, all our jobs.
link |
02:33:01.440
Long before that, they'll be hugely influential. We talked at length about how the hacking of our minds with algorithms trying to get us glued to our screens, right, has already had a big impact on society.
link |
02:33:22.440
That was an incredibly dumb algorithm in the grand scheme of things, right, just supervised machine learning, yet it has had a huge impact. So I just don't want us to be lulled into a false sense of security and think there won't be any societal impact until things reach human level, because it's happening already.
link |
02:33:38.440
And I was just thinking the other week, you know, when I see some scaremonger going, oh, the robots are coming. The implication is always that they're coming to kill us.
link |
02:33:50.440
And maybe you should have worried about that if you were in Nagorno Karabakh during the recent war there. But more seriously, the robots are coming right now, but they're mainly not coming to kill us. They're coming to hack us.
link |
02:34:05.440
They're coming to hack our minds into buying things that maybe we didn't need, into voting for people who may not have our best interests in mind. And it's kind of humbling, I think, actually, as a human being to admit that it turns out that our minds are actually much more hackable than we thought.
link |
02:34:24.440
And the ultimate insult is that we are actually getting hacked by the machine learning algorithms that are in some objective sense much dumber than us, you know. But maybe we shouldn't be so surprised because, you know, how do you feel about the cute puppies?
link |
02:34:40.440
Love them.
link |
02:34:41.440
So, you know, you would probably argue that by some across the board measure, you're more intelligent than they are. But boy, are cute puppies good at hacking us, right? Yeah, they move into our house, persuade us to feed them and do all these things.
link |
02:34:54.440
What do they ever do for us? Yeah, other than being cute and making us feel good, right? So if puppies can hack us, maybe we shouldn't be so surprised if pretty dumb machine learning algorithms can hack us too.
link |
02:35:08.440
Not to speak of cats, which are another level. And to counter your previous point, let us not think there are no evil creatures in this world: we can all agree that cats are as close to objective evil as we can get.
link |
02:35:22.440
But that's just me saying that. Have you seen the cartoon? I think it's maybe from The Onion, with this incredibly cute kitten, and underneath it just says, thinks about murder all day. Exactly.
link |
02:35:41.440
That's accurate. You mentioned offline that there might be a link between post biological AGI and SETI.
link |
02:35:47.440
So last time we talked, you've talked about this intuition that we humans might be quite unique in our galactic neighborhood, perhaps our galaxy, perhaps the entirety of the observable universe.
link |
02:36:06.440
That we might be the only intelligent civilization here. And you argue pretty well for that thought. So I have a few little questions around this. One, the scientific question: if you were wrong in that intuition,
link |
02:36:33.440
in which way do you think you would be surprised? Why were you wrong, if we find out that you ended up being wrong? In which dimension? Is it because we can't see them? Is it because the nature of their intelligence or the nature of their life is totally different from anything we can possibly imagine?
link |
02:36:56.440
Is it something about the great filters and surviving them? Or maybe because we're being protected from their signals? All those explanations for why we haven't heard a big, loud signal, a red light that says, we're here.
link |
02:37:20.440
Yeah. So there are actually two separate things there that I could be wrong about, two separate claims that I made, right? One of them is, I made the claim, I think most civilizations, going from simple bacteria like things to space colonizing civilizations,
link |
02:37:47.440
they spend only a very, very tiny fraction of their life being where we are. That I could be wrong about. The other one I could be wrong about is a quite different statement: I'm guessing that we are the only
link |
02:38:03.440
civilization in our observable universe from which light has reached us so far, that's actually gotten far enough to invent telescopes. So let's talk about maybe both of them in turn because they really are different.
link |
02:38:14.440
The first one: if you look at the n equals one data point we have on this planet, right? We've spent four and a half billion years futzing around on this planet with life, right?
link |
02:38:27.440
And most of it was pretty lame stuff from an intelligence perspective, you know, bacteria. Then things gradually accelerated, right? The dinosaurs spent over 100 million years stomping around here without even inventing smartphones.
link |
02:38:46.440
And then very recently, you know, we've only spent 400 years going from Newton to us, right, in terms of technology. And look at what we've done: even when I was a little kid, there was no internet.
link |
02:39:02.440
So I think it's pretty likely, in the case of this planet, right, that we're either going to really get our act together and start spreading life into space this century, doing all sorts of great things, or we're going to wipe ourselves out.
link |
02:39:18.440
I could be wrong in the sense that maybe what happened on this Earth is very atypical, and for some reason what's more common on other planets is that they spend an enormously long time futzing around with ham radio and things, but they just never really take it to the next level, for reasons
link |
02:39:37.440
I haven't understood. And I'm humble and open to that. But I would bet at least 10 to 1 that our situation is more typical, because the whole thing with Moore's law and accelerating technology, it's pretty obvious why it's happening.
link |
02:39:50.440
Everything that grows exponentially, we call it an explosion, whether it's a population explosion or a nuclear explosion, it's always caused by the same thing. It's that the next step triggers a step after that.
link |
02:40:01.440
Today's technology enables tomorrow's technology, and that enables the next level, and because the technology keeps getting better, of course, the steps can come faster and faster.
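A one-line way to formalize "the next step triggers the step after that" (my gloss on the argument, not Tegmark's wording): if the growth rate of a capability $x$ is proportional to the capability already reached, then

$$\frac{dx}{dt} = k\,x \quad\Longrightarrow\quad x(t) = x(0)\,e^{kt},$$

which is exponential growth, and on any linear scale an exponential looks like an explosion.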
link |
02:40:16.440
On the other question that I might be wrong about, that's the much more controversial one, I think.
link |
02:40:22.440
Before we close out on this thing about the first one, if it's true that most civilizations spend only a very short amount of their total time in the stage, say, between inventing telescopes or mastering electricity and doing space travel,
link |
02:40:43.440
if that's actually generally true, then that should apply also elsewhere out there. So we should be very, very surprised if we find some random civilization and happen to catch them exactly in that very, very short stage.
link |
02:40:58.440
It's much more likely that we find a planet full of bacteria.
link |
02:41:02.440
Yes. Or that we find some civilization that's already post biological and has done some really cool galactic construction projects in their galaxy.
link |
02:41:13.440
Would we be able to recognize them, do you think? Is it possible that we just can't? I mean, this post biological world, could it be just existing in some other dimension?
link |
02:41:23.440
Could it be just all a virtual reality game for them or something? I don't know. That it changes completely where we won't be able to detect?
link |
02:41:32.440
We have to be, honestly, very humble about this. I think I said earlier that the number one principle of being a scientist is you have to be humble and willing to acknowledge that everything we think or guess might be totally wrong.
link |
02:41:44.440
Of course, you could imagine some civilization where they all decide to become Buddhists and very inward looking and just move into their little virtual reality and not disturb the flora and fauna around them and we might not notice them.
link |
02:41:57.440
But this is a numbers game, right? If you have millions of civilizations out there or billions of them, all it takes is one with a more ambitious mentality that decides, hey, we are going to go out and settle
link |
02:42:13.440
a bunch of other solar systems and maybe galaxies. And then it doesn't matter if they're a bunch of quiet Buddhists; we're still going to notice that expansionist one, right?
link |
02:42:22.440
And it seems like quite the stretch to assume that. You know, even in our own galaxy we know there are probably a billion or more planets that are pretty Earth like, and many of them were formed over a billion years before ours.
link |
02:42:38.440
So they had a big head start. If you also assume that life happens kind of automatically on an Earth like planet, I think it's quite the stretch to then say, okay, so there are a billion other civilizations out there that also have our level of tech, and they all decided to become Buddhists,
link |
02:42:58.440
and not a single one decided to go Hitler on the galaxy and say, we need to go out and colonize, and not a single one decided for more benevolent reasons to go out and get more resources.
link |
02:43:10.440
That seems like a bit of a stretch, frankly. And this leads into the second thing you challenged me on, that I might be wrong about: how rare or common life is.
link |
02:43:21.440
You know, so Frank Drake, when he wrote down the Drake equation, multiplied together a huge number of factors, and back then we didn't know any of them.
link |
02:43:29.440
So we know even less about what you get when you multiply together the whole product.
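For reference, the Drake equation estimates the number $N$ of detectable civilizations in our galaxy as a product of factors:

$$N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L,$$

where $R_{*}$ is the rate of star formation, $f_p$ the fraction of stars with planets, $n_e$ the number of habitable planets per such system, $f_l$, $f_i$, $f_c$ the fractions on which life, intelligence, and detectable technology arise, and $L$ the lifetime of a detectable civilization. Uncertainty in any one factor propagates straight into the product, which is the point being made.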
link |
02:43:34.440
Since then, a lot of those factors have become much better known.
link |
02:43:38.440
One of his big uncertainties was how common is it that a solar system even has a planet.
link |
02:43:43.440
Right.
link |
02:43:44.440
Well, now we know that Earth like planets are very common.
link |
02:43:47.440
We know they're a dime a dozen; there are many, many of them, even in our galaxy.
link |
02:43:51.440
At the same time, you know, we have the SETI project and its cousins, and I'm a big supporter of it.
link |
02:43:58.440
And I think we should keep doing this and we've learned a lot.
link |
02:44:01.440
We've learned that so far, all we have are still unconvincing hints, nothing more.
link |
02:44:07.440
Right.
link |
02:44:08.440
And there are certainly many scenarios where it will be dead obvious.
link |
02:44:12.440
If there were 100 million other human like civilizations in our galaxy, it would not be that hard to notice some of them with today's technology.
link |
02:44:22.440
And we haven't.
link |
02:44:23.440
Right.
link |
02:44:24.440
So what we can say is, well, okay, we can rule out that there is a human level civilization on the moon,
link |
02:44:32.440
and in fact in many nearby solar systems, whereas we cannot rule out, of course, that there is something like Earth sitting in a galaxy
link |
02:44:41.440
Five billion light years away.
link |
02:44:45.440
But we've ruled out a lot.
link |
02:44:47.440
And that's already kind of shocking, given that there are all these planets there, you know, so like, where are they?
link |
02:44:52.440
Where are they all?
link |
02:44:53.440
That's the classic Fermi paradox.
link |
02:44:55.440
Yeah.
link |
02:44:56.440
And so my argument, which might be very wrong, is very simple really; it just goes like this.
link |
02:45:02.440
OK, we have no clue about this.
link |
02:45:06.440
The probability of getting life on a random planet could a priori be 10 to the minus 1, or 10 to the minus 10, 10 to the minus 20, 10 to the minus 30, 10 to the minus 40.
link |
02:45:18.440
Basically, every order of magnitude is about equally likely.
link |
02:45:22.440
When you then do the math and ask how close is our nearest neighbor?
link |
02:45:26.440
It's again equally likely that it's 10 to the 10 meters away, 10 to the 20 meters away, 10 to the 30 meters away.
link |
02:45:32.440
We have some nerdy ways of talking about this with Bayesian statistics and a uniform log prior, but that's irrelevant here.
link |
02:45:38.440
This is the simple basic argument.
link |
02:45:41.440
And now comes the data. So we can say, okay, for all these orders of magnitude: 10 to the 26 meters away?
link |
02:45:48.440
That's the edge of our observable universe.
link |
02:45:51.440
If it's farther than that light hasn't even reached us yet.
link |
02:45:54.440
If it's less than 10 to the 16 meters away, well, that's within about a light year of us.
link |
02:46:01.440
That's no farther away than our very nearest stars.
link |
02:46:03.440
We can definitely rule that out.
link |
02:46:05.440
So I think about it like this.
link |
02:46:08.440
A priori, before we looked with telescopes, it could be 10 to the 10 meters, 10 to the 20, 10 to the 30, 10 to the 40, 10 to the 50, 10 to the blah blah blah.
link |
02:46:16.440
Equally likely anywhere here.
link |
02:46:18.440
And now we've ruled out this chunk.
link |
02:46:22.440
And most of it is outside.
link |
02:46:25.440
And here is the edge of our observable universe already.
link |
02:46:28.440
So I'm certainly not saying I don't think there's any life elsewhere in space.
link |
02:46:32.440
If space is infinite, then you're basically 100% guaranteed that there is.
link |
02:46:36.440
But the probability that the nearest neighbor happens to be in this little region, between where we would have seen it already and where we will never see it,
link |
02:46:48.440
is actually significantly less than one, I think.
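A back-of-the-envelope version of that argument (my own sketch, with illustrative bounds loosely based on the orders of magnitude mentioned above, not Tegmark's actual numbers): put a log-uniform prior on the distance to the nearest civilization, condition on not having spotted one in the nearby range, and ask how much of the remaining probability falls inside the observable universe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Log-uniform prior on the nearest-neighbor distance: 10**10 .. 10**50 meters.
# Both bounds are illustrative assumptions for this sketch.
log10_d = rng.uniform(10, 50, size=1_000_000)

RULED_OUT_BELOW = 21   # assume surveys would have noticed anything closer than ~10**21 m
OBSERVABLE_EDGE = 26   # edge of the observable universe, roughly 10**26 m

# Keep only distances consistent with having seen nothing so far.
surviving = log10_d[log10_d > RULED_OUT_BELOW]
p_within_horizon = float(np.mean(surviving < OBSERVABLE_EDGE))

print(f"P(nearest civilization inside our observable universe) ~ {p_within_horizon:.2f}")
# Analytically: (26 - 21) / (50 - 21) ~ 0.17, i.e. significantly less than one.
```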
link |
02:46:51.440
And I think there's a moral lesson from this, which is really important.
link |
02:46:55.440
Which is to be good stewards of this planet and this shot we've had.
link |
02:47:01.440
It can be very dangerous to say, oh, it's fine if we nuke our planet or ruin the climate or mess it up with unaligned AI.
link |
02:47:10.440
Because I know there is this nice Star Trek fleet out there.
link |
02:47:15.440
They're going to swoop in and take over where we failed.
link |
02:47:18.440
It wasn't a big deal that the Easter Island losers wiped themselves out.
link |
02:47:23.440
It's a dangerous way of lulling yourself into a false sense of security.
link |
02:47:27.440
If it's actually the case that it might be up to us and only us, the whole future of intelligent life in our observable universe, then I think it really puts a lot of responsibility on our shoulders.
link |
02:47:43.440
It's a little bit terrifying, but it's also inspiring.
link |
02:47:46.440
But it's empowering, I think, most of all.
link |
02:47:48.440
Because the biggest problem today is, and I see this even when I teach:
link |
02:47:52.440
So many people feel that it doesn't matter what they do or we do.
link |
02:47:57.440
We feel disempowered.
link |
02:47:58.440
Oh, it makes no difference.
link |
02:48:02.440
This is about as far from that as you can come: to realize that what we do on our little spinning ball here in our lifetime could make the difference for the entire future of life in our universe.
link |
02:48:16.440
How empowering is that?
link |
02:48:18.440
Yeah, survival of consciousness.
link |
02:48:22.440
A very similar kind of empowering aspect of the Drake equation is, say there is a huge number of intelligent civilizations that spring up everywhere.
link |
02:48:32.440
But because of the last factor in the Drake equation, the lifetime of a civilization, maybe many of them hit a wall.
link |
02:48:39.440
And just like you said, it's clear that for us, the great filter, the one possible great filter seems to be coming in the next 100 years.
link |
02:48:50.440
So it's also empowering to say, okay, well, we have a chance to not hit that wall.
link |
02:48:57.440
I mean, the way great filters work, they just get most of them.
link |
02:49:01.440
Exactly.
link |
02:49:02.440
Nick Bostrom has articulated this really beautifully too.
link |
02:49:05.440
You know, every time yet another search for life on Mars comes back negative or something, I'm like, yes!
link |
02:49:13.440
Yes!
link |
02:49:14.440
Our odds of surviving just got better.
link |
02:49:17.440
You already made the argument in broad brush there, right?
link |
02:49:20.440
But just to unpack it, right?
link |
02:49:22.440
The point is, we already know there is a crap ton of planets out there that are Earth like.
link |
02:49:29.440
And we also know that most of them do not seem to have anything like our kind of life on them.
link |
02:49:34.440
So what went wrong?
link |
02:49:36.440
There's clearly at least one filter, one roadblock, somewhere along the evolutionary path in going from no life to spacefaring life.
link |
02:49:44.440
And where is it?
link |
02:49:47.440
Is it in front of us or is it behind us, right?
link |
02:49:50.440
If there's no filter behind us and we keep finding all sorts of little mice on Mars and whatever, right?
link |
02:50:01.440
That's actually very depressing because that makes it much more likely that the filter is in front of us.
link |
02:50:05.440
And what actually is going on is like the ultimate dark joke that whenever a civilization invents sufficiently powerful tech,
link |
02:50:15.440
you can just set your clock, and then after a while it goes poof for one reason or another and wipes itself out.
link |
02:50:21.440
Wouldn't that be like utterly depressing if we're actually doomed?
link |
02:50:25.440
Whereas if it turns out that there is a great filter early on that for whatever reason seems to be really hard to get to the stage of
link |
02:50:35.440
sexually reproducing organisms or even the first ribosome or whatever, right?
link |
02:50:43.440
Or maybe you have lots of planets with dinosaurs and cows, but for some reason they tend to get stuck there and never invent smartphones.
link |
02:50:50.440
All of those are huge boosts for our own odds, because we've been there, done that, you know.
link |
02:50:58.440
It doesn't matter how hard or unlikely it was that we got past that roadblock, because we already did.
link |
02:51:05.440
And then that makes it likely that the filter is in our own hands. We're not doomed.
link |
02:51:11.440
So that's why I think the fact that life is rare in the universe is not just something that there is some evidence for,
link |
02:51:21.440
but also something we should actually hope for.
link |
02:51:26.440
So that's the end, the mortality, the death of human civilization that we've been discussing, and life maybe prospering beyond any kind of great filter.
link |
02:51:36.440
Do you think about your own death? Does it make you sad that you may not witness some of the...
link |
02:51:45.440
You lead a research group working on some of the biggest questions in the universe, actually, both on the physics and the AI side.
link |
02:51:53.440
Does it make you sad that you may not be able to see some of these exciting things come to fruition that we've been talking about?
link |
02:52:00.440
Of course. Of course it sucks, the fact that I'm going to die. I remember when I was much younger,
link |
02:52:07.440
my dad made this remark that life is fundamentally tragic and I'm like,
link |
02:52:11.440
why are you talking about that again?
link |
02:52:13.440
Many years later, now I feel I totally understand what he means.
link |
02:52:17.440
We grow up, we're little kids, and everything is infinite and it's so cool, and then suddenly we find out that actually, we're going to get game over at some point.
link |
02:52:25.440
you're going to get game over at some point. So of course it's something that's sad.
link |
02:52:35.440
Are you afraid?
link |
02:52:42.440
No, not in the sense that I think anything terrible is going to happen after I die or anything like that.
link |
02:52:48.440
I think it's really going to be game over, but it's more that it makes me very acutely aware of what a wonderful gift it is that I get to be alive right now,
link |
02:52:59.440
and it's a steady reminder to just live life to the fullest and really enjoy it, because it is finite.
link |
02:53:07.440
We all get regular reminders when someone near and dear to us dies that one day it's going to be our turn.
link |
02:53:19.440
It adds this kind of focus. I wonder what it would feel like actually to be an immortal being
link |
02:53:25.440
whether they might even enjoy some of the wonderful things of life a little bit less because there isn't that...
link |
02:53:33.440
finiteness. Do you think that could be a feature, not a bug, the fact that we beings are finite?
link |
02:53:41.440
Maybe there's lessons for engineering and artificial intelligence systems as well that are conscious.
link |
02:53:48.440
Do you think it makes... Is it possible that the reason the pistachio ice cream is delicious is the fact that you're going to die one day
link |
02:53:59.440
and you will not have all the pistachio ice cream that you could eat because of that fact?
link |
02:54:06.440
Well, let me say two things. First of all, it's actually quite profound what you're saying.
link |
02:54:10.440
I do think I appreciate the pistachio ice cream a lot more knowing that there's only a finite number of times I get to enjoy that
link |
02:54:17.440
and I can only remember a finite number of times in the past.
link |
02:54:21.440
Moreover, my life is not so long that it just starts to feel like things are repeating themselves in general.
link |
02:54:28.440
It's so new and fresh.
link |
02:54:30.440
I also think, though, that death is a little bit overrated in the sense that it comes from an outdated view of physics
link |
02:54:43.440
and what life actually is because if you ask, okay, what is it that's going to die exactly?
link |
02:54:49.440
What am I really?
link |
02:54:51.440
When I say I feel sad about the idea of myself dying, am I really sad that the skin cell here is going to die?
link |
02:54:58.440
Of course not because it's going to die next week anyway and I'll grow a new one, right?
link |
02:55:03.440
And it's not any of my cells that I'm associating really with who I really am
link |
02:55:10.440
nor is it any of my atoms or quarks or electrons.
link |
02:55:14.440
In fact, basically all of my atoms get replaced on a regular basis, right?
link |
02:55:20.440
So what is it that's really me from a more modern physics perspective?
link |
02:55:24.440
It's the information and processing in me.
link |
02:55:28.440
That's my memories, that's my values, my dreams, my passion, my love.
link |
02:55:39.440
That's what's really fundamentally me and frankly, not all of that will die when my body dies.
link |
02:55:49.440
Like Richard Feynman, for example, his body died of cancer, you know?
link |
02:55:54.440
But many of his ideas that he felt made him very him actually live on.
link |
02:56:00.440
This is my own little personal tribute to Richard Feynman, right?
link |
02:56:03.440
I try to keep a little bit of him alive in myself.
link |
02:56:06.440
I've even quoted him today, right?
link |
02:56:08.440
Yeah, he almost came alive for a brief moment in this conversation.
link |
02:56:12.440
Yeah, and this honestly gives me some solace.
link |
02:56:16.440
You know, when I work as a teacher, I feel if I can actually share a bit about myself,
link |
02:56:25.440
that my students feel is worthy enough to copy and adopt, some part of the things that I know or believe or aspire to,
link |
02:56:35.440
Now I live on also a little bit in them, right?
link |
02:56:39.440
And so being a teacher is a little bit of what I...
link |
02:56:48.440
That's something also that contributes to making me a little teeny bit less mortal, right?
link |
02:56:55.440
Because at least not all of me is going to die all at once, right?
link |
02:56:59.440
And I find that a beautiful tribute to people we respect.
link |
02:57:02.440
If we can remember them and carry in us the things that we felt was the most awesome about them, right?
link |
02:57:12.440
Then they live on.
link |
02:57:14.440
And I'm getting a bit emotional here, but it's a very beautiful idea you bring up there.
link |
02:57:19.440
I think we should stop with this old fashioned materialism of just equating who we are with our quarks and electrons.
link |
02:57:27.440
There's no scientific basis for that, really.
link |
02:57:30.440
And it's also very uninspiring.
link |
02:57:34.440
Now, if you look a little bit towards the future, right?
link |
02:57:40.440
One thing which really sucks about humans dying is that even though some of their teachings and memories and stories and ethics and so on
link |
02:57:49.440
will be copied by those around them, hopefully, a lot of it can't be copied and just dies with them, with their brain.
link |
02:57:55.440
And that really sucks. That's the fundamental reason why we find it so tragic when someone goes from having all this information there to it just being gone, ruined, right?
link |
02:58:07.440
With more post biological intelligence, that's going to shift a lot, right?
link |
02:58:14.440
The only reason it's so hard to make a backup of your brain in its entirety is exactly because it wasn't built for that, right?
link |
02:58:21.440
If you have a future machine intelligence, there's no reason why it has to die at all; if it wants to, it can copy itself into some other quark blob, right?
link |
02:58:36.440
You can copy not just some of it, but all of it, right?
link |
02:58:39.440
And so in that sense, you can get immortality because all the information can be copied out of any individual entity.
link |
02:58:51.440
And it's not just mortality that will change if we get more post biological life.
link |
02:58:56.440
It's also with that very much the whole individualism we have now, right?
link |
02:59:03.440
The reason that we make such a big difference between me and you is exactly because we're a little bit limited in how much we can copy.
link |
02:59:10.440
Like, I would just love to go like this and copy your Russian skills, Russian speaking skills.
link |
02:59:16.440
Wouldn't it be awesome? But I can't. I have to actually work for years to get better at it.
link |
02:59:23.440
But if we were robots and could just copy and paste freely, then that gets lost completely. It washes away the sense of what mortality is.
link |
02:59:34.440
And also individuality a little bit, right? We would start feeling much more…
link |
02:59:39.440
Maybe we would feel much more collaborative with each other if we can just say,
link |
02:59:44.440
hey, you can give me your Russian and I'll give you whatever. And suddenly you can speak Swedish. Maybe that's a bad trade for you, but whatever else you want from my brain, right?
link |
02:59:54.440
And there have been a lot of sci fi stories about hive minds and so on where experiences can be more broadly shared.
link |
03:00:04.440
And I think we don't… I don't pretend to know what it would feel like to be a super intelligent machine,
link |
03:00:16.440
but I'm quite confident that however it feels about mortality and individuality will be very, very different from how it is for us.
link |
03:00:24.440
Well, for us, mortality and finiteness seems to be pretty important at this particular moment.
link |
03:00:34.440
And so all good things must come to an end just like this conversation, Max.
link |
03:00:39.440
I saw that coming.
link |
03:00:40.440
Sorry, this is the world's worst transition. I could talk to you forever. It's such a huge honor that you've spent time with me.
link |
03:00:48.440
The honor is mine. Thank you so much for getting me essentially to start this podcast by doing the first conversation,
link |
03:00:55.440
and making me fall in love with conversation in itself.
link |
03:01:00.440
And thank you so much for inspiring so many people in the world with your books, with your research, with your talking and with other…
link |
03:01:09.440
like this ripple effect of friends including Elon and everybody else that you inspire. So thank you so much for talking today.
link |
03:01:17.440
Thank you. I feel so fortunate that you're doing this podcast and getting so many interesting voices out there into the ether,
link |
03:01:28.440
and not just the five second sound bites, but so many of the in depth interviews that you do.
link |
03:01:32.440
You really let people go into depth in a way we sorely need in this day and age. And that I got to be number one, I feel super honored.
link |
03:01:41.440
Yeah, you started it. Thank you so much, Max.
link |
03:01:44.440
Thanks for listening to this conversation with Max Tegmark, and thank you to our sponsors, The Jordan Harbinger Show,
link |
03:01:51.440
Four Sigmatic Mushroom Coffee, BetterHelp Online Therapy, and ExpressVPN.
link |
03:01:58.440
So the choice is Wisdom, Caffeine, Sanity, or Privacy. Choose wisely, my friends.
link |
03:02:05.440
And if you wish, click the sponsor links below to get a discount and to support this podcast.
link |
03:02:11.440
And now let me leave you with some words from Max Tegmark.
link |
03:02:14.440
If consciousness is the way that information feels when it's processed in certain ways, then it must be substrate independent.
link |
03:02:23.440
It's only the structure of information processing that matters, not the structure of the matter doing the information processing.
link |
03:02:31.440
Thank you for listening, and hope to see you next time.