Max Tegmark: AI and Physics | Lex Fridman Podcast #155

The following is a conversation with Max Tegmark, his second time on the podcast. In fact, the previous conversation was episode number one of this very podcast. He is a physicist and artificial intelligence researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence. He's also the head of a bunch of other huge, fascinating projects and has written a lot of different things that you should definitely check out. He has been one of the key humans who has been outspoken about long-term existential risks of AI and also its exciting possibilities and solutions to real-world problems. Most recently at the intersection of AI and physics, and also in re-engineering the algorithms that divide us by controlling the information we see and thereby creating bubbles and all other kinds of complex social phenomena that we see today. In general, he's one of the most passionate and brilliant people I have the fortune of knowing. I hope to talk to him many more times on this podcast in the future.

Quick mention of our sponsors: The Jordan Harbinger Show, Four Sigmatic Mushroom Coffee, BetterHelp Online Therapy, and ExpressVPN. So the choices: wisdom, caffeine, sanity, or privacy. Choose wisely, my friends, and if you wish, click the sponsor links below to get a discount and to support this podcast.
As a side note, let me say that many of the researchers in the machine learning and artificial intelligence communities do not spend much time thinking deeply about existential risks of AI. Because our current algorithms are seen as useful but dumb, it's difficult to imagine how they may become destructive to the fabric of human civilization in the foreseeable future. I understand this mindset, but it's very troublesome. To me, this is both a dangerous and uninspiring perspective, reminiscent of a lobster sitting in a pot of lukewarm water that a minute ago was cold. I feel a kinship with this lobster. I believe that already the algorithms that drive our interaction on social media have an intelligence and power that far outstrip the intelligence and power of any one human being. Now really is the time to think about this, to define the trajectory of the interplay of technology and human beings in our society. I think that the future of human civilization very well may be at stake over this very question of the role of artificial intelligence in our society.

If you enjoy this thing, subscribe on YouTube, review it on Apple Podcasts, follow on Spotify, support on Patreon, or connect with me on Twitter. And now, here's my conversation with Max Tegmark.
So people might not know this, but you were actually episode number one of this podcast just a couple of years ago, and now we're back. And it so happens that a lot of exciting things happened in both physics and artificial intelligence, both fields that you're super passionate about. Can we try to catch up on some of the exciting things happening in artificial intelligence, especially in the context of the way it's cracking open the different problems of the sciences?

Yeah, I'd love to, especially now as we start 2021 here. It's a really fun time to think about what were the biggest breakthroughs in AI, not the ones necessarily that media wrote about, but the ones that really matter, and what does that mean for our ability to do better science? What does it mean for our ability to help people around the world? And what does it mean for new problems that they could cause if we're not smart enough to avoid them? So what do we learn, basically, from this?

So one of the amazing things you're a part of is the AI Institute for Artificial Intelligence and Fundamental Interactions. What's up with this institute? What are you working on? What are you thinking about?
The idea is something I'm very on fire with, which is basically AI meets physics. And it's been almost five years now since I shifted my own MIT research from physics to machine learning. And in the beginning, I noticed that a lot of my colleagues, even though they were polite about it, were like kind of, what is Max doing? What is this weird stuff? He's lost his mind. But then gradually, I, together with some colleagues, were able to persuade more and more of the other professors in our physics department to get interested in this. And now we've got this amazing NSF Center, so 20 million bucks for the next five years, MIT, and a bunch of neighboring universities here also. And I noticed now those colleagues who were looking at me funny have stopped asking what the point is of this, because it's becoming more clear.

And I really believe that, of course, AI can help physics a lot to do better physics. But physics can also help AI a lot, both by building better hardware. My colleague, Marin Soljacic, for example, is working on an optical chip for much faster machine learning, where the computation is done not by moving electrons around, but by moving photons around: dramatically less energy use, faster, better. We can also help AI a lot, I think, by having a different set of tools and a different, maybe more audacious attitude. AI has, to a significant extent, been an engineering discipline where you're just trying to make things that work and being more interested in maybe selling them than in figuring out exactly how they work and proving theorems about that they will always work.
Contrast that with physics. When Elon Musk sends a rocket to the International Space Station, they didn't just train with machine learning: oh, let's fire it a little bit more to the left, a bit more to the right, oh, that also missed. No, we figured out Newton's laws of gravitation and other things and got a really deep fundamental understanding. And that's what gives us such confidence in rockets. And my vision is that in the future, all machine learning systems that actually have impact on people's lives will be understood at a really, really deep level. So we trust them, not because some sales rep told us to, but because they've earned our trust. And for really safety-critical things, we can even prove that they will always do what we expect them to do. That's very much the physics mindset.

So it's interesting, if you look at big breakthroughs that have happened in machine learning this year, from dancing robots, it's pretty fantastic. Not just because it's cool, but if you just think about not that many years ago, this YouTube video at this DARPA challenge with the MIT robot comes out of the car and face plants. How far we've come in just a few years. Similarly, AlphaFold 2, crushing the protein folding problem. We can talk more about implications for medical research and stuff. But hey, that's huge progress.
You can look at GPT-3, which can spout off English text that sometimes really, really blows you away. You can look at DeepMind's MuZero, which doesn't just kick our butt in Go and chess and shogi, but also in all these Atari games. And you don't even have to teach it the rules now. What all of those have in common is, besides being powerful, that we don't fully understand how they work. And that's fine if it's just some dancing robots, and the worst thing that can happen is they face plant. Or if they're playing Go, and the worst thing that can happen is that they make a bad move and lose the game. It's less fine if that's what's controlling your self-driving car or your nuclear power plant.

And we've seen already that even though Hollywood had all these movies where they try to make us worry about the wrong things, like machines turning evil, the actual bad things that have happened with automation have not been machines turning evil. They've been caused by overtrust in things we didn't understand as well as we thought we did. Even a very simple automated system like what Boeing put into the 737 MAX killed a lot of people. Was it that that little simple system was evil? Of course not. But we didn't understand it as well as we should have. And we trusted it without understanding. That's the overtrust. We didn't even understand that we didn't understand. That humility is really at the core of being a scientist. I think step one, if you want to be a scientist, is don't ever fool yourself into thinking you understand things when you actually don't.
That's probably good advice for humans in general.

I think humility in general can do us good. But in science, it's so spectacular. Why did we have the wrong theory of gravity from Aristotle onward until Galileo's time? Why would we believe something so dumb as that if I throw this water bottle, it's going to go up with constant speed until it realizes that its natural motion is down, and changes its mind? Because people just kind of assumed Aristotle was right. He's an authority. We understand that. Why did we believe things like that the sun is going around the Earth? Why did we believe that time flows at the same rate for everyone until Einstein? Same exact mistake over and over again. We just weren't humble enough to acknowledge that we actually didn't know for sure. We assumed we knew. So we didn't discover the truth because we assumed there was nothing there to be discovered, right? There was something to be discovered about the 737 MAX. And if we had been a bit more suspicious and tested it better, we would have found it. And it's the same thing with most harm that's been done by automation so far, I would say.

So I don't know if you've heard of a company called... That means you didn't invest in them earlier. They deployed this automated trading system, all nice and shiny. They didn't understand it as well as they thought. And it went on to lose $10 million per minute for 44 minutes straight, until someone presumably was like, oh no, shut this off. It was, again, misplaced trust in something they didn't fully understand, right?
And there have been so many cases, even when people have been killed by robots, which is quite rare still, but in factory accidents, it's in every single case been not malice, just that the robot didn't understand that a human is different from an auto part or whatever. So this is why I think there's so much opportunity for a physics approach, where you just aim for a higher level of understanding. And if you look at all these systems that we talked about, from reinforcement learning systems and dancing robots to all these neural networks that power GPT-3 and Go-playing software and stuff, they're all basically black boxes, not so different from if you teach a human something, you have no idea how their brain works, right? Except the human brain, at least, has been error-corrected during many, many centuries of evolution in a way that some of these systems have not. And my MIT research is entirely focused on demystifying this black box. Intelligible intelligence.

That's a good line, intelligible intelligence.

Yeah, that we shouldn't settle for something that seems intelligent, but it should be intelligible, so that we actually trust it because we understand it, right? Like, again, Elon trusts his rockets because he understands Newton's laws and thrust and how everything works. And can I tell you why I'm optimistic about this? I think we've made a bit of a mistake where some people still think that somehow we're never going to understand neural networks, that we're just going to have to learn to live with this. It's this very powerful black box.
Basically, for those who haven't spent time building their own, it's super simple what happens inside. You send in a long list of numbers, and then you do a bunch of operations on them: multiply by matrices, et cetera, et cetera, and some other numbers come out, the output of it. And then there are a bunch of knobs you can tune. And when you change them, it affects the computation, the input-output relation. And then you just give the computer some definition of good, and it keeps optimizing these knobs until it performs as well as possible. And often, you go like, wow, that's really good. This robot can dance, or this machine is beating me at chess now. And in the end, you have something which, even though you can look inside it, you have very little idea of how it works. You can print out tables of all the millions of parameters. Is it crystal clear now how it's working? No, of course not.
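To make that description concrete, here is a minimal sketch of exactly that picture in Python with numpy; the layer sizes and numbers are arbitrary illustrative choices, not anything from the conversation.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "knobs": entries of two matrices, here initialized at random.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(1, 8))

def network(x):
    # A long list of numbers goes in, gets multiplied by matrices
    # (with a simple nonlinearity in between), and numbers come out.
    hidden = np.tanh(W1 @ x)
    return W2 @ hidden

x = rng.normal(size=4)   # the input: "a long list of numbers"
print(network(x))        # the output: "some other numbers come out"
```

Printing W1 and W2 is the "table of all the parameters" he mentions: every number is visible, yet the table alone tells you almost nothing about what the function does.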
Many of my colleagues seem willing to settle for that, and I'm like, no, that's like the halfway point. Some have even gone as far as sort of guessing that the inscrutability of this is where some of the power comes from, some sort of mysticism. I think that's total nonsense. I think the real power of neural networks comes not from inscrutability, but from differentiability. And what I mean by that is simply that the output changes only smoothly if you tweak your knobs. And then you can use all these powerful methods we have for optimization in science. We can just tweak them a little bit and see, did that get better or worse? That's the fundamental idea of machine learning, that the machine itself can keep optimizing until it gets better.

Suppose you wrote this algorithm instead in Python or some other programming language, and then what the knobs did was they just changed random letters in your code. Now it would just epically fail. You change one thing, and instead of saying print, it says something else: syntax error. You don't even know, was that for the better or for the worse, right? This, to me, is what I believe is the fundamental power of neural networks.

And just to clarify, the changing of the different letters in a program would not be a differentiable process. It would make it an invalid program, typically. And then you wouldn't even know, if you changed more letters, if it would make it work again, right?

So that's the magic of neural networks, the differentiability: every setting of the parameters is a program, and you can tell whether it's better or worse, right?
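A toy version of that tweak-and-check loop, sketched in Python (the data and step size are made up for illustration): because the loss changes smoothly with the knobs, nudging each knob tells you a useful direction, which is exactly what mutating random characters in source code cannot do.

```python
import numpy as np

rng = np.random.default_rng(1)
xs = rng.normal(size=100)
ys = 3.0 * xs + 1.0          # the "definition of good": reproduce this data

def loss(w, b):
    return np.mean((w * xs + b - ys) ** 2)

w, b, step, eps = 0.0, 0.0, 1e-2, 1e-6
for _ in range(2000):
    # Tweak each knob a tiny bit and ask: did that get better or worse?
    dw = (loss(w + eps, b) - loss(w, b)) / eps
    db = (loss(w, b + eps) - loss(w, b)) / eps
    w -= step * dw           # move each knob in the direction that helps
    b -= step * db

print(w, b)                  # ends up near 3.0 and 1.0
```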
So you don't like the poetry of the mystery of neural networks as the source of its power?

I generally like poetry, but not here. It's so misleading. And above all, it shortchanges us. It makes us underestimate the good things we can accomplish. So what we've been doing in my group is basically, step one, train the mysterious neural network to do something well. And then, step two, do some additional AI techniques to see if we can now transform this black box into something equally intelligent that you can actually understand. So for example, I'll give you one example, this AI Feynman project that we just published, right? We took the 100 most famous or complicated equations from one of my favorite physics textbooks, in fact, the one that got me into physics in the first place, the Feynman Lectures on Physics.

So you have a formula. Maybe what goes into the formula is six different variables, and then what comes out is one. So then you can make a giant Excel spreadsheet with seven columns. You put in just random numbers for the six columns, for those six input variables, and then you calculate with the formula the seventh column, the output. So maybe the last column is something like the force, some function of the other six. And now the task is, OK, if I don't tell you what the formula was, can you figure that out from looking at the spreadsheet I gave you? This problem is called symbolic regression. If I tell you that the formula is what we call a linear formula, so it's just that the output is a sum of all the inputs times some constants, that's the famous easy problem we can solve. We do it all the time in science and engineering. But the general one, if it's more complicated functions with logarithms or cosines or other math, is a very, very hard one and probably impossible to do fast in general, just because the number of formulas with n symbols grows exponentially, just like the number of passwords you can make grows dramatically with length.
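The easy linear case he mentions can be shown in a few lines; this sketch (with made-up coefficients) builds exactly the seven-column table described above and recovers the hidden constants by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 6))                 # six columns of random inputs
true_c = np.array([2.0, -1.0, 0.5, 3.0, 0.0, 1.5])
y = X @ true_c                                 # hidden linear formula -> column 7

# "The famous easy problem": for a linear formula,
# least squares recovers the constants directly.
c_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(c_hat)                                   # matches true_c
```

The hard, general case is hard precisely because nothing like lstsq exists once logarithms, cosines, and compositions are allowed; the space of candidate formulas explodes combinatorially.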
But we had this idea: if you first have a neural network that can actually approximate the formula, you just trained it, even if you don't understand how it works, that can be the first step towards actually understanding it. So that's what we do first. And then we study that neural network and put in all sorts of other data that wasn't in the original training data, and use that to discover simplifying properties of the formula. And that lets us break it apart, often into many simpler pieces, in a kind of divide-and-conquer approach. So we were able to solve all of those 100 formulas, discover them automatically, plus a whole bunch of others. And it's actually kind of humbling to see that this code, which anyone listening to this who wants to can type pip install AI Feynman on their computer and run it.
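For anyone who wants to try, a hedged sketch of what that looks like: the published package name on PyPI is aifeynman (one word), and the run_aifeynman call below follows my recollection of the project's own example, so treat the exact signature and file names as assumptions to check against the repository.

```python
# pip install aifeynman    # package name as published, worth verifying
import aifeynman

# Hypothetical run: a whitespace-separated data file whose columns are
# the input variables plus a final output column (the "spreadsheet"),
# a time budget for the brute-force stage, and a list of allowed operators.
aifeynman.run_aifeynman("./example_data/", "example1.txt", 60,
                        "14ops.txt", polyfit_deg=3, NN_epochs=500)
```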
It can actually do what Johannes Kepler spent four years doing when he stared at Mars data until he was like, finally, Eureka, this is an ellipse. This will do it automatically for you in one hour. Or Max Planck, he was looking at how much radiation comes out from different wavelengths from a hot object and discovered the famous blackbody formula. This discovers it automatically.
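For reference, the blackbody formula Planck found, written as spectral radiance at frequency $\nu$ and temperature $T$ (the standard form, not something stated in the conversation):

$$B_\nu(T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/(k_B T)} - 1}$$

where $h$ is Planck's constant, $c$ the speed of light, and $k_B$ Boltzmann's constant. Given a table of ($\nu$, $T$, measured radiance) rows, this is exactly the kind of formula symbolic regression has to rediscover.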
I'm actually excited about seeing if we can discover not just old formulas again, but new formulas that no one has seen before.

I do like this process of using kind of a neural network to find some basic insights and then dissecting the neural network to then gain the final, simple theory. So in that way, you're forcing the explainability issue, really trying to analyze the neural network for the things it knows in order to come up with the final beautiful, simple theory underlying the initial system that you were looking at.

And the reason I'm so optimistic that it can be generalized to so much more is because that's exactly what we do as human scientists. Think of Galileo, whom we mentioned, right? I bet when he was a little kid, if his dad threw him an apple, he would catch it. Because he had a neural network in his brain that he had trained to predict the parabolic orbit of apples that are thrown under gravity. If you throw a tennis ball to a dog, it also has this same ability of deep learning to figure out how the ball is going to move and catch it. But Galileo went one step further when he got older. He went back and was like, wait a minute. I can write down a formula for this. y equals x squared, a parabola. And he helped revolutionize physics as we know it, right?

So there was a basic neural network in there from childhood that captured the experiences of observing different kinds of trajectories. And then he was able to go back in with another extra little neural network and analyze all those experiences and be like, wait, there's a deeper rule here.

He was able to distill out in symbolic form what that complicated black box neural network was doing.
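That distillation step has a tiny, concrete analogue: fit a symbolic form to behavior you can already produce. Here the "childhood experience" is simulated with made-up throw parameters, and the Galileo step is a plain degree-2 polynomial fit that recovers the parabola.

```python
import numpy as np

# Simulated experience: noisy observations of a thrown ball's height.
g, v0, h0 = 9.8, 12.0, 1.5        # assumed gravity, launch speed, launch height
t = np.linspace(0, 2, 50)
rng = np.random.default_rng(3)
y = h0 + v0 * t - 0.5 * g * t**2 + rng.normal(0, 0.05, size=50)

# The "Galileo step": distill the behavior into symbolic form,
# a parabola, by fitting a degree-2 polynomial.
coeffs = np.polyfit(t, y, deg=2)
print(coeffs)   # approximately [-4.9, 12.0, 1.5], i.e. [-g/2, v0, h0]
```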
Not only did the formula he got ultimately become more accurate (and similarly, this is how Newton got Newton's laws, which is why Elon can send rockets to the space station now, right?), it's also simpler. And it's so simple that we can actually describe it to our friends and each other, right? We've talked about it just in the context of physics now. But hey, isn't this what we're doing when we're talking to each other also? We go around with our neural networks, just like dogs and cats and chipmunks and blue jays, and we experience things in the world. But then we humans do this additional step on top of that, where we then distill out certain high-level knowledge that we've extracted from this in a way that we can communicate it to each other in symbolic form, in English in this case, right? So if we can do it, and we believe that we are information-processing entities, then we should be able to make machine learning that does it too.

Well, do you think the entire thing could be learned? Because this dissection process, like for AI Feynman, the secondary stage feels like something like reasoning, and the initial step feels more like the more basic kind of differentiable learning. Do you think the whole thing could be differentiable learning? Do you think the whole thing could be basically neural networks on top of each other? It's like turtles all the way down. Could it be neural networks all the way down?

I mean, that's a really interesting question. We know that in your case, it is neural networks all the way down, because that's all you have in your skull: a bunch of neurons doing their thing, right?
But if you ask the question more generally, what algorithms are being used in your brain, I think it's super interesting to compare. I think we've gone a little bit backwards historically, because we humans first discovered good old-fashioned AI, the logic-based AI that we often call GOFAI, for good old-fashioned AI. And then more recently, we did machine learning, because it required bigger computers, so we had to discover it later. So we think of machine learning with neural networks as the modern thing and the logic-based AI as the old-fashioned thing. But if you look at evolution on Earth, it's actually been the other way around. I would say that, for example, an eagle has a better vision system than I have, and dogs are just as good at catching tennis balls as I am. All this stuff which is done by training a neural network and not interpreting it in words is something so many of our animal friends can do, at least as well as us, right? What is it that we humans can do that the chipmunks and the eagles cannot? It's more to do with this logic-based stuff, right, where we can extract out information in symbols, in language, and now even with equations if you're a scientist, right?

So basically what happened was, first we built these computers that could multiply numbers real fast and manipulate symbols, and we felt they were pretty dumb. And then we made neural networks that can see as well as a cat can and do a lot of this inscrutable black-box stuff. What we humans can do also is put the two together.
Yes, in our own brain.

Yes, in our own brain. So if we ever want to get artificial general intelligence that can do all jobs as well as humans can, right, then that's what's going to be required: to be able to combine the neural networks with the symbolic, combine the old AI with the new AI in a good way. We do it in our brains. And there seem to be basically two strategies I see in industry now. One scares the heebie-jeebies out of me, and the other one I find much more encouraging.

Can we break them apart?

The one that scares the heebie-jeebies out of me is this attitude that we're just going to make ever bigger systems that we still don't understand until they can be as smart as humans. What could possibly go wrong? I think it's just such a reckless thing to do. And unfortunately, if we actually succeed as a species in building artificial general intelligence while we still have no clue how it works, I think there's at least a 50% chance we're going to be extinct before too long. It's just going to be an utter epic own goal.

So it's that 44-minute losing-money problem, or the paperclip problem, where we don't understand how it works, and in a matter of seconds it just runs away in some kind of direction that's going to be very problematic.

Even long before you have to worry about the machines themselves somehow deciding to do things that are bad to us, we have to worry about people using machines that are short of AGI in power to do bad things.
I mean, just take a moment, and if anyone is not worried particularly about advanced AI, just take 10 seconds and think about your least favorite leader on the planet right now. Don't tell me who it is. I want to keep this apolitical. But just see the face in front of you, that person, for 10 seconds. Now imagine that that person has this incredibly powerful AI under their control and can use it to impose their will on the whole planet. How does that make you feel?

So can we break that apart just briefly? For the 50% chance that we'll run into trouble with this approach, do you see the bigger worry in that leader, in humans using the system to do damage? Or are you more worried, and I think I'm in this camp, more worried about accidental, unintentional destruction? So humans trying to do good, in a way where everyone agrees it's kind of good, and it's just that they're trying to do good without understanding. Because I think every evil leader in history thought, to some degree, that they were trying to do good. I'm sure Hitler thought he was doing good. I've been reading a lot about Stalin. I'm sure Stalin, he legitimately thought that communism was good for the world, and that he was doing good. I think Mao Zedong thought what he was doing with the Great Leap Forward was good too.

I'm actually concerned about both of those.
I promised to answer this in detail, but before we do that, let me finish answering the first question. Because I told you that there were two different routes we could get to artificial general intelligence, and one scares the hell out of me, which is this one where we build something, we just say bigger neural networks, ever more hardware, and just train the heck out of it with more data, and poof, now it's very powerful. That, I think, is the most unsafe and reckless approach. The alternative to that is the intelligible intelligence approach instead, where we say neural networks are just a tool for the first step, to get the intuition, but then we're going to spend also serious resources on other AI techniques for demystifying this black box and figuring out what it's actually doing, so we can convert it into something that's equally intelligent, but where we actually understand what it's doing. Maybe we can even prove theorems about it: that this car here will never be hacked when it's driving, because here is the proof. There is a whole science of this. It doesn't work for neural networks that are big black boxes, but it works well for certain other kinds of code, right? That approach, I think, is much more promising. That's exactly why I'm working on it, frankly, not just because I think it's cool for science, but because I think the more we understand these systems, the better the chances that we can make them do the things that are good for us, that are actually intended, not unintended.

So you think it's possible to prove things about something as complicated as a neural network?
Well, ideally, there's no reason it has to be a neural network in the end either, right? We discovered Newton's laws of gravity with a neural network, the one in Newton's head. But that's not the way it's programmed into the navigation system of Elon Musk's rocket anymore. It's written in C++, or I don't know what language he uses exactly. And then there are software tools for symbolic verification. DARPA and the US military have done a lot of really great research on this, because they really want to know that when they build weapon systems, they don't just go fire at random or malfunction, right? And there is even a whole operating system kernel called seL4 that's been developed under a DARPA grant, where you can actually mathematically prove that this thing can never be hacked. One day, I hope that will be something you can say about the OS that's running on our laptops too. As you know, we're not there. But I think we should be ambitious, frankly. And if we can use machine learning to help do the proofs and so on as well, then it's much easier to verify that a proof is correct than to come up with the proof in the first place. That's really the core idea here. If someone comes on your podcast and says they proved the Riemann hypothesis or some sensational new theorem, it's much easier for someone else, some smart math grad students, to check, oh, there's an error here on equation five, or, this really checks out, than it was to discover the proof.

Yeah, although some of those proofs are pretty complicated. But yes, it's still nevertheless much easier to verify the proof.
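The asymmetry he's pointing at, that checking is far cheaper than finding, shows up even in toy arithmetic; in this sketch the numbers are arbitrary, chosen small enough that the search still finishes quickly.

```python
# Verifying a claimed factorization is one multiplication;
# discovering the factors takes a search.
n = 104729 * 104723          # a product of two primes

def check(p, q):             # verification: instant
    return p * q == n

def find():                  # discovery: trial division, far more work
    p = 2
    while n % p:
        p += 1
    return p, n // p

p, q = find()
print(p, q, check(p, q))     # 104723 104729 True
```

Proof assistants rest on the same asymmetry: a proposed proof can be checked mechanically step by step, even when finding it took a person (or a machine learning system) enormous effort.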
I love the optimism. Even with the security of systems, there's a kind of cynicism that pervades people who think about this, which is like, oh, it's hopeless. In the same sense, exactly like you're saying with neural networks: oh, it's hopeless to understand them. With security, people are just like, well, there's always going to be attack vectors, ways to attack the system. But you're right, we're just very new with these computational systems. We're new with these intelligent systems. And it's not out of the realm of possibility, just like people came to understand the movement of the stars and the planets and so on, it's entirely possible that within, hopefully soon, but it could be within 100 years, we start to have something like laws of gravity for intelligence, and God forbid, for consciousness too.

I think, of course, if you're selling computers that get hacked a lot, it's in your interest as a company that people think it's impossible to make them safe, so nobody gets the idea of demanding something better. But I want to really inject optimism here. It's absolutely possible to do much better than we're doing now. Your laptop does so much stuff. You don't need the music player to be super safe in your future self-driving car, right? If someone hacks it and starts playing music you don't like, the world won't end. But what you can do is carve out the drive computer that controls your safety and say it must be completely physically decoupled from the entertainment system, and it must physically be such that it can't take over-the-air updates while you're driving. And it can have, ultimately, some operating system on it which is symbolically verified and proven to always do what it's supposed to do, right? We can basically have that, and companies should take that attitude too. They should look at everything they do and ask: what are the few systems in our company that threaten the whole life of the company if they get hacked? And have the highest standards for those, and then they can save money by going for the el cheapo, poorly understood stuff for the rest. This is very feasible, I think.
And coming back to the bigger question you worried about, that there'll be unintentional failures, I think there are two quite separate risks here. We talked a lot about one of them, which is that the goals of the human are noble. The human says, I want this airplane to not crash, because this is not Mohamed Atta now flying the airplane, right? And now there's this technical challenge of making sure that the autopilot is actually gonna behave as the pilot wants. If you set that aside, there's also the separate question: how do you make sure that the goals of the pilot are actually aligned with the goals of the passenger? How do you make sure, very much more broadly, that if we can all agree as a species that we would like things to kind of go well for humanity as a whole, the goals are aligned here?

The alignment problem.

And yeah, there's been a lot of progress, in the sense that there are suddenly huge amounts of research going on about it. I'm very grateful to Elon Musk for giving us that money five years ago so we could launch the first research program on technical AI safety and alignment. There's a lot of stuff happening. But I think we need to do more than just make sure little machines always do what their owners want. That wouldn't have prevented September 11th, if Mohamed Atta had said, okay, autopilot, please fly into the World Trade Center, and it's like, okay. Something like that even happened in a different situation. There was this depressed pilot named Andreas Lubitz, right? He told his Germanwings passenger jet to fly into the Alps. He just told the computer to change the altitude to a hundred meters or something like that.
And you know what the computer said? Okay. And it had the freaking topographical map of the Alps in there, it had GPS, everything. No one had bothered teaching it even the basic kindergarten ethics of, no, we never want airplanes to fly into mountains under any circumstances. And so we have to think beyond just the technical issues and think about how do we align, in general, incentives on this planet for the greater good. So starting with simple stuff like that, every airplane that has a computer in it should be taught whatever kindergarten ethics it's smart enough to understand. Like, no, don't fly into fixed objects if the pilot tells you to do so. Then go on autopilot mode, send an email to the cops, and land at the nearest airport, you know. Any car with a forward-facing camera should just be programmed by the manufacturer so that it will never accelerate into a human, ever. That would avoid things like the Nice attack and many horrible terrorist vehicle attacks where they deliberately did that, right? This was not some sort of thing where, oh, you know, the US and China have different views on it; no, there was not a single car manufacturer in the world, right, who wanted the cars to do this. They just hadn't thought to do the alignment. And if you look more broadly at problems that happen on this planet, the vast majority have to do with poor alignment.

I mean, think about, let's go back really big, because I know you're so good at that.

Let's go big, yeah.
Yeah, so long ago in evolution, we had these genes, and they wanted to make copies of themselves. That's really all they cared about. So some genes said, hey, I'm gonna build a brain on this body I'm in so that I can get better at making copies of myself. And then they decided, for their benefit, to get copied more, to align your brain's incentives with their incentives. So it didn't want you to starve to death, so it gave you an incentive to eat. And it wanted you to make copies of the genes, so it gave you an incentive to fall in love and do all sorts of naughty things to make copies of itself, right? So that was successful value alignment done by the genes. They created something more intelligent than themselves, but they made sure to try to align the values. But then something went a little bit wrong, against the idea of what the genes wanted, because a lot of humans discovered, hey, you know, yeah, we really like this business about sex that the genes have made us enjoy, but we don't wanna have babies right now, so we're gonna hack the genes and use birth control. And, I really feel like drinking a Coca-Cola right now, but I don't wanna get a potbelly, so I'm gonna drink Diet Coke. We have all these things we've figured out, because we're smarter than the genes, how we can actually subvert their intentions. So it's not surprising that we humans now, when we are in the role of these genes, creating other nonhuman entities with a lot of power, have to face the same exact challenge. How do we make other powerful entities have incentives that are aligned with ours, so that they won't hack them?
Corporations, for example, right? We humans decided to create corporations because they can benefit us greatly. Now all of a sudden there's a supermarket; I can go buy food there. I don't have to hunt. Awesome. And then, to make sure that this corporation would do things that were good for us and not bad for us, we created institutions to keep them in check. Like, if the local supermarket sells poisonous food, then the owners of the supermarket have to spend some years reflecting behind bars, right? So we created incentives to align them. But of course, just like we were able to see through this thing and develop birth control, if you're a powerful corporation, you also have an incentive to try to hack the institutions that are supposed to govern you. Because you ultimately, as a corporation, have an incentive to maximize your profit, just like you have an incentive to maximize the enjoyment your brain has, not the copying of your genes. So if they can figure out a way of bribing regulators, then they're gonna do that. In the US, we kind of caught on to that and made laws against corruption and bribery. Then in the late 1800s, Teddy Roosevelt realized that, no, we were still being kind of hacked, because the Massachusetts railroad companies had a bigger budget than the state of Massachusetts, and they were doing a lot of very corrupt stuff. So he did the whole trust-busting thing, to try to align these other nonhuman entities, the companies, again more with the incentives of Americans as a whole.
It's not surprising, though, that this is a battle you have to keep fighting. Now we have even larger companies than we ever had before, and of course, they're gonna try to, again, subvert the institutions. Not because they're evil; I think people make a mistake of thinking about things all too much in terms of good and evil, like arguing about whether corporations are good or evil, or whether robots are good or evil. A robot isn't good or evil; it's a tool. And you can use it for great things, like robotic surgery, or for bad things. And a corporation also is a tool, of course. And if you give good incentives to the corporation, it'll do great things, like start a hospital or a grocery store. If you give it bad incentives, then it's gonna start maybe marketing addictive drugs to people, and you'll have an opioid epidemic, right? So we should not make the mistake of getting into some sort of fairytale, good-evil thing about corporations or robots. We should focus on putting the right incentives in place. My optimistic vision is that if we can do that, then we can really get good things. We're not doing so great with that right now, either on AI, I think, or on other intelligent nonhuman entities, like big companies, right? We just got a new secretary of defense, who's gonna start now in the Biden administration, who was an active member of the board of Raytheon. So, I have nothing against Raytheon. I'm not a pacifist. But there's an obvious conflict of interest if someone is in a job where they decide who they're gonna contract with. And I think somehow we have, maybe we need another Teddy Roosevelt to come along again and say, hey, you know, we want what's good for all Americans, and we need to go do some serious realigning again of the incentives that we're giving to these big companies. And then we're gonna be better off.
It seems that naturally with human beings, just like you beautifully described the history of this whole thing, it all started with the genes, and they're probably pretty upset by all the unintended consequences that happened since. But it seems that it kind of works out, like there's this collective intelligence that emerges at the different levels, and it seems to find, sometimes at the last minute, a way to realign the values, or keep the values aligned. It's almost like it finds a way; different leaders, different humans pop up all over the place that reset the system. Do you, I mean, do you have an explanation why that is? Or is that just survivor bias? And also, is that somehow fundamentally different with AI systems, where you're no longer dealing with something that was a direct, well, maybe companies are the same, a direct byproduct of the evolutionary process?

I think there is one thing which has changed. That's why I'm not all optimistic. That's why I think there's about a 50% chance, if we take the dumb route with artificial intelligence, that humanity will be extinct in this century. First, just the big picture: yeah, companies need to have the right incentives. Even governments, right? We used to have governments where usually there was just some king, who was the king because his dad was the king. And then there were some benefits of having this powerful kingdom, or empire of any sort, because then it could prevent a lot of local squabbles, so at least everybody in that region would stop warring against each other, and the incentives of different cities in the kingdom became more aligned, right? That was the whole selling point. Yuval Noah Harari has a beautiful piece on how empires were collaboration enablers. And then we also, Harari says, invented money for that reason, so we could have better alignment, and we could do trade even with people we didn't know.
So this sort of stuff has been playing out since time immemorial, right? What's changed is that it happens on ever-larger scales. The technology keeps getting better because science gets better, so now we can communicate over larger distances and transport things faster over larger distances. And so the entities get ever bigger, but our planet is not getting bigger anymore. So in the past, you could have one experiment that just totally screwed up, like Easter Island, where they actually managed to have such poor alignment that when the people there went extinct, there was no one else to come back and replace them, right? If Elon Musk doesn't get us to Mars, and then we go extinct on a global scale, then we're not coming back. That's the fundamental difference, and that's a mistake we don't want to make for that reason. In the past, of course, history is full of fiascos, right? But it was never the whole planet. And then, okay, now there's this nice uninhabited land here; some other people could move in and organize things better. This is different.

The second thing, which is also different, is that technology gives us so much more empowerment, right? Both to do good things and also to screw up. In the Stone Age, even if you had someone whose goals were really poorly aligned, like maybe he was really pissed off because his Stone Age girlfriend dumped him, and he just wanted to kill as many people as he could, how many could he really take out with a rock and a stick before he was overpowered, right? Just a handful, right?
Now, with today's technology, if we have an accidental nuclear war between Russia and the US, which we've almost had about a dozen times, and then we have a nuclear winter, it could take out seven billion people, or six billion people, we don't know. So the scale of the damage we can do is bigger. And there's obviously no law of physics that says that technology will never get powerful enough that we could wipe out our species entirely. That would just be fantasy, to think that science is somehow doomed to not get more powerful than that, right? And it's not at all unfeasible in our lifetime that someone could design a designer pandemic which spreads as easily as COVID but just basically kills everybody. We already had smallpox; it killed one third of everybody who got it.

What do you think of the, here's an intuition, maybe it's completely naive, this optimistic intuition I have, and maybe it's a biased experience that I have, but it seems like the most brilliant people I've met in my life all are really fundamentally good human beings. And not naive good; they really wanna do good for the world in a way that, well, maybe is aligned to my sense of what good means. And so I have a sense that among the people that will be defining the very cutting edge of technology, there'll be much more of the ones that are doing good versus the ones that are doing evil. So in that race, I'm optimistic about us always coming up with a solution at the last minute. So if there's an engineered pandemic that has the capability to destroy most of the human civilization, it feels like to me, either leading up to it or as it's going on, we'll be able to rally the collective genius of the human species. I can tell by your smile that you're at least some percentage doubtful. But could that be a fundamental law of human nature? That evolution only creates, like karma is beneficial, good is beneficial, and therefore we'll be all right?
I hope you're right. I would really love it if you're right, if there's some sort of law of nature that says that we always get lucky in the last second with karma. But I prefer not playing it so close and gambling on that. And I think, in fact, it can be dangerous to have too strong a faith in that, because it makes us complacent. Like, if someone tells you you never have to worry about your house burning down, then you're not gonna put in a smoke detector, because why would you need to? Even when it's sometimes very simple precautions, we don't take them. If you're like, oh, the government is gonna take care of everything for us, I can always trust my politicians, then we abdicate our own responsibility. I think it's a healthier attitude to say, yeah, maybe things will work out, but maybe I'm actually gonna have to step up myself and take responsibility. And the stakes are so huge. I mean, if we do this right, we can develop all this ever more powerful technology and cure all diseases and create a future where humanity is healthy and wealthy, not just for the next election cycle, but for billions of years, throughout our universe. That's really worth working hard for, and not just sitting and hoping for some sort of fairytale karma.

Well, I just mean, so you're absolutely right. From the perspective of the individual, like for me, the primary thing should be to take responsibility and to build the solutions that your skillset allows.

Yeah, which is a lot. I think we underestimate often very much how much good we can do. If you or anyone listening to this is completely confident that our government would do a perfect job of handling any future crisis with engineered pandemics or future AI, just reflect a bit on what actually happened in 2020.
Do you feel that the governments by and large around the world have handled this flawlessly?

That's a really sad and disappointing reality that hopefully is a wake-up call for everybody. For the scientists, for the engineers, for the researchers in AI especially, it was disappointing to see how inefficient we were at collecting the right amount of data in a privacy-preserving way, and spreading that data, and utilizing that data to make decisions, all that kind of stuff.

Yeah, I think when something bad happens to me, I made myself a promise many years ago that I would not be a whiner. So when something bad happens to me, of course it's a process of disappointment, but then I try to focus on what did I learn from this that can make me a better person in the future. And there's usually something to be learned when I fail. And I think we should all ask ourselves, what can we learn from the pandemic about how we can do better in the future? And you mentioned there a really good lesson: we were not as resilient as we thought we were, and we were not as prepared, maybe, as we wish we were. You can even see very stark contrasts around the planet. South Korea, they have over 50 million people. Do you know how many deaths they have from COVID, last time I checked? Well, the short answer is that they had prepared. They were incredibly quick, incredibly quick to get on it with very rapid testing and contact tracing and so on, which is why they never had more cases than they could contact trace effectively, right? They never even had to have the kind of big lockdowns we had in the West. But the deeper answer is, it's not that Koreans are just somehow better people. The reason I think they were better prepared was because they had already had a pretty bad hit from SARS, which never became a pandemic, something like 17 years ago, I think. So it was kind of fresh in memory that we need to be prepared for pandemics, so they were, right?

So maybe there is a lesson here for all of us to draw from COVID: rather than just wait for the next pandemic, or the next problem with AI getting out of control, maybe we should just actually set aside a tiny fraction of our GDP to have people very systematically do some horizon scanning and say, okay, what are the things that could go wrong? And let's duke it out and see which are the more likely ones, and which are the ones that are actually actionable, and then be prepared.
link |
So one of the observations as one little ant slash human
link |
that I am of disappointment
link |
is the political division over information
link |
that has been observed, that I observed this year,
link |
that it seemed the discussion was less about
link |
sort of what happened and understanding
link |
what happened deeply and more about
link |
there's different truths out there.
link |
And it's like an argument,
link |
my truth is better than your truth.
link |
And it's like red versus blue or different.
link |
It was like this ridiculous discourse
link |
that doesn't seem to get at any kind of notion of the truth.
link |
It's not like some kind of scientific process.
link |
Even science got politicized in ways
link |
that's very heartbreaking to me.
link |
You have an exciting project on the AI front
link |
of trying to rethink one of the,
link |
you mentioned corporations.
link |
There's one of the other collective intelligence systems
link |
that have emerged through all of this is social networks.
link |
And just the spread
link |
of information on the internet,
link |
our ability to share that information.
link |
There's all different kinds of news sources and so on.
link |
And so you've said, let's, from first principles,
link |
let's rethink how we think about the news,
link |
how we think about information.
link |
Can you talk about this amazing effort
link |
that you're undertaking?
link |
This has been my big COVID project
link |
and nights and weekends on ever since the lockdown.
link |
To segue into this actually,
link |
let me come back to what you said earlier
link |
that you had this hope that in your experience,
link |
people who you felt were very talented
link |
were often idealistic and wanted to do good.
link |
Frankly, I feel the same about all people by and large,
link |
there are always exceptions,
link |
but I think the vast majority of everybody,
link |
regardless of education and whatnot,
link |
really are fundamentally good, right?
link |
So how can it be that people still do so much nasty stuff?
link |
I think it has everything to do with this,
link |
with the information that we're given.
link |
If you go into Sweden 500 years ago
link |
and you start telling all the farmers
link |
that those Danes in Denmark,
link |
they're so terrible people, and we have to invade them
link |
because they've done all these terrible things
link |
that you can't fact check yourself.
link |
A lot of Swedes did that, right?
link |
And we're seeing so much of this today in the world,
link |
both geopolitically, where we are told that China is bad
link |
and Russia is bad and Venezuela is bad,
link |
and people in those countries are often told that we are bad.
link |
And we also see it at a micro level where people are told
link |
that, oh, those who voted for the other party are bad people.
link |
It's not just an intellectual disagreement,
link |
but they're bad people and we're getting ever more divided.
link |
So how do you reconcile this with this intrinsic goodness in people?
link |
I think it's pretty obvious that it has, again,
link |
to do with the information that we're fed and given, right?
link |
We evolved to live in small groups
link |
where you might know 30 people in total, right?
link |
So you then had a system that was quite good
link |
for assessing who you could trust and who you could not.
link |
And if someone told you that Joe there is a jerk,
link |
but you had interacted with him yourself
link |
and seen him in action,
link |
you would quickly realize maybe
link |
that that's actually not quite accurate, right?
link |
But now that most people on the planet
link |
are people we've never met,
link |
it's very important that we have a way
link |
of trusting the information we're given.
link |
And so, okay, so where does the news project come in?
link |
Well, throughout history, you can go read Machiavelli,
link |
from the 1400s, and you'll see how already then
link |
they were busy manipulating people
link |
with propaganda and stuff.
link |
Propaganda is not new at all.
link |
And the incentives to manipulate people
link |
are just not new at all.
link |
What is it that's new?
link |
What's new is machine learning meets propaganda.
link |
That's what's new.
link |
That's why this has gotten so much worse.
link |
Some people like to blame certain individuals,
link |
like in my liberal university bubble,
link |
many people blame Donald Trump and say it was his fault.
link |
I see it differently.
link |
I think Donald Trump just had this extreme skill
link |
at playing this game in the machine learning algorithm age.
link |
A game he couldn't have played 10 years ago.
link |
So what's changed?
link |
What's changed is, well, Facebook and Google
link |
and other companies, and I'm not badmouthing them,
link |
I have a lot of friends who work for these companies,
link |
good people, they deployed machine learning algorithms
link |
just to increase their profit a little bit,
link |
to just maximize the time people spent watching ads.
link |
And they had totally underestimated
link |
how effective they were gonna be.
link |
This was, again, the black box, non intelligible intelligence.
link |
They just noticed, oh, we're getting more ad revenue.
link |
It took a long time until they even realized why and how
link |
and how damaging this was for society.
link |
Because of course, what the machine learning figured out
link |
was that the by far most effective way of gluing you
link |
to your little rectangle was to show you things
link |
that triggered strong emotions, anger, et cetera, resentment,
link |
and whether it was true or not didn't really matter.
link |
It was also easier to find stories that weren't true,
link |
if you weren't limited to the truth.
link |
That's a very limiting factor.
link |
And before long, we got these amazing filter bubbles
link |
on a scale we had never seen before.
link |
Couple that with the fact that the online news media
link |
were so effective that they killed a lot of print journalism.
link |
There are less than half as many journalists
link |
now in America, I believe, as there were a generation ago.
link |
You just couldn't compete with the online advertising.
link |
So all of a sudden, most people are not
link |
even reading newspapers.
link |
They get their news from social media.
link |
And most people only get news in their little bubble.
link |
So along come now some people, like Donald Trump,
link |
who was among the first successful politicians
link |
to figure out how to really play this new game
link |
and become very, very influential.
link |
But I think with Donald Trump, it was simple:
link |
he took advantage of it.
link |
He didn't create the fundamental conditions; those
link |
were created by machine learning taking over the news media.
link |
So this is what motivated my little COVID project here.
link |
So I said before, machine learning and tech in general
link |
is not evil, but it's also not good.
link |
It's just a tool that you can use
link |
for good things or bad things.
link |
And as it happens, machine learning and news
link |
was mainly used by the big players, big tech,
link |
to manipulate people into watching as many ads as possible,
link |
which had this unintended consequence of really screwing
link |
up our democracy and fragmenting it into filter bubbles.
link |
So I thought, well, machine learning algorithms
link |
are basically free.
link |
They can run on your smartphone for free also
link |
if someone gives them away to you, right?
link |
There's no reason why they only have to help the big guy
link |
to manipulate the little guy.
link |
They can just as well help the little guy
link |
to see through all the manipulation attempts.
link |
So this project is called Improve the News;
link |
you can go to improvethenews.org.
link |
The first thing we've built is this little news aggregator.
link |
Looks a bit like Google News,
link |
except it has these sliders on it to help you break out
link |
of your filter bubble.
link |
So if you're reading, you can click, click
link |
and go to your favorite topic.
link |
And then you can just slide the left, right slider
link |
all the way over to the left.
link |
There's two sliders, right?
link |
Yeah, there's the one, the most obvious one
link |
is the one that has left, right labeled on it.
link |
You go to the left, you get one set of articles,
link |
you go to the right, you see a very different truth
link |
Oh, that's literally left and right on the political spectrum.
link |
On the political spectrum.
link |
So if you're reading about immigration, for example,
link |
it's very, very noticeable.
link |
And I think step one always,
link |
if you wanna not get manipulated is just to be able
link |
to recognize the techniques people use.
link |
So it's very helpful to just see how they spin things
link |
I think many people are under the misconception
link |
that the main problem is fake news.
link |
I had an amazing team of MIT students
link |
where we did an academic project to use machine learning
link |
to detect the main kinds of bias over the summer.
link |
And yes, of course, sometimes there's fake news
link |
where someone just claims something that's false, right?
link |
Like, oh, Hillary Clinton just got divorced or something.
link |
But what we see much more of is actually just omissions.
link |
There are some stories which just won't be
link |
mentioned by the left or the right, because it doesn't suit their narrative.
link |
And then they'll mention other ones very, very much.
link |
So for example, we've had a number of stories
link |
about the Trump family's financial dealings.
link |
And then there's been a bunch of stories
link |
about the Biden family's, Hunter Biden's financial dealings.
link |
Surprise, surprise, they don't get equal coverage
link |
on the left and the right.
link |
One side loves to cover the Biden, Hunter Biden's stuff,
link |
and one side loves to cover the Trump stuff.
link |
You can never guess which is which, right?
link |
But the great news is if you're a normal American citizen
link |
and you dislike corruption in all its forms,
link |
then slide, slide, you can just look at both sides
link |
and you'll see all those political corruption stories.
link |
It's really liberating to just take in both sides,
link |
the spin on both sides.
link |
It somehow unlocks your mind to think on your own,
link |
to realize that, I don't know, it's the same thing
link |
that was useful, right, in the Soviet Union times,
link |
when everybody was much more aware
link |
that they were surrounded by propaganda, right?
link |
That is so interesting what you're saying, actually.
link |
So Noam Chomsky, who used to be our MIT colleague,
link |
once said that propaganda is to democracy
link |
what violence is to totalitarianism.
link |
And what he means by that is if you have
link |
a really totalitarian government,
link |
you don't need propaganda.
link |
People will do what you want them to do anyway,
link |
out of fear, right?
link |
But otherwise, you need propaganda.
link |
So I would say actually that the propaganda
link |
is much higher quality in democracies,
link |
much more believable.
link |
And it's really, it's really striking.
link |
When I talk to colleagues, science colleagues
link |
like from Russia and China and so on,
link |
I notice they are actually much more aware
link |
of the propaganda in their own media
link |
than many of my American colleagues are
link |
about the propaganda in Western media.
link |
That means the propaganda in the Western media is even better.
link |
That's so brilliant.
link |
Everything's better in the West, even the propaganda.
link |
But once you realize that,
link |
you realize there's also something very optimistic there
link |
that you can do about it, right?
link |
Because first of all, omissions,
link |
as long as there's no outright censorship,
link |
you can just look at both sides
link |
and pretty quickly piece together
link |
a much more accurate idea of what's actually going on, right?
link |
And develop a natural skepticism too.
link |
Just an analytical scientific mind
link |
about the way you take in information.
link |
And I think, I have to say,
link |
sometimes I feel that some of us in the academic bubble
link |
are too arrogant about this and somehow think,
link |
oh, it's just people who aren't as educated
link |
who get fooled,
link |
when we are often just as gullible also,
link |
we read only our media and don't see through things.
link |
Anyone who looks at both sides like this
link |
and compares a little will immediately start noticing
link |
the shenanigans being pulled.
link |
And I think what I'm trying to counter with this app
link |
is that big tech has to some extent
link |
tried to blame the individual for being manipulated,
link |
much like big tobacco tried to blame the individuals
link |
entirely for smoking.
link |
And then later on, our government stepped up and said,
link |
actually, you can't just blame little kids
link |
for starting to smoke.
link |
We have to have more responsible advertising
link |
and this and that.
link |
I think it's a bit the same here.
link |
It's very convenient for big tech to blame the individual,
link |
to say it's just people who are so dumb and get fooled.
link |
The blame usually comes in saying,
link |
oh, it's just human psychology.
link |
People just wanna hear what they already believe.
link |
But Professor David Rand at MIT actually partly debunked that
link |
with a really nice study showing that people
link |
tend to be interested in hearing things
link |
that go against what they believe,
link |
if it's presented in a respectful way.
link |
Suppose, for example, that you have a company
link |
and you're just about to launch this project
link |
and you're convinced it's gonna work.
link |
And someone says, you know, Lex,
link |
I hate to tell you this, but this is gonna fail.
link |
Would you be like, shut up, I don't wanna hear it.
link |
La, la, la, la, la, la, la, la, la.
link |
You would be interested, right?
link |
And also if you're on an airplane,
link |
back in the pre COVID times,
link |
and the guy next to you
link |
is clearly from the opposite side of the political spectrum,
link |
but is very respectful and polite to you.
link |
Wouldn't you be kind of interested to hear a bit about
link |
how he or she thinks about things?
link |
But it's not so easy to find
link |
respectful disagreement now,
link |
because like, for example, if you are a Democrat
link |
and you're like, oh, I wanna see something
link |
on the other side,
link |
so you just go to Breitbart.com.
link |
And then after the first 10 seconds,
link |
you feel deeply insulted by something.
link |
And then, it's not gonna work.
link |
Or if you take someone who votes Republican
link |
and they go to something on the left,
link |
then they just get very offended very quickly
link |
by them having put a deliberately ugly picture
link |
of Donald Trump on the front page or something.
link |
It doesn't really work.
link |
So this news aggregator also has this nuance slider,
link |
which you can pull to the right
link |
and then sort of make it easier to get exposed
link |
to actually more academic style
link |
or more respectful
link |
portrayals of different views.
link |
And finally, the one kind of bias
link |
I think people are mostly aware of is the left, right,
link |
because it's so obvious,
link |
because both left and right are very powerful here, right?
link |
Both of them have well funded TV stations and newspapers,
link |
and it's kind of hard to miss.
link |
But there's another one, the establishment slider,
link |
which is also really fun.
link |
I love to play with it.
link |
And that's more about corruption.
link |
I love that one. Yes.
link |
Because if you have a society
link |
where almost all the powerful entities
link |
want you to believe a certain thing,
link |
that's what you're gonna read in both the big media,
link |
mainstream media on the left and on the right, of course.
link |
And the powerful companies can push back very hard,
link |
like tobacco companies push back very hard
link |
back in the day when some newspapers
link |
started writing articles about tobacco being dangerous,
link |
so that it was hard to get a lot of coverage
link |
about it initially.
link |
And also if you look geopolitically, right,
link |
of course, in any country, when you read their media,
link |
you're mainly gonna be reading a lot of articles
link |
about how our country is the good guy
link |
and the other countries are the bad guys, right?
link |
So if you wanna have a really more nuanced understanding,
link |
like the Germans used to be told
link |
that the French were the bad guys
link |
and the French used to be told
link |
that the Germans were the bad guys.
link |
Now they visit each other's countries a lot
link |
and have a much more nuanced understanding.
link |
I don't think there's gonna be any more wars
link |
between France and Germany.
link |
But on the geopolitical scale,
link |
there's just as much of it as ever, you know,
link |
big Cold War, now US, China, and so on.
link |
And if you wanna get a more nuanced understanding
link |
of what's happening geopolitically,
link |
then it's really fun to look at this establishment slider
link |
because it turns out there are tons of little newspapers,
link |
both on the left and on the right,
link |
who sometimes challenge the establishment and say,
link |
you know, maybe we shouldn't actually invade Iraq right now.
link |
Maybe this weapons of mass destruction thing is BS.
link |
If you look at the journalism research afterwards,
link |
you can actually see that quite clearly.
link |
Both CNN and Fox were very pro war.
link |
Let's get rid of Saddam.
link |
There are weapons of mass destruction.
link |
Then there were a lot of smaller newspapers.
link |
They were like, wait a minute,
link |
this evidence seems a bit sketchy and maybe we...
link |
But of course they were so hard to find.
link |
Most people didn't even know they existed, right?
link |
Yet it would have been better for American national security
link |
if those voices had also come up.
link |
I think it harmed America's national security actually
link |
that we invaded Iraq.
link |
And arguably there's a lot more interest
link |
in that kind of thinking too, from those small sources.
link |
So like when you say big,
link |
it's more about kind of the reach of the broadcast,
link |
but it's not big in terms of the interest.
link |
I think there's a lot of interest
link |
in that kind of anti establishment
link |
or like skepticism towards, you know,
link |
out of the box thinking.
link |
There's a lot of interest in that kind of thing.
link |
Do you see this news project or something like it
link |
basically taking over the world
link |
as the main way we consume information?
link |
Like how do we get there?
link |
Like how do we, you know?
link |
So, okay, the idea is brilliant.
link |
You're calling it your little project in 2020,
link |
but how does that become the new way we consume information?
link |
I hope, first of all, just to plant a little seed there
link |
because normally the big barrier of doing anything in media
link |
is you need a ton of money, but this costs no money at all.
link |
I've just been paying for it myself.
link |
You pay a tiny amount of money each month to Amazon
link |
to run the thing in their cloud.
link |
There will never be any ads.
link |
The point is not to make any money off of it.
link |
And we just train machine learning algorithms
link |
to classify the articles and stuff.
link |
So it just kind of runs by itself.
link |
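A minimal sketch, assuming made-up headlines, labels, and helper names (this is not the actual improvethenews.org code), of how an aggregator like this might work: train a small classifier that scores each article on a left-right axis, then let a user-facing slider filter the feed.

```python
# Hypothetical sketch, not the actual improvethenews.org implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data with made-up bias labels: 0 = leans left, 1 = leans right.
headlines = [
    "Sweeping climate bill will transform the economy",
    "Border crisis deepens as enforcement is ignored",
    "Minimum wage hike would lift millions from poverty",
    "Tax cuts spur record small business growth",
]
labels = [0, 1, 0, 1]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(headlines), labels)

def slider_filter(articles, slider):
    """Keep articles matching the slider: -1 shows left-leaning,
    +1 shows right-leaning, 0 shows everything."""
    p_right = model.predict_proba(vectorizer.transform(articles))[:, 1]
    lean = 2 * p_right - 1  # map P(right) in [0, 1] to a lean in [-1, 1]
    if slider == 0:
        return list(articles)
    return [a for a, s in zip(articles, lean) if (s > 0) == (slider > 0)]

print(slider_filter(["New tariffs protect American manufacturing jobs"], slider=1))
```

A real system would need far more labeled data and per-source calibration, but the slider mechanics really are about this simple.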
So if it actually gets good enough at some point
link |
that it starts catching on, it could scale.
link |
And if other people carbon copy it
link |
and make other versions that are better,
link |
that's the more the merrier.
link |
I think there's a real opportunity for machine learning
link |
to empower the individual against the powerful players.
link |
As I said in the beginning here, it's
link |
been mostly the other way around so far,
link |
that the big players have the AI and then they tell people,
link |
this is the truth, this is how it is.
link |
But it can just as well go the other way around.
link |
And when the internet was born, actually, a lot of people
link |
had this hope that maybe this will be
link |
a great thing for democracy, make it easier
link |
to find out about things.
link |
And maybe machine learning and things like this
link |
can actually help again.
link |
And I have to say, I think it's more important than ever now
link |
because this is very linked also to the whole future of life
link |
as we discussed earlier.
link |
We're getting this ever more powerful tech.
link |
Frankly, it's pretty clear if you look
link |
on the one or two generation, three generation timescale
link |
that there are only two ways this can end geopolitically.
link |
Either it ends great for all humanity
link |
or it ends terribly for all of us.
link |
There's really no in between.
link |
And we're so stuck in that because technology
link |
And you can't have people fighting
link |
when the weapons just keep getting ever more
link |
powerful indefinitely.
link |
Eventually, the luck runs out.
link |
And right now we have, I love America,
link |
but the fact of the matter is what's good for America
link |
is not opposed in the long term to what's
link |
good for other countries.
link |
It would be if this was some sort of zero sum game
link |
like it was thousands of years ago when the only way one
link |
country could get more resources was
link |
to take land from other countries
link |
because that was basically the resource.
link |
Look at the map of Europe.
link |
Some countries kept getting bigger and smaller,
link |
But then since 1945, there hasn't been any war
link |
in Western Europe.
link |
And they all got way richer because of tech.
link |
So the optimistic outcome is that the big winner
link |
in this century is going to be America and China and Russia
link |
and everybody else because technology just makes
link |
us all healthier and wealthier.
link |
And we just find some way of keeping the peace
link |
But I think, unfortunately, there
link |
are some pretty powerful forces right now
link |
that are pushing in exactly the opposite direction
link |
and trying to demonize other countries, which just makes
link |
it more likely that this ever more powerful tech we're
link |
building is going to be used in disastrous ways.
link |
Yeah, for aggression versus cooperation,
link |
that kind of thing.
link |
Yeah, even look at just military AI now.
link |
It was so awesome to see these dancing robots.
link |
But one of the biggest growth areas in robotics
link |
now is, of course, autonomous weapons.
link |
And 2020 was like the best marketing year
link |
ever for autonomous weapons.
link |
Because in both Libya, in its civil war,
link |
and in Nagorno Karabakh, they made the decisive difference.
link |
And everybody else is watching this.
link |
Oh, yeah, we want to build autonomous weapons, too.
link |
In Libya, you had, on one hand, our ally,
link |
the United Arab Emirates that were flying
link |
their autonomous weapons that they bought from China,
link |
And on the other side, you had our other ally, Turkey,
link |
flying their drones.
link |
And they had no skin in the game,
link |
any of these other countries.
link |
And of course, it was the Libyans who really got screwed.
link |
In Nagorno Karabakh, you had actually, again,
link |
Turkey is sending drones built by this company that
link |
was actually founded by a guy who went to MIT AeroAstro.
link |
So MIT has a direct responsibility
link |
ultimately for this.
link |
And a lot of civilians were killed there.
link |
So because it was militarily so effective,
link |
now suddenly there's a huge push.
link |
Oh, yeah, yeah, let's go build ever more autonomy
link |
into these weapons, and it's going to be great.
link |
And I think, actually, people who
link |
are obsessed about some sort of future Terminator scenario
link |
right now should start focusing on the fact
link |
that we have two much more urgent threats happening
link |
from machine learning.
link |
One of them is the whole destruction of democracy
link |
that we've talked about now, where
link |
our flow of information is being manipulated
link |
by machine learning.
link |
And the other one is that right now,
link |
this is the year when the big and out of control
link |
arms race in lethal autonomous weapons is going to start,
link |
or it's going to stop.
link |
So you have a sense that there is like 2020
link |
was an instrumental catalyst for the autonomous weapons race.
link |
Yeah, because it was the first year when they proved
link |
decisive in the battlefield.
link |
And these ones are still not fully autonomous, mostly.
link |
They're remote controlled, right?
link |
But we could very quickly make things
link |
about the size and cost of a smartphone, which you just put
link |
in the GPS coordinates or the face of the one
link |
you want to kill, a skin color or whatever,
link |
and it flies away and does it.
link |
And the real good reason why the US and all
link |
the other superpowers should put the kibosh on this
link |
is the same reason we decided to put the kibosh on bioweapons.
link |
So we gave the Future of Life Award
link |
that we can talk more about later to Matthew Meselson
link |
from Harvard, for convincing
link |
Nixon to ban bioweapons.
link |
And I asked him, how did you do it?
link |
And he was like, well, I just said, look,
link |
we don't want there to be a $500 weapon of mass destruction
link |
that all our enemies can afford, even nonstate actors.
link |
And Nixon was like, good point.
link |
It's in America's interest that the powerful weapons are all
link |
really expensive, so only we can afford them,
link |
or maybe some more stable adversaries, right?
link |
Nuclear weapons are like that.
link |
But bioweapons were not like that.
link |
That's why we banned them.
link |
And that's why you never hear about them now.
link |
That's why we love biology.
link |
So you have a sense that it's possible for the big power
link |
houses in terms of the big nations in the world
link |
to agree that autonomous weapons is not a race we want to be on,
link |
that it doesn't end well.
link |
Yeah, because we know it's just going
link |
to end in mass proliferation.
link |
And every terrorist everywhere is
link |
going to have these super cheap weapons
link |
that they will use against us.
link |
And our politicians have to constantly worry
link |
about being assassinated every time they go outdoors
link |
by some anonymous little mini drone.
link |
We don't want that.
link |
And even if the US and China and everyone else
link |
could just agree that you can only
link |
build these weapons if they cost at least $10 million,
link |
that would be a huge win for the superpowers
link |
and, frankly, for everybody.
link |
And people often push back and say, well, it's
link |
so hard to prevent cheating.
link |
But hey, you could say the same about bioweapons.
link |
Take any of your MIT colleagues in biology.
link |
Of course, they could build some nasty bioweapon
link |
if they really wanted to.
link |
But first of all, they don't want to
link |
because they think it's disgusting because of the stigma.
link |
And second, even if there's some sort of nutcase who wants to,
link |
it's very likely that some of their grad students
link |
or someone would rat them out because everyone else thinks
link |
it's so disgusting.
link |
And in fact, we now know there was even a fair bit of cheating
link |
on the bioweapons ban.
link |
But no countries used them because it was so stigmatized
link |
that it just wasn't worth revealing that they had cheated.
link |
You talk about drones, but you kind of
link |
think of drones as a remote operation.
link |
Which they are, mostly, still.
link |
But you're not taking the next intellectual step
link |
of where does this go.
link |
You're kind of saying the problem with drones
link |
is that you're removing yourself from direct violence.
link |
Therefore, you're not able to sort of maintain
link |
the common humanity required to make
link |
the proper decisions strategically.
link |
But that's the criticism as opposed to like,
link |
if this is automated, and just exactly as you said,
link |
if you automate it and there's a race,
link |
then the technology's gonna get better and better and better
link |
which means getting cheaper and cheaper and cheaper.
link |
And unlike, perhaps, nuclear weapons
link |
which are connected to resources in a way,
link |
like it's hard to engineer, yeah.
link |
It feels like there's too much overlap
link |
between the tech industry and autonomous weapons
link |
to where you could have smartphone type of cheapness.
link |
If you look at drones, for $1,000,
link |
you can have an incredible system
link |
that's able to maintain flight autonomously for you
link |
and take pictures and stuff.
link |
You could see that going into the autonomous weapons space
link |
But why is that not thought about
link |
or discussed enough in the public, do you think?
link |
You see those dancing Boston Dynamics robots
link |
and everybody has this kind of,
link |
as if this is like a far future.
link |
They have this fear like, oh, this'll be Terminator
link |
in like some, I don't know, unspecified 20, 30, 40 years.
link |
And they don't think about, well, this is like
link |
some much less dramatic version of that
link |
is actually happening now.
link |
It's not gonna be legged, it's not gonna be dancing,
link |
but it already has the capability
link |
to use artificial intelligence to kill humans.
link |
Yeah, the Boston Dynamics legged robots,
link |
I think the reason we imagine them holding guns
link |
is just because you've all seen Arnold Schwarzenegger, right?
link |
That's our reference point.
link |
That's pretty useless.
link |
That's not gonna be the main military use of them.
link |
They might be useful in law enforcement in the future
link |
and then there's a whole debate about,
link |
do you want robots showing up at your house with guns,
link |
robots that will be perfectly obedient
link |
to whatever dictator controls them?
link |
But let's leave that aside for a moment
link |
and look at what's actually relevant now.
link |
So there's a spectrum of things you can do
link |
with AI in the military.
link |
And again, to put my card on the table,
link |
I'm not a pacifist, I think we should have good defense.
link |
So for example, a predator drone is basically
link |
a fancy little remote controlled airplane, right?
link |
There's a human piloting it and the decision ultimately
link |
about whether to kill somebody with it
link |
is made by a human still.
link |
And this is a line I think we should never cross.
link |
There's a current DOD policy.
link |
Again, you have to have a human in the loop.
link |
I think algorithms should never make life
link |
or death decisions, they should be left to humans.
link |
Now, why might we cross that line?
link |
Well, first of all, these are expensive, right?
link |
So for example, when Azerbaijan had all these drones
link |
and Armenia didn't have any, they started trying
link |
to jerry rig cheap little things that could fly around.
link |
But then of course, the Armenians would jam them
link |
or the Azeris would jam them.
link |
And remote control things can be jammed,
link |
that makes them inferior.
link |
Also, there's a bit of a time delay:
link |
if we're piloting something from far away,
link |
speed of light, and the human has a reaction time as well.
link |
It would be nice to eliminate the jamming possibility
link |
and the time delay by having it fully autonomous.
link |
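To make the delay concrete, here is a rough, illustrative estimate; the distance and reaction-time numbers are assumptions, not from the conversation:

```latex
% Round-trip light delay plus human reaction time for a drone piloted
% from d = 10,000 km away (illustrative numbers only):
\[
  t \;\approx\; \frac{2d}{c} + t_{\text{human}}
    \;\approx\; \frac{2 \times 10^{7}\,\text{m}}{3 \times 10^{8}\,\text{m/s}} + 0.25\,\text{s}
    \;\approx\; 0.32\,\text{s}
\]
```

Real links relayed through satellites add even more latency, which is part of the pressure toward full autonomy.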
But then if you do,
link |
now you might be crossing that exact line.
link |
You might program it to just, oh yeah, drone,
link |
go hover over this country for a while
link |
and whenever you find someone who is a bad guy, kill them.
link |
Now the machine is making these sorts of decisions
link |
and some people who defend this still say,
link |
well, that's morally fine because we are the good guys
link |
and we will tell it the definition of bad guy
link |
that we think is moral.
link |
But now it would be very naive to think
link |
that if ISIS buys that same drone,
link |
that they're gonna use our definition of bad guy.
link |
Maybe for them, bad guy is someone wearing
link |
a US army uniform, or maybe there will be some
link |
weird ethnic group who decides that people
link |
of another ethnic group are the bad guys, right?
link |
The thing is human soldiers with all our faults,
link |
we still have some basic wiring in us.
link |
Like, no, it's not okay to kill kids and civilians.
link |
And an autonomous weapon has none of that.
link |
It's just gonna do whatever is programmed.
link |
It's like the perfect Adolf Eichmann on steroids.
link |
They told Adolf Eichmann, you know,
link |
we want you to do this and this and this
link |
to make the Holocaust more efficient.
link |
And he was like, yeah, and off he went and did it, right?
link |
Do we really wanna make machines that are like that,
link |
like completely amoral and we'll take the user's definition
link |
of who is the bad guy?
link |
And do we then wanna make them so cheap
link |
that all our adversaries can have them?
link |
Like what could possibly go wrong?
link |
That's, I think, the core of the whole thing,
link |
the big argument for why we wanna,
link |
this year, really put the kibosh on this.
link |
And I think you can tell there's a lot
link |
of very active debate even going on within the US military
link |
and undoubtedly in other militaries around the world also
link |
about whether we should have some sort
link |
of international agreement to at least require
link |
that these weapons have to be above a certain size
link |
and cost, you know, so that things just don't totally spiral out of control.
link |
And finally, to your question,
link |
is it possible to stop it?
link |
Because some people tell me, oh, just give up, you know.
link |
But again, Matthew Meselson from Harvard, right,
link |
the bioweapons hero, he got exactly this criticism
link |
also with bioweapons.
link |
People were like, how can you check for sure
link |
that the Russians aren't cheating?
link |
And he told me this, I think really ingenious insight.
link |
He said, you know, Max, some people
link |
think you have to have inspections and things
link |
and you have to make sure that you can catch all the cheaters.
link |
You don't need 100%, he said.
link |
1% is usually enough.
link |
Because if it's another big state,
link |
suppose China and the US have signed the treaty drawing
link |
a certain line and saying, yeah, these kind of drones are OK,
link |
but these fully autonomous ones are not.
link |
Now suppose you are China and you have cheated and secretly
link |
developed some clandestine little thing
link |
or you're thinking about doing it.
link |
What's your calculation that you do?
link |
Well, you're like, OK, what's the probability
link |
that we're going to get caught?
link |
If the probability is 100%, of course, we're not going to do it.
link |
But if the probability is 5% that we're going to get caught,
link |
then it's going to be like a huge embarrassment for us.
link |
And we still have our nuclear weapons anyway,
link |
so it doesn't really make an enormous difference in terms
link |
of deterring the US.
link |
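Meselson's insight can be written as a simple expected-value sketch; the symbols here are illustrative, not from the conversation: let p be the probability of being caught, G the gain from cheating, and C the cost of exposure.

```latex
% A cheater's calculus: cheat only if the expected gain beats the expected cost.
\[
  \text{cheat} \iff (1-p)\,G > p\,C
  \quad\Longleftrightarrow\quad p < \frac{G}{G+C}
\]
```

When the stigma cost C dwarfs the marginal gain G, as for a state that already has a nuclear deterrent, even a small detection probability p, far below 100%, tips the decision toward not cheating.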
And that feeds the stigma that you kind of established,
link |
like this fabric, this universal stigma over the thing.
link |
It's very reasonable for them to say, well, we probably shouldn't.
link |
Because if we do and get caught, then the US will know we cheated,
link |
and then they're going to go full tilt with their program
link |
and say, look, the Chinese are cheaters,
link |
and now we have all these weapons against us,
link |
So the stigma alone is very, very powerful.
link |
And again, look what happened with bioweapons.
link |
It's been 50 years now.
link |
When was the last time you read about a bioterrorism attack?
link |
The only deaths I really know about with bioweapons
link |
that have happened were when we Americans managed
link |
to kill some of our own with anthrax,
link |
the idiot who sent it to Tom Daschle and others
link |
in letters, right?
link |
And similarly in Sverdlovsk in the Soviet Union,
link |
they had some anthrax in some lab there.
link |
Maybe they were cheating or who knows,
link |
and it leaked out and killed a bunch of Russians.
link |
I'd say that's a pretty good success, right?
link |
50 years, just two own goals by the superpowers,
link |
And that's why whenever I ask anyone
link |
what they think about biology, they think it's great.
link |
They associate it with new cures for diseases,
link |
maybe a good vaccine.
link |
This is how I want to think about AI in the future.
link |
And I want others to think about AI too,
link |
as a source of all these great solutions to our problems,
link |
not as, oh, AI, oh yeah, that's the reason
link |
I feel scared going outside these days.
link |
Yeah, it's kind of brilliant that bioweapons
link |
and nuclear weapons, we've figured out,
link |
I mean, of course there's still a huge source of danger,
link |
but we figured out some way of creating rules
link |
and social stigma over these weapons
link |
that then creates a stability,
link |
whatever that game theoretic stability is that occurs.
link |
And we don't have that with AI,
link |
and you're kind of screaming from the top of the mountain
link |
about this, that we need to find that
link |
because it's very possible, as you and the
link |
Future of Life Institute Awards pointed out,
link |
that with nuclear weapons,
link |
we could have destroyed ourselves quite a few times.
link |
And it's a learning experience that is very costly.
link |
We gave this Future of Life Award,
link |
we gave it the first time to this guy, Vasily Arkhipov.
link |
Most people haven't even heard of him.
link |
Yeah, can you say who he is?
link |
Vasily Arkhipov, he has, in my opinion,
link |
made the greatest positive contribution to humanity
link |
of any human in modern history.
link |
And maybe it sounds like hyperbole here,
link |
like I'm just over the top,
link |
but let me tell you the story and I think maybe you'll agree.
link |
So during the Cuban Missile Crisis,
link |
we Americans first didn't know
link |
that the Russians had sent four submarines,
link |
but we caught two of them.
link |
And we didn't know that,
link |
so we dropped practice depth charges
link |
on the one that he was on,
link |
trying to force it to the surface.
link |
But we didn't know that this submarine
link |
actually was a nuclear submarine with a nuclear torpedo.
link |
We also didn't know that they had authorization
link |
to launch it without clearance from Moscow.
link |
And we also didn't know
link |
that they were running out of electricity.
link |
Their batteries were almost dead.
link |
They were running out of oxygen.
link |
Sailors were fainting left and right.
link |
The temperature was about 110, 120 Fahrenheit on board.
link |
It was really hellish conditions,
link |
really just a kind of doomsday.
link |
And at that point,
link |
these giant explosions start happening
link |
from the Americans dropping these.
link |
The captain thought World War III had begun.
link |
They decided they were gonna launch the nuclear torpedo.
link |
And one of them shouted,
link |
we're all gonna die,
link |
but we're not gonna disgrace our Navy.
link |
We don't know what would have happened
link |
if there had been a giant mushroom cloud all of a sudden
link |
against the Americans.
link |
But since everybody had their hands on the triggers,
link |
you don't have to be too creative to think
link |
that it could have led to an all out nuclear war,
link |
in which case we wouldn't be having this conversation now.
link |
What actually took place was
link |
they needed three people to approve this.
link |
The captain had said yes.
link |
There was the Communist Party political officer.
link |
He also said, yes, let's do it.
link |
And the third man was this guy, Vasily Arkhipov, and he said no.
link |
For some reason, he was just more chill than the others
link |
and he was the right man at the right time.
link |
I don't want us as a species to rely on the right person
link |
being there at the right time, you know.
link |
We tracked down his family
link |
living in relative poverty outside Moscow.
link |
He had passed away,
link |
so we flew his family to London.
link |
They had never been to the West even.
link |
It was incredibly moving to get to honor them for this.
link |
We gave them a medal, and then the next year,
link |
we gave this Future of Life Award
link |
to Stanislav Petrov.
link |
Have you heard of him?
link |
So he was in charge of the Soviet early warning station,
link |
which was built with Soviet technology
link |
and honestly not that reliable.
link |
It said that there were five US missiles coming in.
link |
Again, if they had launched at that point,
link |
we probably wouldn't be having this conversation.
link |
He decided based on just mainly gut instinct
link |
to just not escalate this.
link |
And I'm very glad he wasn't replaced by an AI
link |
that was just automatically following orders.
link |
And then we gave the third one to Matthew Meselson.
link |
Last year, we gave this award to these guys
link |
who actually use technology for good,
link |
not avoiding something bad, but for something good.
link |
The guys who eliminated this disease,
link |
which was way worse than COVID and had killed
link |
half a billion people in its final century.
link |
So you mentioned it earlier.
link |
COVID on average kills less than 1% of people who get it.
link |
Smallpox, about 30%.
link |
And ultimately, Viktor Zhdanov and Bill Foege,
link |
most of my colleagues have never heard of either of them,
link |
one Russian, one American, they did this amazing effort.
link |
Not only was Zhdanov able to get the US and the Soviet Union
link |
to team up against smallpox during the Cold War,
link |
but Bill Foege came up with this ingenious strategy
link |
for making it actually go all the way
link |
to defeat the disease without funding
link |
for vaccinating everyone.
link |
And as a result, we went from 15 million deaths
link |
from smallpox the year I was born
link |
to zero deaths this year and forever.
link |
Compare that to COVID now,
link |
a little bit short of 2 million, right?
link |
There have been 200 million people,
link |
we estimate, who would have died since then from smallpox
link |
had it not been for this.
link |
So isn't science awesome when you use it for good?
link |
The reason we wanna celebrate these sorts of people
link |
is to remind everyone of this.
link |
Science is so awesome when you use it for good.
link |
And those awards actually, the variety there,
link |
it's a very interesting picture.
link |
So the first two are looking at,
link |
it's kind of exciting to think that these average humans
link |
in some sense, they're products of billions
link |
of other humans that came before them, evolution,
link |
and some little thing, you said gut,
link |
but there's something in there
link |
that stopped the annihilation of the human race.
link |
And that's a magical thing,
link |
but that's like this deeply human thing.
link |
And then there's the other aspect
link |
where that's also very human,
link |
which is to build solutions
link |
to the existential crises that we're facing,
link |
like to build it, to take the responsibility
link |
and to come up with different technologies and so on.
link |
And both of those are deeply human,
link |
the gut and the mind, whatever that is that creates.
link |
The best is when they work together.
link |
Arkhipov, I wish I could have met him, of course,
link |
but he had passed away.
link |
He was really a fantastic military officer,
link |
combining all the best traits
link |
that we in America admire in our military.
link |
Because first of all, he was very loyal, of course.
link |
He never even told anyone about this during his whole life,
link |
even though you think he had some bragging rights, right?
link |
But he just was like, this is just business,
link |
just doing my job.
link |
It only came out later after his death.
link |
And second, the reason he did the right thing
link |
was not because he was some sort of liberal
link |
or some sort of, not because he was just,
link |
oh, peace and love.
link |
It was partly because he had been the captain
link |
on another submarine that had a nuclear reactor meltdown.
link |
And it was his heroism that helped contain this.
link |
That's why he died of cancer later also.
link |
But he had seen many of his crew members die.
link |
And I think for him, that gave him this gut feeling
link |
that if there's a nuclear war
link |
between the US and the Soviet Union,
link |
the whole world is gonna go through
link |
what I saw my dear crew members suffer through.
link |
It wasn't just an abstract thing for him.
link |
I think it was real.
link |
And second though, not just the gut, the mind, right?
link |
He was, for some reason, very levelheaded personality
link |
and very smart guy,
link |
which is exactly what we want our best fighter pilots
link |
to be also, right?
link |
I'll never forget Neil Armstrong, when he's landing on the moon
link |
and almost running out of gas.
link |
And when they say 30 seconds,
link |
he doesn't even change the tone of his voice, just keeps going.
link |
Arkhipov, I think was just like that.
link |
So when the explosions start going off
link |
and his captain is screaming and we should nuke them
link |
and all, he's like,
link |
I don't think the Americans are trying to sink us.
link |
I think they're trying to send us a message.
link |
That's pretty bad ass.
link |
Coolness, because he said, if they wanted to sink us,
link |
they would have already. And he said, listen, listen, it's alternating,
link |
one loud explosion on the left, one on the right,
link |
one on the left, one on the right.
link |
He was the only one who noticed this pattern.
link |
And he's like, I think
link |
they're trying to send us a signal
link |
that they want us to surface,
link |
and they're not gonna sink us.
link |
And this is how he then, ultimately,
link |
with his combination of gut
link |
and also just cool analytical thinking,
link |
was able to deescalate the whole thing.
link |
And yeah, so this is some of the best in humanity.
link |
I guess coming back to what we talked about earlier,
link |
it's the combination of the neural network,
link |
the instinctive, with, I'm tearing up here,
link |
getting emotional, but he was just,
link |
he is one of my superheroes,
link |
having both the heart and the mind combined.
link |
And especially in that time, there's something about this.
link |
I mean, in America,
link |
people are used to this kind of idea
link |
of being the individual, of thinking on your own.
link |
I think in the Soviet Union under communism,
link |
it's actually much harder to do that.
link |
Oh yeah, he didn't even, he even got,
link |
he didn't get any accolades either
link |
when he came back for this, right?
link |
They just wanted to hush the whole thing up.
link |
Yeah, there's echoes of that with Chernobyl,
link |
there's all kinds of examples of that.
link |
That's a really hopeful thing
link |
that amidst big centralized powers,
link |
whether it's companies or states,
link |
there's still the power of the individual
link |
to think on their own, to act.
link |
But I think we need to think of people like this,
link |
not as a panacea we can always count on,
link |
but rather as a wake up call.
link |
So because of them, because of Arkhipov,
link |
we are alive to learn from this lesson,
link |
to learn from the fact that we shouldn't keep playing
link |
Russian roulette and almost have a nuclear war
link |
by mistake now and then,
link |
because relying on luck is not a good longterm strategy.
link |
If you keep playing Russian roulette over and over again,
link |
the probability of surviving just drops exponentially.
link |
And if you have some probability
link |
of having an accidental nuke war every year,
link |
the probability of not having one also drops exponentially.
link |
I think we can do better than that.
link |
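The arithmetic behind that: with some constant annual accident probability, survival decays exponentially; the 1% figure below is purely illustrative, not from the conversation.

```latex
% Survival under a constant annual accident risk p decays exponentially:
\[
  P(\text{no accidental war in } n \text{ years}) = (1-p)^{n} \approx e^{-pn}
\]
% e.g. p = 0.01 per year gives 0.99^{100} \approx 0.37,
% only about a one-in-three chance of getting through a century unscathed.
```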
So I think the message is very clear,
link |
once in a while shit happens,
link |
and there's a lot of very concrete things we can do
link |
to reduce the risk of things like that happening
link |
in the first place.
link |
On the AI front, if we can just linger on that for a second.
link |
So you're friends with, you often talk with Elon Musk
link |
throughout history, you've done a lot
link |
of interesting things together.
link |
He has a set of fears about the future
link |
of artificial intelligence, AGI.
link |
Do you have a sense, we've already talked about
link |
the things we should be worried about with AI,
link |
do you have a sense of the shape of his fears
link |
in particular about AI,
link |
of which subset of what we've talked about,
link |
whether it's that direction
link |
of creating sort of these giant computation systems
link |
that are not explainable,
link |
they're not intelligible intelligence,
link |
And then like as a branch of that,
link |
is it the manipulation by big corporations of that
link |
or individual evil people to use that for destruction
link |
or the unintentional consequences?
link |
Do you have a sense of where his thinking is on this?
link |
From my many conversations with Elon,
link |
yeah, I certainly have a model of how he thinks.
link |
It's actually very much like the way I think also,
link |
I'll elaborate on it a bit.
link |
I just wanna push back on when you said evil people,
link |
I don't think it's a very helpful concept.
link |
Evil people, sometimes people do very, very bad things,
link |
but they usually do it because they think it's a good thing
link |
because somehow other people had told them
link |
that that was a good thing
link |
or given them incorrect information or whatever, right?
link |
I believe in the fundamental goodness of humanity
link |
that if we educate people well
link |
and they find out how things really are,
link |
people generally wanna do good and be good.
link |
Hence the value alignment.
link |
It's about information, about knowledge,
link |
and then once we have that,
link |
we'll likely be able to do good
link |
in the way that's aligned with everybody else
link |
who thinks differently.
link |
Yeah, and it's not just the individual people
link |
So we don't just want people to be educated
link |
to know the way things actually are
link |
and to treat each other well,
link |
but we also need to align other nonhuman entities.
link |
We talked about corporations, there has to be institutions
link |
so that what they do is actually good
link |
for the country they're in
link |
and we should align, make sure that what countries do
link |
is actually good for the species as a whole, et cetera.
link |
Coming back to Elon,
link |
yeah, my understanding of how Elon sees this
link |
is really quite similar to my own,
link |
which is one of the reasons I like him so much
link |
and enjoy talking with him so much.
link |
I feel he's quite different from most people
link |
in that he thinks much more than most people
link |
about the really big picture,
link |
not just what's gonna happen in the next election cycle,
link |
but in millennia, millions and billions of years from now.
link |
And when you look in this more cosmic perspective,
link |
it's so obvious that we are gazing out into this universe
link |
that as far as we can tell is mostly dead
link |
with life being an almost imperceptibly tiny perturbation,
link |
and he sees this enormous opportunity
link |
for our universe to come alive,
link |
first to become an interplanetary species.
link |
Mars is obviously just first stop on this cosmic journey.
link |
And precisely because he thinks more long term,
link |
it's much more clear to him than to most people
link |
that what we do with this Russian roulette thing
link |
we keep playing with our nukes is a really poor strategy,
link |
really reckless strategy.
link |
And also that we're just building
link |
these ever more powerful AI systems that we don't understand
link |
is also just a really reckless strategy.
link |
I feel Elon is very much a humanist
link |
in the sense that he wants an awesome future for humanity.
link |
He wants it to be us that control the machines
link |
rather than the machines that control us.
link |
And why shouldn't we insist on that?
link |
We're building them after all, right?
link |
Why should we build things that just make us
link |
into some little cog in the machinery
link |
that has no further say in the matter, right?
link |
That's not my idea of an inspiring future either.
link |
Yeah, if you think on the cosmic scale
link |
in terms of both time and space,
link |
so much is put into perspective.
link |
Whenever I have a bad day, that's what I think about.
link |
It immediately makes me feel better.
link |
It makes me sad that for us individual humans,
link |
at least for now, the ride ends too quickly.
link |
That we don't get to experience the cosmic scale.
link |
Yeah, I mean, I think of our universe sometimes
link |
as an organism that has only begun to wake up a tiny bit,
link |
just like the very first little glimmers of consciousness
link |
you have in the morning when you start coming around.
link |
Before the coffee.
link |
Before the coffee, even before you get out of bed,
link |
before you even open your eyes.
link |
You start to wake up a little bit.
link |
There's something here.
link |
That's very much how I think of where we are.
link |
All those galaxies out there,
link |
I think they're really beautiful,
link |
but why are they beautiful?
link |
They're beautiful because conscious entities
link |
are actually observing them,
link |
experiencing them through our telescopes.
link |
I define consciousness as subjective experience,
link |
whether it be colors or emotions or sounds.
link |
So beauty is an experience.
link |
Meaning is an experience.
link |
Purpose is an experience.
link |
If there was no conscious experience,
link |
observing these galaxies, they wouldn't be beautiful.
link |
If we do something dumb with advanced AI in the future here
link |
and Earth originating life goes extinct,
link |
then that was it.
link |
If there is nothing else with telescopes in our universe,
link |
then it's kind of game over for beauty
link |
and meaning and purpose in our whole universe.
link |
And I think that would be just such
link |
an opportunity lost, frankly.
link |
And I think when Elon points this out,
link |
he gets very unfairly maligned in the media
link |
for all the dumb media bias reasons we talked about.
link |
They want to print precisely the things about Elon
link |
out of context that are really click baity.
link |
He has gotten so much flack
link |
for this summoning the demon statement.
link |
I happen to know exactly the context
link |
because I was in the front row when he gave that talk.
link |
It was at MIT, you'll be pleased to know,
link |
it was the AeroAstro anniversary.
link |
They had Buzz Aldrin there from the moon landing,
link |
a full house, Kresge Auditorium
link |
packed with MIT students.
link |
And he had this amazing Q&A, it might've gone for an hour.
link |
And they talked about rockets and Mars and everything.
link |
At the very end, this one student
link |
who had actually taken my class asked him, what about AI?
link |
Elon makes this one comment
link |
and they take this out of context, print it, goes viral.
link |
It was like, with AI,
link |
we're summoning the demon, something like that.
link |
And try to cast him as some sort of doom and gloom dude.
link |
You know Elon, he's not the doom and gloom dude.
link |
He is such a positive visionary.
link |
And the whole reason he warns about this
link |
is because he realizes more than most
link |
what the opportunity cost is of screwing up.
link |
That there is so much awesomeness in the future
link |
that we can and our descendants can enjoy
link |
if we don't screw up, right?
link |
I get so pissed off when people try to cast him
link |
as some sort of technophobic Luddite.
link |
And at this point, it's kind of ludicrous
link |
when I hear people say that people who worry about
link |
artificial general intelligence are Luddites
link |
because of course, if you look more closely,
link |
you have some of the most outspoken people making warnings
link |
are people like Professor Stuart Russell from Berkeley
link |
who's written the bestselling AI textbook, you know.
link |
So when people claim that he's a Luddite who doesn't understand AI,
link |
the joke is really on them.
link |
But I think more broadly,
link |
this message has really not sunk in at all.
link |
As for what it is that people worry about,
link |
they think that Elon and Stuart Russell and others
link |
are worried about the dancing robots picking up an AR-15
link |
and going on a rampage, right?
link |
They think they're worried about robots turning evil.
link |
They're not, I'm not.
link |
The risk is not malice, it's competence.
link |
The risk is just that we build some systems
link |
that are incredibly competent,
link |
which means they're always gonna get
link |
their goals accomplished,
link |
even if they clash with our goals.
link |
Why did we humans drive the West African black rhino extinct?
link |
Is it because we're malicious, evil rhinoceros haters?
link |
No, it's just because our goals didn't align
link |
with the goals of those rhinos
link |
and tough luck for the rhinos, you know.
link |
So the point is just we don't wanna put ourselves
link |
in the position of those rhinos
link |
creating something more powerful than us
link |
if we haven't first figured out how to align the goals.
link |
And I am optimistic.
link |
I think we could do it if we worked really hard on it,
link |
because I spent a lot of time
link |
around intelligent entities that were more intelligent
link |
than me, my mom and my dad,
link |
when I was little, and that was fine
link |
because their goals were actually aligned
link |
with mine quite well.
link |
But we've seen today many examples of where the goals
link |
of our powerful systems are not so aligned.
link |
So those click-through optimization algorithms
link |
that have polarized social media, right?
link |
They were actually pretty poorly aligned
link |
with what was good for democracy, it turned out.
link |
And again, almost all the problems we've had
link |
with machine learning so far came
link |
not from malice, but from poor alignment.
link |
And that's exactly why we should be concerned
link |
about it in the future.
link |
Do you think it's possible that with systems
link |
like Neuralink and brain computer interfaces,
link |
you know, again, thinking on the cosmic scale,
link |
Elon's talked about this, but others have as well
link |
throughout history, of figuring out the exact mechanism
link |
of how to achieve that kind of alignment.
link |
So one of them is having a symbiosis with AI,
link |
which is like coming up with clever ways
link |
where we're like stuck together in this weird relationship,
link |
whether it's biological or in some kind of other way.
link |
Do you think that's a possibility
link |
of having that kind of symbiosis?
link |
Or do we wanna instead kind of focus
link |
on distinct entities, us humans talking
link |
to these intelligible, self-doubting AIs,
link |
maybe like Stuart Russell thinks about it,
link |
like we're self doubting and full of uncertainty
link |
and our AI systems are full of uncertainty.
link |
We communicate back and forth
link |
and in that way achieve symbiosis.
link |
I honestly don't know.
link |
I would say that we don't know for sure
link |
which, if any, of our ideas will work.
link |
But I'm pretty convinced that if we don't get any
link |
of these things to work and just barge ahead,
link |
then our species is, you know,
link |
probably gonna go extinct this century.
link |
This century? You think
link |
we're facing a 21st century crisis?
link |
Like, this century will be remembered?
link |
On a hard drive somewhere,
link |
or maybe remembered by future generations.
link |
Like, there'll be future Future of Life Institute awards
link |
for people that have done something about AI.
link |
It could also end even worse,
link |
where we're not superseded,
link |
not leaving any AI behind either.
link |
We just totally wipe ourselves out, you know,
link |
like on Easter Island.
link |
Our century is long.
link |
You know, there are still 79 years left of it, right?
link |
Think about how far we've come just in the last 30 years.
link |
So we can talk more about what might go wrong,
link |
but you asked me this really good question
link |
about what's the best strategy.
link |
Is it Neuralink or Russell's approach or whatever?
link |
I think, you know, when we did the Manhattan project,
link |
we didn't know if any of our four ideas
link |
for enriching uranium and getting out the uranium-235
link |
would work.
link |
But we felt this was really important
link |
to get it before Hitler did.
link |
So, you know what we did?
link |
We tried all four of them.
link |
Here, I think it's analogous,
link |
where this is the greatest threat
link |
that's ever faced our species,
link |
and of course, the greatest threat to US national security by implication.
link |
We don't have any method
link |
that's guaranteed to work, but we have a lot of ideas.
link |
So we should invest pretty heavily
link |
in pursuing all of them with an open mind
link |
and hope that one of them at least works.
link |
The good news is the century is long,
link |
and it might take decades
link |
until we have artificial general intelligence.
link |
So we have some time hopefully,
link |
but it takes a long time to solve
link |
these very, very difficult problems.
link |
It's actually gonna be
link |
the most difficult problem
link |
we've ever tried to solve as a species.
link |
So we have to start now.
link |
rather than begin thinking about it
link |
the night before some people who've had too much Red Bull
link |
decide to switch it on.
link |
And, coming back to your question,
link |
we have to pursue all of these different avenues and see.
link |
If you were my investment advisor
link |
and I was trying to invest in the future,
link |
how do you think the human species
link |
is most likely to destroy itself in this century?
link |
Yeah, so many of the crises we're facing
link |
are really before us
link |
within the next hundred years.
link |
How do we make the unknowns known and explicit,
link |
and solve those problems,
link |
starting with the biggest existential crisis?
link |
So as your investment advisor,
link |
how are you planning to make money on us
link |
destroying ourselves?
link |
It might be my Russian origins.
link |
Somehow they're involved.
link |
At the micro level of detailed strategies,
link |
of course, these are unsolved problems.
link |
We can break it into three subproblems
link |
that are all unsolved.
link |
I think you want first to make machines
link |
understand our goals,
link |
then adopt our goals and then retain our goals.
link |
So to hit on all three real quickly.
link |
The problem when Andreas Lubitz told his autopilot
link |
to fly into the Alps was that the computer
link |
didn't even understand anything about his goals.
link |
It could have understood, actually,
link |
but you would have had to put some effort in
link |
as a systems designer to program in, don't fly into mountains.
link |
So that's the first challenge.
link |
How do you program human values into computers?
link |
We should start with the simple stuff, as I said,
link |
self-driving cars, airplanes,
link |
just put in all the goals that we all agree on already,
link |
and then have a habit of, whenever machines get smarter
link |
so they can understand one level higher goals,
link |
putting those in, too.
link |
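To make the idea of hard-coding an agreed-on goal concrete, here is a minimal illustrative sketch in Python of a supervisory filter that vetoes any autopilot command violating a terrain-clearance constraint. Every name and number in it is a hypothetical stand-in, not a real avionics interface.

```python
# A hypothetical sketch: a supervisory layer that enforces one agreed-on
# hard constraint ("don't fly into mountains") no matter what goal the
# autopilot underneath it is pursuing. Names and numbers are illustrative.

MIN_TERRAIN_CLEARANCE_M = 300.0  # assumed safety margin above terrain

def terrain_altitude_m(lat: float, lon: float) -> float:
    """Stand-in for a terrain-database lookup (hypothetical)."""
    return 1200.0

def is_safe(command: dict, state: dict) -> bool:
    """Reject any commanded altitude that violates terrain clearance."""
    floor = terrain_altitude_m(state["lat"], state["lon"]) + MIN_TERRAIN_CLEARANCE_M
    return command["target_altitude_m"] >= floor

def filtered_command(command: dict, state: dict, fallback: dict) -> dict:
    # The filter never trusts the upstream goal; it only checks the constraint.
    return command if is_safe(command, state) else fallback

state = {"lat": 44.3, "lon": 6.4}
cmd = {"target_altitude_m": 900.0}          # would descend into the mountains
climb = {"target_altitude_m": 1600.0}       # constraint-satisfying override
print(filtered_command(cmd, state, climb))  # -> {'target_altitude_m': 1600.0}
```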
The second challenge is getting them to adopt the goals.
link |
It's easy for situations like that
link |
where you just program it in,
link |
but when you have self learning systems like children,
link |
you know, any parent knows
link |
that there was a difference between getting our kids
link |
to understand what we want them to do
link |
and to actually adopt our goals, right?
link |
With humans, with children, fortunately,
link |
they go through this phase.
link |
First, they're too dumb to understand
link |
what our goals are.
link |
And then they have this period of some years
link |
when they're both smart enough to understand them
link |
and malleable enough that we have a chance
link |
to raise them well.
link |
And then they become teenagers, and it's kind of too late.
link |
But we have this window with machines.
link |
The challenge is that the intelligence might grow so fast
link |
that that window is pretty short.
link |
So that's a research problem.
link |
The third one is how do you make sure they keep the goals
link |
if they keep learning more and getting smarter?
link |
Many sci-fi movies are about how you have something
link |
which initially was aligned,
link |
but then things kind of go off kilter.
link |
And, you know, my kids were very, very excited
link |
about their Legos when they were little.
link |
Now they're just gathering dust in the basement.
link |
If we create machines that are really on board
link |
with the goal of taking care of humanity,
link |
we don't want them to get as bored with us
link |
as my kids got with Legos.
link |
So this is another research challenge.
link |
How can you make some sort of recursively
link |
self improving system retain certain basic goals?
link |
That said, a lot of adult people still play with Legos.
link |
So maybe we succeeded with the Legos.
link |
Maybe, I like your optimism.
link |
So not all AI systems have to maintain the goals, right?
link |
Just some fraction.
link |
Yeah, so there's a lot of talented AI researchers now
link |
who have heard of this and want to work on it.
link |
Not so much funding for it yet.
link |
Of the billions that go into building AI more powerful,
link |
it's only a minuscule fraction
link |
so far going into this safety research.
link |
My attitude is generally we should not try to slow down
link |
the technology, but we should greatly accelerate
link |
the investment in this sort of safety research.
link |
And also, this was very embarrassing last year,
link |
but the NSF decided to give out
link |
six of these big institutes.
link |
We got one of them for the AI and science work you asked me about.
link |
Another one was supposed to be for AI safety research.
link |
And they gave it to people studying oceans
link |
and climate and stuff.
link |
So I'm all for studying oceans and climates,
link |
but we need to actually have some money
link |
that actually goes into AI safety research also
link |
and doesn't just get grabbed by whatever.
link |
That's a fantastic investment.
link |
And then at the higher level, you asked this question,
link |
okay, what can we do?
link |
What are the biggest risks?
link |
I think we cannot just consider this
link |
to be only a technical problem.
link |
Again, because if you solve only the technical problem,
link |
can I play with your robot?
link |
If we can get our machines to just blindly obey
link |
the orders we give them,
link |
so we can always trust that they will do what we want.
link |
That might be great for the owner of the robot.
link |
That might not be so great for the rest of humanity
link |
if that person is your least favorite world leader
link |
or whatever you imagine, right?
link |
So we have to also
link |
apply alignment not just to machines,
link |
but to all the other powerful structures.
link |
That's why it's so important
link |
to strengthen our democracy again,
link |
as I said, to have institutions,
link |
make sure that the playing field is not rigged
link |
so that corporations are given the right incentives
link |
to do the things that both make profit
link |
and are good for people,
link |
to make sure that countries have incentives
link |
to do things that are both good for their people
link |
and don't screw up the rest of the world.
link |
And this is not just something for AI nerds to geek out on.
link |
This is an interesting challenge for political scientists,
link |
economists, and so many other thinkers.
link |
So one of the magical things
link |
that perhaps makes this earth quite unique
link |
is that it's home to conscious beings.
link |
So you mentioned consciousness.
link |
Perhaps as a small aside,
link |
because we didn't really get specific
link |
about how we might do the alignment,
link |
it's just a really important research problem.
link |
But do you think engineering consciousness
link |
into AI systems is a possibility,
link |
something that we might one day do?
link |
Or is there something about consciousness
link |
that is fundamental to humans and humans only?
link |
I think it's possible.
link |
I think both consciousness and intelligence
link |
are information processing.
link |
Certain types of information processing.
link |
And that fundamentally,
link |
it doesn't matter whether the information is processed
link |
by carbon atoms in neurons in brains
link |
or by silicon atoms and so on in our technology.
link |
Some people disagree.
link |
This is what I think as a physicist.
link |
So consciousness is the same kind of thing.
link |
You said consciousness is information processing,
link |
so, meaning, I think you had a quote of something like,
link |
it's information knowing itself, that kind of thing.
link |
I think consciousness is, yeah,
link |
the way information feels when it's being processed
link |
in certain complex ways.
link |
We don't know exactly what those complex ways are.
link |
It's clear that most of the information processing
link |
in our brains does not create an experience.
link |
We're not even aware of it, right?
link |
You're not aware of your heartbeat regulation right now,
link |
even though it's clearly being done by your body, right?
link |
It's just kind of doing its own thing.
link |
When you go jogging,
link |
there's a lot of complicated stuff
link |
about how you put your foot down and we know it's hard.
link |
That's why robots used to fall over so much,
link |
but you're mostly unaware of it.
link |
Your brain, your CEO consciousness module
link |
just sends an email,
link |
hey, I'm gonna keep jogging along this path.
link |
The rest is on autopilot, right?
link |
So most of it is not conscious,
link |
but somehow some of the information processing is,
link |
and we don't know exactly which.
link |
I think this is a science problem
link |
that I hope one day we'll have some equation for
link |
or something so we can be able to build
link |
a consciousness detector and say, yeah,
link |
here there is some consciousness, here there's not.
link |
Oh, don't boil that lobster because it's feeling pain
link |
or it's okay because it's not feeling pain.
link |
Right now we treat this as sort of just metaphysics,
link |
but it would be very useful in emergency rooms
link |
to know if a patient has locked in syndrome
link |
and is conscious or if they are actually just out.
link |
And in the future, if you build a very, very intelligent
link |
helper robot to take care of you,
link |
I think you'd like to know
link |
if you should feel guilty about shutting it down
link |
or if it's just like a zombie going through the motions
link |
like a fancy tape recorder, right?
link |
And once we can make progress
link |
on the science of consciousness
link |
and figure out what is conscious and what isn't,
link |
then assuming we want to create positive experiences
link |
and not suffering, we'll probably choose to build
link |
some machines that are deliberately unconscious
link |
that do incredibly boring, repetitive jobs
link |
in an iron mine somewhere or whatever.
link |
And maybe we'll choose to create helper robots
link |
for the elderly that are conscious
link |
so that people don't just feel creeped out
link |
that the robot is just faking it
link |
when it acts like it's sad or happy.
link |
Like you said, elderly,
link |
I think everybody gets pretty deeply lonely in this world.
link |
And so there's a place I think for everybody
link |
to have a connection with conscious beings,
link |
whether they're human or otherwise.
link |
But I know for sure that I would,
link |
if I had a robot, if I was gonna develop any kind
link |
of personal emotional connection with it,
link |
I would be very creeped out
link |
if I knew at an intellectual level
link |
that the whole thing was just a fraud.
link |
Now today you can buy a little talking doll for a kid
link |
which will say things, and the little child will often think
link |
that this is actually conscious,
link |
and even tell real secrets to it that then go on the internet,
link |
with lots of creepy repercussions.
link |
I would not wanna be just hacked and tricked like this.
link |
If I was gonna be developing real emotional connections
link |
with the robot, I would wanna know
link |
that this is actually real.
link |
It's acting conscious, acting happy
link |
because it actually feels it.
link |
And I think this is not sci fi.
link |
I think it's possible to measure, to come up with tools.
link |
After we understand the science of consciousness,
link |
you're saying we'll be able to come up with tools
link |
that can measure consciousness
link |
and definitively say like this thing is experiencing
link |
the things it says it's experiencing.
link |
Kind of by definition.
link |
If it is a physical phenomenon, information processing
link |
and we know that some information processing is conscious
link |
and some isn't, well, then there is something there
link |
to be discovered with the methods of science.
link |
Giulio Tononi has stuck his neck out the farthest
link |
and written down some equations for a theory.
link |
Maybe that's right, maybe it's wrong.
link |
We certainly don't know.
link |
But I applaud that kind of effort to sort of take this,
link |
say this is not just something that philosophers
link |
can have beer and muse about,
link |
but something we can measure and study.
link |
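To give a flavor of what writing down equations for consciousness could even mean, here is a deliberately toy sketch in Python, not Tononi's actual Phi computation: it scores how many bits of next-state predictability are lost when a tiny two-unit system is modeled as independent parts instead of as a whole. Everything in it is an illustrative assumption.

```python
# A toy "integration" score, loosely inspired by (and much simpler than)
# Tononi's integrated information: how many bits of predictability about
# the next state do we lose if we model the two parts independently
# instead of as a whole? Illustrative only, not the real Phi.
from collections import defaultdict
from itertools import product
from math import log2

def step_coupled(a, b):    # each unit's next state depends on both units
    return a ^ b, a

def step_decoupled(a, b):  # each unit's next state depends only on itself
    return a, b

def cond_entropy(pairs):
    """H(next | now) in bits, with 'now' uniform over its values."""
    groups = defaultdict(lambda: defaultdict(int))
    for now, nxt in pairs:
        groups[now][nxt] += 1
    h = 0.0
    for nxts in groups.values():
        total = sum(nxts.values())
        for count in nxts.values():
            p = count / total
            h -= p * log2(p) / len(groups)
    return h

def integration(step):
    states = list(product([0, 1], repeat=2))
    whole = [((a, b), step(a, b)) for a, b in states]
    part_a = [(a, step(a, b)[0]) for a, b in states]
    part_b = [(b, step(a, b)[1]) for a, b in states]
    return cond_entropy(part_a) + cond_entropy(part_b) - cond_entropy(whole)

print(integration(step_coupled))    # 2.0 bits: the whole exceeds its parts
print(integration(step_decoupled))  # 0.0 bits: nothing is integrated
```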
And bringing that back to us,
link |
I think what we would probably choose to do, as I said,
link |
once we can figure this out,
link |
is to be quite mindful
link |
about what sort of consciousness, if any,
link |
we put in the different machines that we have.
link |
And certainly,
link |
we should not be making machines that suffer
link |
without us even knowing it, right?
link |
And if at any point someone decides to upload themselves
link |
like Ray Kurzweil wants to do,
link |
I don't know if you've had him on your show.
link |
We agreed, but then COVID happened,
link |
so we're waiting it out a little bit.
link |
Suppose he uploads himself into this robo Ray
link |
and it talks like him and acts like him and laughs like him.
link |
And before he powers off his biological body,
link |
he would probably be pretty disturbed
link |
if he realized that there's no one home.
link |
This robot is not having any subjective experience, right?
link |
If humanity gets replaced by machine descendants,
link |
which do all these cool things and build spaceships
link |
and go to intergalactic rock concerts,
link |
and it turns out that they are all unconscious,
link |
just going through the motions,
link |
wouldn't that be like the ultimate zombie apocalypse, right?
link |
Just a play for empty benches?
link |
Yeah, I have a sense that there's some kind of,
link |
once we understand consciousness better,
link |
we'll understand that there's some kind of continuum,
link |
and there would be a greater appreciation.
link |
And we'll probably understand, just like you said,
link |
it'd be unfortunate if it's a trick.
link |
We'll probably definitely understand
link |
that love is indeed a trick that we play on each other,
link |
that we convince ourselves we're conscious,
link |
but really, us and trees and dolphins
link |
are all the same kind of consciousness.
link |
Can I try to cheer you up a little bit
link |
with a philosophical thought here about the love part?
link |
You know, you might say,
link |
okay, yeah, love is just a collaboration enabler.
link |
And then maybe you can go and get depressed about that.
link |
But I think that would be the wrong conclusion, actually.
link |
You know, I know that the only reason I enjoy food
link |
is because my genes hacked me
link |
and they don't want me to starve to death.
link |
Not because they care about me consciously
link |
enjoying succulent delights of pistachio ice cream,
link |
but they just want me to make copies of them.
link |
The whole thing, so in a sense,
link |
the whole enjoyment of food is also a scam like this.
link |
But does that mean I shouldn't take pleasure
link |
in this pistachio ice cream?
link |
I love pistachio ice cream.
link |
And I can tell you, I know this as an experimental fact:
link |
I enjoy pistachio ice cream every bit as much,
link |
even though I scientifically know exactly
link |
what kind of scam this is.
link |
Your genes really appreciate
link |
that you like the pistachio ice cream.
link |
Well, but I, my mind appreciates it too, you know?
link |
And I have a conscious experience right now.
link |
Ultimately, all of my brain is also just something
link |
the genes built to copy themselves.
link |
You know, I'm grateful that,
link |
yeah, thanks genes for doing this,
link |
but you know, now it's my brain that's in charge here
link |
and I'm gonna enjoy my conscious experience,
link |
thank you very much.
link |
And not just the pistachio ice cream,
link |
but also the love I feel for my amazing wife
link |
and all the other delights of being conscious.
link |
Actually, Richard Feynman,
link |
I think, said this so well.
link |
He's also the guy, you know, who really got me into physics.
link |
Some artist friend said that,
link |
oh, science is kind of just the party pooper.
link |
It kind of ruins the fun, right?
link |
Like when you have a beautiful flower, as the artist sees it,
link |
and then the scientist is gonna deconstruct that
link |
into just a blob of quarks and electrons.
link |
And Feynman pushed back on that in such a beautiful way,
link |
which I think also can be used to push back
link |
and make you not feel guilty about falling in love.
link |
So here's what Feynman basically said.
link |
He said to his friend, you know,
link |
yeah, I can also as a scientist see
link |
that this is a beautiful flower, thank you very much.
link |
Maybe I can't draw as good a painting as you
link |
because I'm not as talented an artist,
link |
but yeah, I can really see the beauty in it.
link |
And it just, it also looks beautiful to me.
link |
But in addition to that, Feynman said, as a scientist,
link |
I see even more beauty that the artist did not see, right?
link |
Suppose this is a flower on a blossoming apple tree.
link |
You could say this tree has more beauty in it
link |
than just the colors and the fragrance.
link |
This tree is made of air, Feynman wrote.
link |
This is one of my favorite Feynman quotes ever.
link |
And it took the carbon out of the air
link |
and bound it in using the flaming heat of the sun,
link |
you know, to turn the air into a tree.
link |
And when you burn logs in your fireplace,
link |
it's really beautiful to think that this is being reversed.
link |
Now the tree is going, the wood is going back into air.
link |
And in this flaming, beautiful dance of the fire
link |
that the artist can see is the flaming light of the sun
link |
that was bound in to turn the air into a tree.
link |
And then the ashes are the little residue
link |
that didn't come from the air
link |
that the tree sucked out of the ground, you know.
link |
Feynman said, these are beautiful things.
link |
And science just adds, it doesn't subtract.
link |
And I feel exactly that way about love
link |
and about pistachio ice cream also.
link |
I can understand that there is even more nuance
link |
to the whole thing, right?
link |
At this very visceral level,
link |
you can fall in love just as much as someone
link |
who knows nothing about neuroscience.
link |
But you can also appreciate this even greater beauty in it.
link |
Just like, isn't it remarkable that it came about
link |
from this completely lifeless universe,
link |
just a hot blob of plasma expanding.
link |
And then over the eons, you know, gradually,
link |
first the strong nuclear force decided
link |
to combine quarks together into nuclei.
link |
And then the electric force bound in electrons to make atoms.
link |
And then they clustered from gravity
link |
and you got planets and stars and this and that.
link |
And then natural selection came along
link |
and the genes had their little thing.
link |
And what you started getting went from seeming
link |
like a completely pointless universe,
link |
just trying to increase entropy
link |
and approach heat death, into something
link |
that looked more goal-oriented.
link |
Isn't that kind of beautiful?
link |
And then this goal-orientedness through evolution
link |
got ever more sophisticated.
link |
And then you started getting this thing,
link |
which is kind of like DeepMind's MuZero on steroids:
link |
the ultimate self-play is not what DeepMind's AI
link |
does against itself to get better at Go.
link |
It's what all these little quark blobs did
link |
against each other in the game of survival of the fittest.
link |
Now, when you had really dumb bacteria
link |
living in a simple environment,
link |
there wasn't much incentive to get intelligent,
link |
but then life made the environment more complex.
link |
And then there was more incentive to get even smarter.
link |
And that gave the other organisms more of an incentive
link |
to also get smarter.
link |
And then here we are now,
link |
just like MuZero learned to become world master at Go
link |
and chess just by playing against itself,
link |
all the quarks here on our planet,
link |
the electrons, have created giraffes and elephants
link |
and humans and love.
link |
I just find that really beautiful.
link |
And to me, that just adds to the enjoyment of love.
link |
It doesn't subtract anything.
link |
Do you feel a little more cheerful now?
link |
I feel way better, that was incredible.
link |
So this self-play of quarks,
link |
taking back to the beginning of our conversation
link |
a little bit, there's so many exciting possibilities
link |
about artificial intelligence understanding
link |
the basic laws of physics.
link |
Do you think AI will help us unlock them?
link |
There's been quite a bit of excitement
link |
throughout the history of physics
link |
of coming up with more and more general simple laws
link |
that explain the nature of our reality.
link |
And then the ultimate of that would be a theory
link |
of everything that combines everything together.
link |
Do you think it's possible that one, we humans,
link |
but perhaps AI systems will figure out a theory of physics
link |
that unifies all the laws of physics?
link |
Yeah, I think it's absolutely possible.
link |
I think it's very clear
link |
that we're gonna see a great boost to science.
link |
We're already seeing a boost actually
link |
from machine learning helping science.
link |
AlphaFold was an example,
link |
cracking the decades-old protein folding problem.
link |
So, and gradually, yeah, unless we go extinct
link |
by doing something dumb like we discussed,
link |
I think it's very likely
link |
that our understanding of physics will become so good
link |
that our technology will no longer be limited
link |
by human intelligence,
link |
but instead be limited by the laws of physics.
link |
So our tech today is limited
link |
by what we've been able to invent, right?
link |
I think as AI progresses,
link |
it'll just be limited by the speed of light
link |
and other physical limits,
link |
which would mean it's gonna be just dramatically beyond where we are now.
link |
Do you think it's a fundamentally mathematical pursuit
link |
of trying to understand like the laws
link |
of our universe from a mathematical perspective?
link |
So almost like if it's AI,
link |
it's exploring the space of like theorems
link |
and those kinds of things,
link |
or are there some other, more computational ideas,
link |
more sort of empirical ideas?
link |
Both, I would say.
link |
It's really interesting to look out at the landscape
link |
of everything we call science today.
link |
So here you come now with this big new hammer
link |
that says machine learning on it,
link |
and you ask, you know, where are there some nails
link |
here that you can hammer?
link |
Ultimately, if machine learning gets to the point
link |
that it can do everything better than us,
link |
it will be able to help across the whole space of science.
link |
But maybe we can anchor it by starting a little bit
link |
right now near term and see how we kind of move forward.
link |
So like right now, first of all,
link |
you have a lot of big data science, right?
link |
Where, for example, with telescopes,
link |
we are able to collect way more data every hour
link |
than a grad student can just pore over
link |
like in the old times, right?
link |
And machine learning is already being used very effectively,
link |
even at MIT, to find planets around other stars,
link |
to detect exciting new signatures
link |
of new particle physics in the sky,
link |
to detect the ripples in the fabric of space time
link |
that we call gravitational waves
link |
caused by enormous black holes
link |
crashing into each other halfway
link |
across the observable universe.
link |
Machine learning is running and ticking right now,
link |
doing all these things,
link |
and it's really helping all these experimental fields.
link |
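As a flavor of the kind of search these pipelines automate, here is a toy Python sketch of recovering an exoplanet's orbital period from a noisy light curve by phase-folding at trial periods and looking for a consistent dip, a box-least-squares-style search. The data and numbers are simulated stand-ins, not a real survey pipeline.

```python
# Toy transit search: inject a periodic box-shaped dip into a simulated
# light curve, then scan trial periods for the phase fold that makes the
# in-dip points consistently fainter than the rest.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 90.0, 0.02)               # 90 days of observations
true_period, depth, duration = 3.7, 0.01, 0.12
flux = 1.0 + 0.002 * rng.standard_normal(t.size)
in_transit = (t % true_period) < duration    # periodic box-shaped dips
flux[in_transit] -= depth

def dip_strength(period):
    phase = t % period
    in_box = phase < duration
    # Signal: how much fainter the in-box points are than the rest.
    return flux[~in_box].mean() - flux[in_box].mean()

trial_periods = np.arange(0.5, 5.0, 0.001)   # stop before the 2x alias
scores = np.array([dip_strength(p) for p in trial_periods])
best = trial_periods[np.argmax(scores)]
print(f"recovered period ~ {best:.3f} days (true {true_period})")
```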
There is a separate front of physics,
link |
computational physics,
link |
which is getting an enormous boost also.
link |
We used to do all our computations by hand, right?
link |
People would have these giant books
link |
with tables of logarithms,
link |
and oh my God, it pains me to even think
link |
how long it would have taken to do simple stuff.
link |
Then we started to get little calculators and computers
link |
that could do some basic math for us.
link |
Now, what we're starting to see is
link |
kind of a shift from GOFAI (good old-fashioned AI) computational physics
link |
to neural network computational physics.
link |
What I mean by that is most computational physics
link |
would be done by humans programming in
link |
the intelligence of how to do the computation
link |
into the computer.
link |
Just as when Garry Kasparov got his posterior kicked
link |
by IBM's Deep Blue in chess,
link |
humans had programmed in exactly how to play chess.
link |
Intelligence came from the humans.
link |
It wasn't learned, right?
link |
MuZero can beat not only Kasparov in chess,
link |
but also Stockfish,
link |
which is the best sort of GOFAI chess program.
link |
By learning, and we're seeing more of that now,
link |
that shift beginning to happen in physics.
link |
So let me give you an example.
link |
So lattice QCD is an area of physics
link |
whose goal is basically to take the periodic table
link |
and just compute the whole thing from first principles.
link |
This is not the search for theory of everything.
link |
We already know the theory
link |
that's supposed to produce as output the periodic table,
link |
which atoms are stable, how heavy they are,
link |
all that good stuff, their spectral lines.
link |
The theory, QCD,
link |
you can put it on your t-shirt.
link |
Our colleague Frank Wilczek
link |
got the Nobel Prize for working on it.
link |
But the math is just too hard for us to solve.
link |
We have not been able to start with these equations
link |
and solve them to the extent that we can predict:
link |
oh yeah, there is carbon,
link |
and this is what the spectrum of the carbon atom looks like.
link |
But awesome people are building
link |
these supercomputer simulations
link |
where you just put in these equations
link |
and you make a big cubic lattice of space,
link |
or actually it's a very small lattice
link |
because you're going down to the subatomic scale,
link |
and you try to solve it.
link |
But it's just so computationally expensive
link |
that we still haven't been able to calculate things
link |
as accurately as we measure them in many cases.
link |
And now machine learning is really revolutionizing this.
link |
So my colleague Fiala Shanahan at MIT, for example,
link |
she's been using this really cool
link |
machine learning technique called normalizing flows,
link |
where she's realized she can actually speed up
link |
the calculation dramatically
link |
by having the AI learn how to do things faster.
link |
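For flavor, here is a minimal sketch of the normalizing-flow idea in Python with PyTorch: train an invertible network so that easy Gaussian samples get mapped into samples from exp(-S(x)) for a toy two-dimensional "action". It is an illustrative caricature, not Shanahan's actual lattice-QCD architecture.

```python
# A caricature of flow-based sampling for lattice field theory: learn an
# invertible map so Gaussian noise becomes samples from exp(-S(x)), here
# with a toy 2D double-well "action" S. Not the real lattice-QCD setup.
import torch
import torch.nn as nn

def action(x):
    # Toy action: a double well in each of the two coordinates.
    return ((x ** 2 - 1.0) ** 2).sum(dim=1)

class Coupling(nn.Module):
    """Affine coupling layer: rescale one coordinate conditioned on the other."""
    def __init__(self, flip: bool):
        super().__init__()
        self.flip = flip
        self.net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))

    def forward(self, x):
        a, b = (x[:, 1:], x[:, :1]) if self.flip else (x[:, :1], x[:, 1:])
        s, t = self.net(b).chunk(2, dim=1)  # scale and shift from the frozen half
        a = a * torch.exp(s) + t
        out = torch.cat([b, a] if self.flip else [a, b], dim=1)
        return out, s.sum(dim=1)            # log|det Jacobian| of this layer

layers = nn.ModuleList([Coupling(flip=(i % 2 == 1)) for i in range(4)])
opt = torch.optim.Adam(layers.parameters(), lr=1e-3)

for step in range(2000):
    z = torch.randn(256, 2)                 # cheap Gaussian proposals
    x, logdet = z, torch.zeros(256)
    for layer in layers:
        x, ld = layer(x)
        logdet = logdet + ld
    # Reverse KL (up to a constant): push model samples toward low action.
    log_q = -0.5 * (z ** 2).sum(dim=1) - logdet
    loss = (log_q + action(x)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    x = torch.randn(5, 2)
    for layer in layers:
        x, _ = layer(x)
    print(x)  # samples should be pulled toward the wells near +-1
```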
Another area like this
link |
where we suck up an enormous amount of supercomputer time
link |
to do physics is black hole collisions.
link |
So now that we've done the sexy stuff
link |
of detecting a bunch of them with LIGO and other experiments,
link |
we want to be able to know what we're seeing.
link |
And so it's a very simple conceptual problem.
link |
It's the two-body problem.
link |
Newton solved it for classical gravity hundreds of years ago,
link |
but for black holes in Einstein's gravity,
link |
the two-body problem is still not fully solved.
link |
They won't just orbit each other forever anymore;
link |
they give off gravitational waves
link |
that make sure they eventually crash into each other.
link |
And the game, what you want to do, is figure out,
link |
okay, what kind of wave comes out
link |
as a function of the masses of the two black holes,
link |
as a function of how they're spinning,
link |
relative to each other, et cetera.
link |
And that is so hard.
link |
It can take months of supercomputer time
link |
and massive numbers of cores to do it.
link |
Now, wouldn't it be great if you can use machine learning
link |
to greatly speed that up, right?
link |
Now you can use the expensive old GOFAI calculation
link |
as the truth, and then see if machine learning
link |
can figure out a smarter, faster way
link |
of getting the right answer.
link |
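Here's a hedged sketch in Python of that surrogate idea: treat a slow "truth" solver as the teacher and train a neural network to map black-hole parameters straight to the waveform. The chirp formula below is a toy stand-in for a real numerical-relativity code, not an actual waveform model.

```python
# Surrogate modeling sketch: generate (parameters, waveform) pairs from a
# slow "truth" solver, then fit a fast neural network to imitate it.
import numpy as np
from sklearn.neural_network import MLPRegressor

t = np.linspace(0.0, 1.0, 200)

def expensive_waveform(m1, m2):
    """Toy chirp whose frequency growth depends on the 'chirp mass'."""
    mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2
    return np.sin(2 * np.pi * (10 + 40 * t / mc) * t) * np.exp(-2 * t)

rng = np.random.default_rng(1)
params = rng.uniform(10.0, 50.0, size=(500, 2))   # masses in solar masses
waves = np.array([expensive_waveform(m1, m2) for m1, m2 in params])

model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=3000,
                     random_state=0).fit(params, waves)

test = np.array([[30.0, 25.0]])
pred = model.predict(test)[0]                     # milliseconds, not months
err = np.abs(pred - expensive_waveform(30.0, 25.0)).max()
print(f"max abs error of surrogate: {err:.3f}")
```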
Yet another area of computational physics.
link |
These are probably the big three
link |
that suck up the most computer time.
link |
Lattice QCD, black hole collisions,
link |
and cosmological simulations,
link |
where you take not a subatomic thing
link |
and try to figure out the mass of the proton,
link |
but you take something enormous
link |
and try to look at how all the galaxies get formed in there.
link |
There again, there are a lot of very cool ideas right now
link |
about how you can use machine learning
link |
to do this sort of stuff better.
link |
The difference between this and the big data
link |
is you kind of make the data yourself, right?
link |
So, and then finally,
link |
we're looking over the physics landscape
link |
and seeing what can we hammer with machine learning, right?
link |
So we talked about experimental data, big data,
link |
discovering cool stuff that we humans
link |
then look more closely at.
link |
Then we talked about taking the expensive computations
link |
we're doing now and figuring out
link |
how to do them much faster and better with AI.
link |
And finally, let's go really theoretical.
link |
So things like discovering equations,
link |
having deep fundamental insights,
link |
this is the thing closest to what I've been doing myself.
link |
We talked earlier about the whole AI Feynman project,
link |
where if you just have some data,
link |
how do you automatically discover equations
link |
that seem to describe this well,
link |
which you can then, as a human,
link |
go back and work with and test and explore.
link |
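A minimal brute-force sketch in Python of that symbolic-regression idea: enumerate small formulas over the variables and keep the simplest one that fits the data. The actual AI Feynman system adds neural-network-guided tricks like symmetry and separability detection; this is just the core search loop.

```python
# Brute-force symbolic regression: enumerate small candidate formulas,
# keep the one with the lowest error (ties broken by shorter formula).
import itertools
import numpy as np

rng = np.random.default_rng(2)
x1, x2 = rng.uniform(1, 5, 100), rng.uniform(1, 5, 100)
y = x1 * x2 / 2.0                      # hidden "law" we want to rediscover

leaves = {"x1": x1, "x2": x2, "2": np.full(100, 2.0)}
ops = {"+": np.add, "-": np.subtract, "*": np.multiply, "/": np.divide}

def candidates():
    # Depth-1 formulas (a op b) and depth-2 formulas ((a op1 b) op2 c).
    names = list(leaves)
    for a, b in itertools.product(names, repeat=2):
        for op in ops:
            yield f"({a} {op} {b})", ops[op](leaves[a], leaves[b])
    for a, b, c in itertools.product(names, repeat=3):
        for op1, op2 in itertools.product(ops, repeat=2):
            inner = ops[op1](leaves[a], leaves[b])
            yield f"(({a} {op1} {b}) {op2} {c})", ops[op2](inner, leaves[c])

best = min(candidates(), key=lambda kv: (np.abs(kv[1] - y).max(), len(kv[0])))
print("discovered:", best[0])          # e.g. ((x1 * x2) / 2)
```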
And you asked a really good question also
link |
about if this is sort of a search problem in some sense.
link |
That's very deep actually what you said, because it is.
link |
Suppose I ask you to prove some mathematical theorem.
link |
What is a proof in math?
link |
It's just a long string of steps, logical steps
link |
that you can write out with symbols.
link |
And once you find it, it's very easy to write a program
link |
to check whether it's a valid proof or not.
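A tiny Python sketch of that asymmetry between finding and checking: represent formulas as nested tuples, treat a proof as a list of lines, and verify that each line is either an axiom instance or follows from earlier lines by modus ponens. The single axiom schema here is an illustrative choice, not a full logical system.

```python
# Checking a proof is a linear scan, even though finding one may require
# an enormous search. Formulas are nested tuples: ("->", A, B) is "A implies B".

def is_axiom(f):
    # One Hilbert-style axiom schema, as an illustration: A -> (B -> A).
    return (isinstance(f, tuple) and f[0] == "->"
            and isinstance(f[2], tuple) and f[2][0] == "->"
            and f[2][2] == f[1])

def check_proof(lines):
    """Each line must be an axiom or follow by modus ponens from earlier lines."""
    proved = []
    for f in lines:
        by_modus_ponens = any(("->", q, f) in proved for q in proved)
        if not (is_axiom(f) or by_modus_ponens):
            return False
        proved.append(f)
    return True

print(check_proof([("->", "p", ("->", "q", "p"))]))  # True: an axiom instance
print(check_proof([("->", "p", "q")]))               # False: not derivable here
```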