
Max Tegmark: Life 3.0 | Lex Fridman Podcast #1



link |
00:00:00.000
As part of MIT course 6S099, Artificial General Intelligence,
link |
00:00:04.200
I've gotten the chance to sit down with Max Tegmark.
link |
00:00:06.600
He is a professor here at MIT.
link |
00:00:08.680
He's a physicist, spent a large part of his career
link |
00:00:11.920
studying the mysteries of our cosmological universe.
link |
00:00:16.960
But he's also studied and delved into the beneficial
link |
00:00:20.680
possibilities and the existential risks
link |
00:00:24.000
of artificial intelligence.
link |
00:00:25.800
Amongst many other things, he is the cofounder
link |
00:00:29.040
of the Future of Life Institute, author of two books,
link |
00:00:33.080
both of which I highly recommend.
link |
00:00:35.160
First, Our Mathematical Universe.
link |
00:00:37.260
Second is Life 3.0.
link |
00:00:40.160
He's truly an out of the box thinker and a fun personality,
link |
00:00:44.080
so I really enjoy talking to him.
link |
00:00:45.480
If you'd like to see more of these videos in the future,
link |
00:00:47.980
please subscribe and also click the little bell icon
link |
00:00:50.640
to make sure you don't miss any videos.
link |
00:00:52.720
Also, Twitter, LinkedIn, agi.mit.edu
link |
00:00:56.840
if you wanna watch other lectures
link |
00:00:59.600
or conversations like this one.
link |
00:01:01.080
Better yet, go read Max's book, Life 3.0.
link |
00:01:04.000
Chapter seven on goals is my favorite.
link |
00:01:07.940
It's really where philosophy and engineering come together
link |
00:01:10.480
and it opens with a quote by Dostoevsky.
link |
00:01:14.400
The mystery of human existence lies not in just staying alive
link |
00:01:17.940
but in finding something to live for.
link |
00:01:20.520
Lastly, I believe that every failure rewards us
link |
00:01:23.920
with an opportunity to learn
link |
00:01:26.560
and in that sense, I've been very fortunate
link |
00:01:28.360
to fail in so many new and exciting ways
link |
00:01:31.840
and this conversation was no different.
link |
00:01:34.020
I've learned about something called
link |
00:01:36.160
radio frequency interference, RFI, look it up.
link |
00:01:40.840
Apparently, music and conversations
link |
00:01:42.960
from local radio stations can bleed into the audio
link |
00:01:45.480
that you're recording in such a way
link |
00:01:47.080
that it almost completely ruins that audio.
link |
00:01:49.360
It's an exceptionally difficult sound source to remove.
link |
00:01:53.240
So, I've gotten the opportunity to learn
link |
00:01:55.520
how to avoid RFI in the future during recording sessions.
link |
00:02:00.200
I've also gotten the opportunity to learn
link |
00:02:02.680
how to use Adobe Audition and iZotope RX 6
link |
00:02:06.240
to do some noise, some audio repair.
link |
00:02:11.720
Of course, this is an exceptionally difficult noise
link |
00:02:14.380
to remove.
link |
00:02:15.220
I am an engineer.
link |
00:02:16.280
I'm not an audio engineer.
link |
00:02:18.240
Neither is anybody else in our group
link |
00:02:20.180
but we did our best.
link |
00:02:21.880
Nevertheless, I thank you for your patience
link |
00:02:25.040
and I hope you're still able to enjoy this conversation.
link |
00:02:27.960
Do you think there's intelligent life
link |
00:02:29.320
out there in the universe?
link |
00:02:31.360
Let's open up with an easy question.
link |
00:02:33.480
I have a minority view here actually.
link |
00:02:36.240
When I give public lectures, I often ask for a show of hands
link |
00:02:39.440
who thinks there's intelligent life out there somewhere else
link |
00:02:42.920
and almost everyone put their hands up
link |
00:02:45.440
and when I ask why, they'll be like,
link |
00:02:47.360
oh, there's so many galaxies out there, there's gotta be.
link |
00:02:51.840
But I'm a numbers nerd, right?
link |
00:02:54.560
So when you look more carefully at it,
link |
00:02:56.640
it's not so clear at all.
link |
00:02:59.080
When we talk about our universe, first of all,
link |
00:03:00.680
we don't mean all of space.
link |
00:03:03.040
We actually mean, I don't know,
link |
00:03:04.040
you can throw me the universe if you want,
link |
00:03:05.440
it's behind you there.
link |
00:03:07.280
It's, we simply mean the spherical region of space
link |
00:03:11.440
from which light has had time to reach us so far
link |
00:03:15.360
during the 14.8 billion year,
link |
00:03:17.040
13.8 billion years since our Big Bang.
link |
00:03:19.320
There's more space here but this is what we call a universe
link |
00:03:22.320
because that's all we have access to.
link |
00:03:24.040
So is there intelligent life here
link |
00:03:25.960
that's gotten to the point of building telescopes
link |
00:03:28.920
and computers?
link |
00:03:31.160
My guess is no, actually.
link |
00:03:34.540
The probability of it happening on any given planet
link |
00:03:39.240
is some number we don't know what it is.
link |
00:03:42.620
And what we do know is that the number can't be super high
link |
00:03:48.480
because there's over a billion Earth like planets
link |
00:03:50.300
in the Milky Way galaxy alone,
link |
00:03:52.880
many of which are billions of years older than Earth.
link |
00:03:56.280
And aside from some UFO believers,
link |
00:04:00.600
there isn't much evidence
link |
00:04:01.880
that any super advanced civilization has come here at all.
link |
00:04:05.600
And so that's the famous Fermi paradox, right?
link |
00:04:08.440
And then if you work the numbers,
link |
00:04:10.180
what you find is that if you have no clue
link |
00:04:13.440
what the probability is of getting life on a given planet,
link |
00:04:16.880
so it could be 10 to the minus 10, 10 to the minus 20,
link |
00:04:19.680
or 10 to the minus two, or any power of 10
link |
00:04:22.960
is sort of equally likely
link |
00:04:23.800
if you wanna be really open minded,
link |
00:04:25.480
that translates into it being equally likely
link |
00:04:27.600
that our nearest neighbor is 10 to the 16 meters away,
link |
00:04:31.800
10 to the 17 meters away, 10 to the 18.
link |
00:04:35.400
By the time you get much less than 10 to the 16 already,
link |
00:04:41.080
we pretty much know there is nothing else that close.
link |
00:04:45.960
And when you get beyond 10.
link |
00:04:47.280
Because they would have discovered us.
link |
00:04:48.680
Yeah, we would have been discovered long ago,
link |
00:04:50.360
or if they're really close,
link |
00:04:51.440
we would have probably noted some engineering projects
link |
00:04:53.560
that they're doing.
link |
00:04:54.640
And if it's beyond 10 to the 26 meters,
link |
00:04:57.880
that's already outside of here.
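To make this numbers argument concrete, here is a minimal sketch in Python (my illustration, not a calculation from the conversation; the exponent range and the Earth-like-planet density are loose assumptions): if every power of ten is treated as equally likely for the per-planet probability of spawning a technology-building civilization, the implied distance to our nearest such neighbor is also spread over many powers of ten, and a sizable share of that range lands beyond the roughly 10 to the 26 meter radius of our observable universe.

import numpy as np

# Minimal sketch, assuming a log-uniform prior over the per-planet probability p
# that a technological civilization arises. The range of exponents and the
# planet density below are illustrative assumptions, not measured values.
rng = np.random.default_rng(0)
log10_p = rng.uniform(-50, -2, size=200_000)    # "any power of 10 equally likely"
p = 10.0 ** log10_p

ly = 9.46e15                                    # meters per light-year
n_planets = 1.0 / (1e3 * ly**3)                 # assumed: ~1 Earth-like planet per 1000 cubic light-years

# If civilizations are sprinkled randomly with density n_planets * p,
# the typical distance to the nearest one scales like (n_planets * p)**(-1/3).
d_nearest = (3.0 / (4.0 * np.pi * n_planets * p)) ** (1.0 / 3.0)

horizon = 4.4e26                                # rough radius of the observable universe, meters
print(f"median nearest-neighbor distance: {np.median(d_nearest):.1e} m")
print(f"fraction of the prior putting the nearest civilization beyond our horizon: "
      f"{np.mean(d_nearest > horizon):.2f}")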
link |
00:05:00.000
So my guess is actually that we are the only life in here
link |
00:05:05.800
that's gotten to the point of building advanced tech,
link |
00:05:09.040
which I think is very,
link |
00:05:12.680
puts a lot of responsibility on our shoulders, not to screw up.
link |
00:05:15.360
I think people who take for granted
link |
00:05:17.240
that it's okay for us to screw up,
link |
00:05:20.120
have an accidental nuclear war or go extinct somehow
link |
00:05:22.760
because there's a sort of Star Trek like situation out there
link |
00:05:25.960
where some other life forms are gonna come and bail us out
link |
00:05:28.360
and it doesn't matter as much.
link |
00:05:30.400
I think they're lulling us into a false sense of security.
link |
00:05:33.400
I think it's much more prudent to say,
link |
00:05:35.200
let's be really grateful
link |
00:05:36.400
for this amazing opportunity we've had
link |
00:05:38.720
and make the best of it just in case it is down to us.
link |
00:05:44.080
So from a physics perspective,
link |
00:05:45.680
do you think intelligent life,
link |
00:05:48.800
so it's unique from a sort of statistical view
link |
00:05:51.360
of the size of the universe,
link |
00:05:52.560
but from the basic matter of the universe,
link |
00:05:55.840
how difficult is it for intelligent life to come about?
link |
00:05:59.040
The kind of advanced tech building life
link |
00:06:03.120
is implied in your statement that it's really difficult
link |
00:06:05.720
to create something like a human species.
link |
00:06:07.640
Well, I think what we know is that going from no life
link |
00:06:11.560
to having life that can do a level of tech,
link |
00:06:15.720
or going beyond that
link |
00:06:18.720
and actually settling our whole universe with life,
link |
00:06:22.200
There's some major roadblock there,
link |
00:06:26.560
which is some great filter as it's sometimes called,
link |
00:06:30.880
which is tough to get through.
link |
00:06:33.520
That roadblock is either behind us
link |
00:06:37.160
or in front of us.
link |
00:06:38.720
I'm hoping very much that it's behind us.
link |
00:06:41.080
I'm super excited every time we get a new report from NASA
link |
00:06:45.960
saying they failed to find any life on Mars.
link |
00:06:48.480
I'm like, yes, awesome.
link |
00:06:50.080
Because that suggests that the hard part,
link |
00:06:51.680
maybe it was getting the first ribosome
link |
00:06:54.240
or some very low level kind of stepping stone
link |
00:06:59.520
so that we're home free.
link |
00:07:00.400
Because if that's true,
link |
00:07:01.720
then the future is really only limited
link |
00:07:03.640
by our own imagination.
link |
00:07:05.200
It would be much suckier if it turns out
link |
00:07:07.360
that this level of life is kind of a dime a dozen,
link |
00:07:11.440
but maybe there's some other problem.
link |
00:07:12.760
Like as soon as a civilization gets advanced technology,
link |
00:07:16.160
within a hundred years,
link |
00:07:17.000
they get into some stupid fight with themselves and poof.
link |
00:07:20.320
That would be a bummer.
link |
00:07:21.760
Yeah, so you've explored the mysteries of the universe,
link |
00:07:26.160
the cosmological universe, the one that's sitting
link |
00:07:29.000
between us today.
link |
00:07:31.080
I think you've also begun to explore the other universe,
link |
00:07:35.960
which is sort of the mystery,
link |
00:07:38.000
the mysterious universe of the mind of intelligence,
link |
00:07:40.960
of intelligent life.
link |
00:07:42.840
So is there a common thread between your interest
link |
00:07:45.280
or the way you think about space and intelligence?
link |
00:07:48.760
Oh yeah, when I was a teenager,
link |
00:07:53.040
I was already very fascinated by the biggest questions.
link |
00:07:57.280
And I felt that the two biggest mysteries of all in science
link |
00:08:00.560
were our universe out there and our universe in here.
link |
00:08:05.000
So it's quite natural after having spent
link |
00:08:08.120
a quarter of a century on my career,
link |
00:08:11.040
thinking a lot about this one,
link |
00:08:12.680
that I'm now indulging in the luxury
link |
00:08:14.320
of doing research on this one.
link |
00:08:15.960
It's just so cool.
link |
00:08:17.720
I feel the time is ripe now
link |
00:08:20.120
for greatly deepening our understanding of this.
link |
00:08:25.120
Just start exploring this one.
link |
00:08:26.640
Yeah, because I think a lot of people view intelligence
link |
00:08:29.560
as something mysterious that can only exist
link |
00:08:33.520
in biological organisms like us,
link |
00:08:36.120
and therefore dismiss all talk
link |
00:08:37.680
about artificial general intelligence as science fiction.
link |
00:08:41.160
But from my perspective as a physicist,
link |
00:08:43.200
I am a blob of quarks and electrons
link |
00:08:46.680
moving around in a certain pattern
link |
00:08:48.360
and processing information in certain ways.
link |
00:08:50.080
And this is also a blob of quarks and electrons.
link |
00:08:53.600
I'm not smarter than the water bottle
link |
00:08:55.360
because I'm made of different kinds of quarks.
link |
00:08:57.880
I'm made of up quarks and down quarks,
link |
00:08:59.640
exact same kind as this.
link |
00:09:01.400
There's no secret sauce, I think, in me.
link |
00:09:05.080
It's all about the pattern of the information processing.
link |
00:09:08.560
And this means that there's no law of physics
link |
00:09:12.240
saying that we can't create technology,
link |
00:09:15.600
which can help us by being incredibly intelligent
link |
00:09:19.960
and help us crack mysteries that we couldn't.
link |
00:09:21.680
In other words, I think we've really only seen
link |
00:09:23.560
the tip of the intelligence iceberg so far.
link |
00:09:26.480
Yeah, so the perceptronium.
link |
00:09:29.960
Yeah.
link |
00:09:31.280
So you coined this amazing term.
link |
00:09:33.200
It's a hypothetical state of matter,
link |
00:09:35.760
sort of thinking from a physics perspective,
link |
00:09:38.360
what is the kind of matter that can help,
link |
00:09:40.080
as you're saying, subjective experience emerge,
link |
00:09:42.920
consciousness emerge.
link |
00:09:44.280
So how do you think about consciousness
link |
00:09:46.640
from this physics perspective?
link |
00:09:49.960
Very good question.
link |
00:09:50.800
So again, I think many people have underestimated
link |
00:09:55.800
our ability to make progress on this
link |
00:09:59.120
by convincing themselves it's hopeless
link |
00:10:01.320
because somehow we're missing some ingredient that we need.
link |
00:10:05.840
There's some new consciousness particle or whatever.
link |
00:10:09.560
I happen to think that we're not missing anything
link |
00:10:12.720
and that it's not the interesting thing
link |
00:10:16.320
about consciousness that gives us
link |
00:10:18.560
this amazing subjective experience of colors
link |
00:10:21.400
and sounds and emotions.
link |
00:10:23.320
It's rather something at the higher level
link |
00:10:26.320
about the patterns of information processing.
link |
00:10:28.800
And that's why I like to think about this idea
link |
00:10:33.160
of perceptronium.
link |
00:10:34.480
What does it mean for an arbitrary physical system
link |
00:10:36.920
to be conscious in terms of what its particles are doing
link |
00:10:41.920
or its information is doing?
link |
00:10:43.560
I don't think, I hate carbon chauvinism,
link |
00:10:46.080
this attitude you have to be made of carbon atoms
link |
00:10:47.960
to be smart or conscious.
link |
00:10:50.160
There's something about the information processing
link |
00:10:53.520
that this kind of matter performs.
link |
00:10:55.360
Yeah, and you can see I have my favorite equations here
link |
00:10:57.840
describing various fundamental aspects of the world.
link |
00:11:00.720
I feel that I think one day,
link |
00:11:02.560
maybe someone who's watching this will come up
link |
00:11:04.360
with the equations that information processing
link |
00:11:07.280
has to satisfy to be conscious.
link |
00:11:08.760
I'm quite convinced there is a big discovery
link |
00:11:11.800
to be made there because let's face it,
link |
00:11:15.400
we know that so many things are made up of information.
link |
00:11:18.720
We know that some information processing is conscious
link |
00:11:21.960
because we are conscious.
link |
00:11:25.520
But we also know that a lot of information processing
link |
00:11:27.600
is not conscious.
link |
00:11:28.440
Like most of the information processing happening
link |
00:11:30.040
in your brain right now is not conscious.
link |
00:11:32.680
There are like 10 megabytes per second coming in
link |
00:11:36.040
even just through your visual system.
link |
00:11:38.080
You're not conscious about your heartbeat regulation
link |
00:11:40.480
or most things.
link |
00:11:42.120
Even if I just ask you to like read what it says here,
link |
00:11:45.680
you look at it and then, oh, now you know what it said.
link |
00:11:48.040
But you're not aware of how the computation actually happened.
link |
00:11:51.560
Your consciousness is like the CEO
link |
00:11:53.680
that got an email at the end with the final answer.
link |
00:11:56.680
So what is it that makes a difference?
link |
00:12:01.000
I think that's both a great science mystery.
link |
00:12:05.120
We're actually studying it a little bit in my lab here
link |
00:12:07.080
at MIT, but I also think it's just a really urgent question
link |
00:12:10.920
to answer.
link |
00:12:12.080
For starters, I mean, if you're an emergency room doctor
link |
00:12:14.880
and you have an unresponsive patient coming in,
link |
00:12:17.160
wouldn't it be great if in addition to having
link |
00:12:22.360
a CT scanner, you had a consciousness scanner
link |
00:12:25.320
that could figure out whether this person
link |
00:12:27.920
is actually having locked in syndrome
link |
00:12:30.960
or is actually comatose.
link |
00:12:33.360
And in the future, imagine if we build robots
link |
00:12:37.000
or machines that we can have really good conversations
link |
00:12:41.480
with, which I think is very likely to happen.
link |
00:12:44.840
Wouldn't you want to know if your home helper robot
link |
00:12:47.760
is actually experiencing anything or just like a zombie,
link |
00:12:51.320
I mean, would you prefer it?
link |
00:12:53.520
What would you prefer?
link |
00:12:54.360
Would you prefer that it's actually unconscious
link |
00:12:56.200
so that you don't have to feel guilty about switching it off
link |
00:12:58.560
or giving it boring chores, or what would you prefer?
link |
00:13:02.120
Well, certainly we would prefer,
link |
00:13:06.520
I would prefer the appearance of consciousness.
link |
00:13:08.960
But the question is whether the appearance of consciousness
link |
00:13:11.720
is different than consciousness itself.
link |
00:13:15.040
And sort of to ask that as a question,
link |
00:13:18.200
do you think we need to understand what consciousness is,
link |
00:13:21.760
solve the hard problem of consciousness
link |
00:13:23.520
in order to build something like an AGI system?
link |
00:13:28.240
No, I don't think that.
link |
00:13:30.440
And I think we will probably be able to build things
link |
00:13:34.520
even if we don't answer that question.
link |
00:13:36.080
But if we want to make sure that what happens
link |
00:13:37.720
is a good thing, we better solve it first.
link |
00:13:40.960
So it's a wonderful controversy you're raising there
link |
00:13:44.960
where you have basically three points of view
link |
00:13:47.960
about the hard problem.
link |
00:13:48.800
So there are two different points of view.
link |
00:13:52.800
They both conclude that the hard problem of consciousness
link |
00:13:55.160
is BS.
link |
00:13:56.840
On one hand, you have some people like Daniel Dennett
link |
00:13:59.320
who say that consciousness is just BS
link |
00:14:01.480
because consciousness is the same thing as intelligence.
link |
00:14:05.000
There's no difference.
link |
00:14:06.440
So anything which acts conscious is conscious,
link |
00:14:11.080
just like we are.
link |
00:14:13.480
And then there are also a lot of people,
link |
00:14:15.960
including many top AI researchers I know,
link |
00:14:18.400
who say, oh, consciousness is just bullshit
link |
00:14:19.920
because, of course, machines can never be conscious.
link |
00:14:22.760
They're always going to be zombies.
link |
00:14:24.520
You never have to feel guilty about how you treat them.
link |
00:14:27.880
And then there's a third group of people,
link |
00:14:30.880
including Giulio Tononi, for example,
link |
00:14:34.920
and Christof Koch and a number of others.
link |
00:14:37.440
I would put myself also in this middle camp
link |
00:14:39.520
who say that actually some information processing
link |
00:14:41.880
is conscious and some is not.
link |
00:14:44.160
So let's find the equation which can be used
link |
00:14:46.960
to determine which it is.
link |
00:14:49.080
And I think we've just been a little bit lazy,
link |
00:14:52.040
kind of running away from this problem for a long time.
link |
00:14:54.960
It's been almost taboo to even mention the C word
link |
00:14:57.840
in a lot of circles,
link |
00:15:00.520
but we should stop making excuses.
link |
00:15:03.520
This is a science question and there are ways
link |
00:15:07.920
we can even test any theory that makes predictions for this.
link |
00:15:11.960
And coming back to this helper robot,
link |
00:15:13.640
I mean, so you said you'd want your helper robot
link |
00:15:16.080
to certainly act conscious and treat you,
link |
00:15:18.160
like have conversations with you and stuff.
link |
00:15:20.880
I think so.
link |
00:15:21.720
But wouldn't you, would you feel,
link |
00:15:22.560
would you feel a little bit creeped out
link |
00:15:23.920
if you realized that it was just a glossed up tape recorder,
link |
00:15:27.680
you know, that was just a zombie and was faking emotion?
link |
00:15:31.560
Would you prefer that it actually had an experience
link |
00:15:34.560
or would you prefer that it's actually
link |
00:15:37.000
not experiencing anything so you feel,
link |
00:15:39.120
you don't have to feel guilty about what you do to it?
link |
00:15:42.200
It's such a difficult question because, you know,
link |
00:15:45.040
it's like when you're in a relationship and you say,
link |
00:15:47.280
well, I love you.
link |
00:15:48.120
And the other person said, I love you back.
link |
00:15:49.760
It's like asking, well, do they really love you back
link |
00:15:52.640
or are they just saying they love you back?
link |
00:15:55.360
Don't you really want them to actually love you?
link |
00:15:58.120
It's hard to, it's hard to really know the difference
link |
00:16:03.520
between everything seeming like there's consciousness
link |
00:16:09.000
present, there's intelligence present,
link |
00:16:10.640
there's affection, passion, love,
link |
00:16:13.840
and it actually being there.
link |
00:16:16.200
I'm not sure, do you have?
link |
00:16:17.720
But like, can I ask you a question about this?
link |
00:16:19.400
Like to make it a bit more pointed.
link |
00:16:20.760
So Mass General Hospital is right across the river, right?
link |
00:16:22.920
Yes.
link |
00:16:23.760
Suppose you're going in for a medical procedure
link |
00:16:26.720
and they're like, you know, for anesthesia,
link |
00:16:29.320
what we're going to do is we're going to give you
link |
00:16:31.000
muscle relaxants so you won't be able to move
link |
00:16:33.160
and you're going to feel excruciating pain
link |
00:16:35.040
during the whole surgery,
link |
00:16:35.880
but you won't be able to do anything about it.
link |
00:16:37.600
But then we're going to give you this drug
link |
00:16:39.200
that erases your memory of it.
link |
00:16:41.960
Would you be cool about that?
link |
00:16:44.960
What's the difference that you're conscious about it
link |
00:16:48.600
or not if there's no behavioral change, right?
link |
00:16:51.640
Right, that's a really, that's a really clear way to put it.
link |
00:16:54.520
That's, yeah, it feels like in that sense,
link |
00:16:57.400
experiencing it is a valuable quality.
link |
00:17:01.080
So actually being able to have subjective experiences,
link |
00:17:05.840
at least in that case, is valuable.
link |
00:17:09.120
And I think we humans have a little bit
link |
00:17:11.240
of a bad track record also of making
link |
00:17:13.600
these self serving arguments
link |
00:17:15.480
that other entities aren't conscious.
link |
00:17:18.040
You know, people often say,
link |
00:17:19.160
oh, these animals can't feel pain.
link |
00:17:21.800
It's okay to boil lobsters because we ask them
link |
00:17:24.040
if it hurt and they didn't say anything.
link |
00:17:25.960
And now there was just a paper out saying,
link |
00:17:27.400
lobsters do feel pain when you boil them
link |
00:17:29.320
and they're banning it in Switzerland.
link |
00:17:31.040
And we did this with slaves too often and said,
link |
00:17:33.560
oh, they don't mind.
link |
00:17:36.240
They maybe aren't conscious
link |
00:17:39.480
or women don't have souls or whatever.
link |
00:17:41.160
So I'm a little bit nervous when I hear people
link |
00:17:43.200
just take as an axiom that machines
link |
00:17:46.360
can't have experience ever.
link |
00:17:48.960
I think this is just a really fascinating science question
link |
00:17:51.560
is what it is.
link |
00:17:52.400
Let's research it and try to figure out
link |
00:17:54.720
what it is that makes the difference
link |
00:17:56.000
between unconscious intelligent behavior
link |
00:17:58.880
and conscious intelligent behavior.
link |
00:18:01.120
So in terms of, so if you think of a Boston Dynamics
link |
00:18:04.680
humanoid robot being sort of with a broom
link |
00:18:07.680
being pushed around, it starts pushing
link |
00:18:11.920
on a consciousness question.
link |
00:18:13.320
So let me ask, do you think an AGI system
link |
00:18:17.040
like a few neuroscientists believe
link |
00:18:19.720
needs to have a physical embodiment?
link |
00:18:22.320
Needs to have a body or something like a body?
link |
00:18:25.720
No, I don't think so.
link |
00:18:28.280
You mean to have a conscious experience?
link |
00:18:30.560
To have consciousness.
link |
00:18:33.160
I do think it helps a lot to have a physical embodiment
link |
00:18:36.080
to learn the kind of things about the world
link |
00:18:38.440
that are important to us humans, for sure.
link |
00:18:42.560
But I don't think the physical embodiment
link |
00:18:45.600
is necessary after you've learned it
link |
00:18:47.120
to just have the experience.
link |
00:18:48.760
Think about when you're dreaming, right?
link |
00:18:51.400
Your eyes are closed.
link |
00:18:52.600
You're not getting any sensory input.
link |
00:18:54.240
You're not behaving or moving in any way
link |
00:18:55.960
but there's still an experience there, right?
link |
00:18:59.720
And so clearly the experience that you have
link |
00:19:01.400
when you see something cool in your dreams
link |
00:19:03.320
isn't coming from your eyes.
link |
00:19:04.800
It's just the information processing itself in your brain
link |
00:19:08.640
which is that experience, right?
link |
00:19:10.920
But if I put it another way, I'll say
link |
00:19:13.640
because it comes from neuroscience
link |
00:19:15.120
is the reason you want to have a body and a physical
link |
00:19:18.280
something like a physical, you know, a physical system
link |
00:19:23.920
is because you want to be able to preserve something.
link |
00:19:27.040
In order to have a self, you could argue,
link |
00:19:30.840
would you need to have some kind of embodiment of self
link |
00:19:36.400
to want to preserve?
link |
00:19:38.920
Well, now we're getting a little bit anthropomorphic
link |
00:19:42.400
into anthropomorphizing things.
link |
00:19:45.200
Maybe talking about self preservation instincts.
link |
00:19:47.280
I mean, we are evolved organisms, right?
link |
00:19:50.560
So Darwinian evolution endowed us
link |
00:19:53.520
and other evolved organisms with a self preservation instinct
link |
00:19:57.120
because those that didn't have those self preservation genes
link |
00:20:00.560
got cleaned out of the gene pool, right?
link |
00:20:02.960
But if you build an artificial general intelligence
link |
00:20:06.880
the mind space that you can design is much, much larger
link |
00:20:10.040
than just a specific subset of minds that can evolve.
link |
00:20:14.440
So an AGI mind doesn't necessarily have
link |
00:20:17.280
to have any self preservation instinct.
link |
00:20:19.880
It also doesn't necessarily have to be
link |
00:20:21.600
so individualistic as us.
link |
00:20:24.040
Like, imagine if you could just, first of all,
link |
00:20:26.080
or we are also very afraid of death.
link |
00:20:27.960
You know, I suppose you could back yourself up
link |
00:20:29.920
every five minutes and then your airplane
link |
00:20:32.000
is about to crash.
link |
00:20:32.840
You're like, shucks, I'm gonna lose the last five minutes
link |
00:20:36.680
of experiences since my last cloud backup, dang.
link |
00:20:39.520
You know, it's not as big a deal.
link |
00:20:41.520
Or if we could just copy experiences between our minds
link |
00:20:45.680
easily, which we could easily do
link |
00:20:47.640
if we were silicon based, right?
link |
00:20:50.360
Then maybe we would feel a little bit more
link |
00:20:54.040
like a hive mind actually, that maybe it's the,
link |
00:20:56.560
so I don't think we should take for granted at all
link |
00:20:59.960
that AGI will have to have any of those sort of
link |
00:21:04.880
competitive alpha male instincts.
link |
00:21:07.360
On the other hand, you know, this is really interesting
link |
00:21:10.160
because I think some people go too far and say,
link |
00:21:13.840
of course we don't have to have any concerns either
link |
00:21:16.680
that advanced AI will have those instincts
link |
00:21:20.800
because we can build anything we want.
link |
00:21:22.680
That there's a very nice set of arguments going back
link |
00:21:26.280
to Steve Omohundro and Nick Bostrom and others
link |
00:21:28.560
just pointing out that when we build machines,
link |
00:21:32.280
we normally build them with some kind of goal, you know,
link |
00:21:34.680
win this chess game, drive this car safely or whatever.
link |
00:21:38.520
And as soon as you put a goal into a machine,
link |
00:21:40.960
especially if it's kind of open ended goal
link |
00:21:42.760
and the machine is very intelligent,
link |
00:21:44.640
it'll break that down into a bunch of sub goals.
link |
00:21:48.280
And one of those goals will almost always
link |
00:21:51.280
be self preservation because if it breaks or dies
link |
00:21:54.200
in the process, it's not gonna accomplish the goal, right?
link |
00:21:56.120
Like suppose you just build a little,
link |
00:21:58.040
you have a little robot and you tell it to go down
link |
00:22:01.000
to the Star Market here and get you some food,
link |
00:22:04.040
and cook you an Italian dinner, you know,
link |
00:22:06.200
and then someone mugs it and tries to break it
link |
00:22:08.400
on the way.
link |
00:22:09.480
That robot has an incentive to not get destroyed
link |
00:22:12.920
and defend itself or run away,
link |
00:22:14.720
because otherwise it's gonna fail in cooking your dinner.
link |
00:22:17.720
It's not afraid of death,
link |
00:22:19.560
but it really wants to complete the dinner cooking goal.
link |
00:22:22.960
So it will have a self preservation instinct.
link |
00:22:25.040
Continue being a functional agent somehow.
link |
00:22:27.920
And similarly, if you give any kind of more ambitious goal
link |
00:22:33.720
to an AGI, it's very likely it will want to acquire
link |
00:22:37.000
more resources so it can do that better.
link |
00:22:39.840
And it's exactly from those sort of sub goals
link |
00:22:42.720
that we might not have intended
link |
00:22:43.800
that some of the concerns about AGI safety come.
link |
00:22:47.160
You give it some goal that seems completely harmless.
link |
00:22:50.600
And then before you realize it,
link |
00:22:53.360
it's also trying to do these other things
link |
00:22:55.480
which you didn't want it to do.
link |
00:22:56.920
And it's maybe smarter than us.
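As a toy illustration of that Omohundro and Bostrom point (a sketch of my own, with made-up numbers, not anything from their papers or from this conversation): an agent scored purely on the probability of finishing its assigned task, with no concept of death anywhere in the code, still ends up ranking self-preserving actions at the top, because a destroyed robot completes nothing.

# Toy sketch: the robot's only objective is "dinner gets cooked".
# All probabilities below are invented for illustration.
actions = {
    # action: (P(robot survives the mugging), P(dinner cooked | robot survives))
    "ignore the mugger": (0.10, 0.95),
    "run away":          (0.90, 0.80),
    "defend itself":     (0.70, 0.90),
}

def p_goal(p_survive, p_cook_if_intact):
    # A broken robot cooks nothing, so survival enters every action's score.
    return p_survive * p_cook_if_intact

for action, probs in sorted(actions.items(), key=lambda kv: p_goal(*kv[1]), reverse=True):
    print(f"{action:20s} P(dinner cooked) = {p_goal(*probs):.2f}")
# Self-preservation emerges here purely as an instrumental subgoal of cooking dinner.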
link |
00:22:59.160
So it's fascinating.
link |
00:23:01.000
And let me pause, just because I, in a very kind
link |
00:23:05.680
of human centric way, see fear of death
link |
00:23:08.720
as a valuable motivator.
link |
00:23:11.840
So you don't think, you think that's an artifact
link |
00:23:16.440
of evolution, so that's the kind of mind space
link |
00:23:19.120
evolution created that we're sort of almost obsessed
link |
00:23:22.120
about self preservation, some kind of genetic flow.
link |
00:23:24.400
You don't think that's necessary to be afraid of death.
link |
00:23:29.480
So not just a kind of sub goal of self preservation
link |
00:23:32.920
just so you can keep doing the thing,
link |
00:23:34.920
but more fundamentally sort of have the finite thing
link |
00:23:38.720
like this ends for you at some point.
link |
00:23:43.080
Interesting.
link |
00:23:44.160
Do I think it's necessary for what precisely?
link |
00:23:47.440
For intelligence, but also for consciousness.
link |
00:23:50.920
So for those, for both, do you think really
link |
00:23:55.040
like a finite death and the fear of it is important?
link |
00:23:59.120
So before I can answer, before we can agree
link |
00:24:05.160
on whether it's necessary for intelligence
link |
00:24:06.960
or for consciousness, we should be clear
link |
00:24:08.360
on how we define those two words.
link |
00:24:09.800
Cause a lot of really smart people define them
link |
00:24:11.960
in very different ways.
link |
00:24:13.320
I was on this panel with AI experts
link |
00:24:17.080
and they couldn't agree on how to define intelligence even.
link |
00:24:20.080
So I define intelligence simply
link |
00:24:22.000
as the ability to accomplish complex goals.
link |
00:24:25.640
I like your broad definition, because again
link |
00:24:27.280
I don't want to be a carbon chauvinist.
link |
00:24:29.040
Right.
link |
00:24:30.400
And in that case, no, certainly
link |
00:24:34.600
it doesn't require fear of death.
link |
00:24:36.480
I would say AlphaGo, AlphaZero is quite intelligent.
link |
00:24:40.120
I don't think AlphaZero has any fear of being turned off
link |
00:24:43.080
because it doesn't understand the concept of it even.
link |
00:24:46.320
And similarly consciousness.
link |
00:24:48.440
I mean, you could certainly imagine very simple
link |
00:24:52.240
kind of experience.
link |
00:24:53.920
If certain plants have any kind of experience
link |
00:24:57.200
I don't think they're very afraid of dying
link |
00:24:58.560
or there's nothing they can do about it anyway much.
link |
00:25:00.920
So there wasn't that much value in, but more seriously
link |
00:25:04.560
I think if you ask, not just about being conscious
link |
00:25:09.200
but maybe having what you would, we might call
link |
00:25:14.320
an exciting life where you feel passion
link |
00:25:16.400
and really appreciate the things.
link |
00:25:21.480
Maybe there somehow, maybe there perhaps it does help
link |
00:25:24.440
having a backdrop that, Hey, it's finite.
link |
00:25:27.880
No, let's make the most of this, let's live to the fullest.
link |
00:25:31.200
So if you knew you were going to live forever
link |
00:25:34.880
do you think you would change your?
link |
00:25:37.400
Yeah, I mean, in some perspective
link |
00:25:39.560
it would be an incredibly boring life living forever.
link |
00:25:43.960
So in the sort of loose subjective terms that you said
link |
00:25:47.360
of something exciting and something in this
link |
00:25:50.480
that other humans would understand, I think is, yeah
link |
00:25:53.240
it seems that the finiteness of it is important.
link |
00:25:57.120
Well, the good news I have for you then is
link |
00:25:59.560
based on what we understand about cosmology
link |
00:26:02.120
everything in our universe is probably
link |
00:26:05.120
ultimately probably finite, although.
link |
00:26:07.960
Big crunch or big, what's the, the infinite expansion.
link |
00:26:11.560
Yeah, we could have a big chill or a big crunch
link |
00:26:13.840
or a big rip or that's the big snap or death bubbles.
link |
00:26:18.440
All of them are more than a billion years away.
link |
00:26:20.040
So we should, we certainly have vastly more time
link |
00:26:24.600
than our ancestors thought, but there is still
link |
00:26:29.160
it's still pretty hard to squeeze in an infinite number
link |
00:26:32.360
of compute cycles, even though there are some loopholes
link |
00:26:36.560
that just might be possible.
link |
00:26:37.720
But I think, you know, some people like to say
link |
00:26:41.960
that you should live as if
link |
00:26:44.760
you're going to die in five years or so.
link |
00:26:46.720
And that's sort of optimal.
link |
00:26:47.960
Maybe it's a good assumption.
link |
00:26:50.560
We should build our civilization as if it's all finite
link |
00:26:54.680
to be on the safe side.
link |
00:26:55.680
Right, exactly.
link |
00:26:56.960
So you mentioned defining intelligence
link |
00:26:59.720
as the ability to solve complex goals.
link |
00:27:02.960
Where would you draw a line or how would you try
link |
00:27:05.440
to define human level intelligence
link |
00:27:08.200
and superhuman level intelligence?
link |
00:27:10.680
Where is consciousness part of that definition?
link |
00:27:13.280
No, consciousness does not come into this definition.
link |
00:27:16.640
So, so I think of intelligence as it's a spectrum
link |
00:27:20.280
but there are very many different kinds of goals
link |
00:27:21.960
you can have.
link |
00:27:22.800
You can have a goal to be a good chess player
link |
00:27:24.000
a good Go player, a good car driver, a good investor
link |
00:27:28.520
good poet, et cetera.
link |
00:27:31.160
So intelligence that by its very nature
link |
00:27:34.320
isn't something you can measure by this one number
link |
00:27:36.680
or some overall goodness.
link |
00:27:37.960
No, no.
link |
00:27:38.800
There are some people who are better at this.
link |
00:27:40.320
Some people are better at that.
link |
00:27:42.360
Right now we have machines that are much better than us
link |
00:27:45.440
at some very narrow tasks like multiplying large numbers
link |
00:27:49.040
fast, memorizing large databases, playing chess
link |
00:27:53.200
playing Go and soon driving cars.
link |
00:27:57.480
But there's still no machine that can match
link |
00:28:00.080
a human child in general intelligence
link |
00:28:02.720
but artificial general intelligence, AGI
link |
00:28:05.720
the name of your course, of course
link |
00:28:07.880
that is by its very definition, the quest
link |
00:28:13.400
to build a machine that can do everything
link |
00:28:16.000
as well as we can.
link |
00:28:17.800
So the old Holy grail of AI from back to its inception
link |
00:28:21.960
in the sixties, if that ever happens, of course
link |
00:28:25.560
I think it's going to be the biggest transition
link |
00:28:27.320
in the history of life on earth
link |
00:28:29.040
but the big impact doesn't necessarily have to wait
link |
00:28:33.200
until machines are better than us at knitting
link |
00:28:35.400
The really big change doesn't come exactly
link |
00:28:39.160
at the moment they're better than us at everything.
link |
00:28:41.800
The really big change comes first
link |
00:28:44.120
there are big changes when they start becoming better
link |
00:28:45.840
than us at doing most of the jobs that we do
link |
00:28:48.800
because that takes away much of the demand
link |
00:28:51.160
for human labor.
link |
00:28:53.200
And then the really whopping change comes
link |
00:28:55.640
when they become better than us at AI research, right?
link |
00:29:01.040
Because right now the timescale of AI research
link |
00:29:03.760
is limited by the human research and development cycle
link |
00:29:08.400
of years typically, you know
link |
00:29:10.160
how long does it take from one release of some software
link |
00:29:13.480
or iPhone or whatever to the next?
link |
00:29:15.720
But once Google can replace 40,000 engineers
link |
00:29:20.920
by 40,000 equivalent pieces of software or whatever
link |
00:29:26.400
but then there's no reason that has to be years
link |
00:29:29.680
it can be in principle much faster
link |
00:29:31.840
and the timescale of future progress in AI
link |
00:29:36.040
and all of science and technology will be driven
link |
00:29:39.320
by machines, not humans.
link |
00:29:40.960
So it's this simple point which gives rise to
link |
00:29:46.520
this incredibly fun controversy
link |
00:29:48.720
about whether there can be an intelligence explosion,
link |
00:29:51.880
so called singularity, as Vernor Vinge called it.
link |
00:29:54.400
Now the idea was articulated by I.J. Good
link |
00:29:57.040
obviously way back in the fifties,
link |
00:29:59.480
but you can see Alan Turing
link |
00:30:01.040
and others thought about it even earlier.
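One toy way to see why handing the R&D loop to machines changes the timescale so dramatically (my illustration, not a model Max states here): suppose generation k of the system takes time T_0 r^k to design generation k+1, each cycle a constant factor r < 1 faster than the last because the designers themselves keep improving. Then the time for unboundedly many generations converges:

T_{\mathrm{total}} = \sum_{k=0}^{\infty} T_0 \, r^k = \frac{T_0}{1 - r}, \qquad 0 < r < 1.

With T_0 = 2 years and r = 1/2, the whole infinite chain fits inside 4 years; with human researchers in the loop, r stays close to 1 and no such compression happens. Both numbers are just placeholders for the shape of the argument.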
link |
00:30:06.920
So you asked me how exactly I would define
link |
00:30:10.080
human level intelligence, yeah.
link |
00:30:12.800
So the glib answer is to say something
link |
00:30:15.680
which is better than us at all cognitive tasks
link |
00:30:18.520
better than any human at all cognitive tasks,
link |
00:30:21.800
but the really interesting bar
link |
00:30:23.080
I think goes a little bit lower than that actually.
link |
00:30:25.760
It's when they can, when they're better than us
link |
00:30:27.920
at AI programming and general learning
link |
00:30:31.760
so that they can if they want to get better
link |
00:30:35.360
than us at anything by just studying.
link |
00:30:37.240
So there, better is a key word, and better is towards
link |
00:30:40.560
this kind of spectrum of the complexity of goals
link |
00:30:44.120
it's able to accomplish.
link |
00:30:45.680
So another way to, and that's certainly
link |
00:30:50.360
a very clear definition of human level intelligence.
link |
00:30:53.040
So there's, it's almost like a sea that's rising
link |
00:30:55.240
you can do more and more and more things
link |
00:30:56.800
it's the graphic that you show,
link |
00:30:58.640
it's a really nice way to put it.
link |
00:30:59.880
So there are some peaks,
link |
00:31:01.560
and there's an ocean level elevating
link |
00:31:03.280
and you solve more and more problems
link |
00:31:04.800
but just kind of to take a pause
link |
00:31:07.720
and we took a bunch of questions
link |
00:31:09.000
on a lot of social networks
link |
00:31:10.240
and a bunch of people asked
link |
00:31:11.720
a sort of a slightly different direction
link |
00:31:14.480
on creativity and things that perhaps aren't a peak.
link |
00:31:23.560
Human beings are flawed
link |
00:31:24.720
and perhaps better means having contradiction
link |
00:31:28.720
being flawed in some way.
link |
00:31:30.200
So let me sort of start easy, first of all.
link |
00:31:34.960
So you have a lot of cool equations.
link |
00:31:36.600
Let me ask, what's your favorite equation, first of all?
link |
00:31:39.760
I know they're all like your children, but like
link |
00:31:42.760
which one is that?
link |
00:31:43.680
This is the Schrödinger equation.
link |
00:31:45.560
It's the master key of quantum mechanics
link |
00:31:48.640
of the micro world.
link |
00:31:49.880
So this equation will predict everything
link |
00:31:52.800
to do with atoms, molecules and all the way up.
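For reference, the equation being pointed at is the time-dependent Schrödinger equation, in its standard form (my addition, not read out in the conversation):

i\hbar \, \frac{\partial \Psi(\mathbf{r}, t)}{\partial t} = \hat{H} \, \Psi(\mathbf{r}, t)

where \Psi is the wave function and \hat{H} is the Hamiltonian of the system.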
link |
00:31:55.840
Right?
link |
00:31:58.560
Yeah, so, okay.
link |
00:31:59.760
So quantum mechanics is certainly a beautiful
link |
00:32:02.080
mysterious formulation of our world.
link |
00:32:05.160
So I'd like to sort of ask you, just as an example
link |
00:32:08.760
it perhaps doesn't have the same beauty as physics does
link |
00:32:12.160
but in abstract mathematics, Andrew Wiles,
link |
00:32:16.960
who proved Fermat's Last Theorem.
link |
00:32:19.360
So I just saw this recently
link |
00:32:22.040
and it kind of caught my eye a little bit.
link |
00:32:24.160
This is 358 years after it was conjectured.
link |
00:32:27.960
So this is a very simple formulation.
link |
00:32:29.960
Everybody tried to prove it, everybody failed.
link |
00:32:32.640
And so here this guy comes along
link |
00:32:34.800
and eventually proves it and then fails to prove it
link |
00:32:38.640
and then proves it again in 94.
link |
00:32:41.320
And he said like the moment when everything connected
link |
00:32:43.480
into place, in an interview he said,
link |
00:32:46.040
it was so indescribably beautiful.
link |
00:32:47.880
That moment when you finally realize the connecting piece
link |
00:32:51.040
of two conjectures.
link |
00:32:52.800
He said, it was so indescribably beautiful.
link |
00:32:55.280
It was so simple and so elegant.
link |
00:32:57.040
I couldn't understand how I'd missed it.
link |
00:32:58.760
And I just stared at it in disbelief for 20 minutes.
link |
00:33:02.080
Then during the day, I walked around the department
link |
00:33:05.240
and I kept coming back to my desk
link |
00:33:07.880
looking to see if it was still there.
link |
00:33:09.840
It was still there.
link |
00:33:10.680
I couldn't contain myself.
link |
00:33:11.760
I was so excited.
link |
00:33:12.880
It was the most important moment of my working life.
link |
00:33:15.880
Nothing I ever do again will mean as much.
link |
00:33:18.960
So that particular moment.
link |
00:33:20.800
And it kind of made me think of what would it take?
link |
00:33:24.640
And I think we have all been there at small levels.
link |
00:33:29.480
Maybe let me ask, have you had a moment like that
link |
00:33:32.240
in your life where you just had an idea?
link |
00:33:34.880
It's like, wow, yes.
link |
00:33:40.000
I wouldn't mention myself in the same breath
link |
00:33:42.480
as Andrew Wiles, but I've certainly had a number
link |
00:33:44.760
of aha moments when I realized something very cool
link |
00:33:52.200
about physics, which has completely made my head explode.
link |
00:33:56.000
In fact, some of my favorite discoveries I made,
link |
00:33:58.320
I later realized that they had been discovered earlier
link |
00:34:01.080
by someone who sometimes got quite famous for it.
link |
00:34:03.240
So it's too late for me to even publish it,
link |
00:34:05.480
but that doesn't diminish in any way
link |
00:34:07.440
the emotional experience you have when you realize it,
link |
00:34:09.760
like, wow.
link |
00:34:11.320
Yeah, so what would it take in that moment, that wow,
link |
00:34:15.520
that was yours in that moment?
link |
00:34:17.320
So what do you think it takes for an intelligence system,
link |
00:34:21.440
an AGI system, an AI system to have a moment like that?
link |
00:34:25.640
That's a tricky question
link |
00:34:26.760
because there are actually two parts to it, right?
link |
00:34:29.200
One of them is, can it accomplish that proof?
link |
00:34:33.920
Can it prove that you can never write A to the N
link |
00:34:37.640
plus B to the N equals Z to the N
link |
00:34:42.760
for all integers, et cetera, et cetera,
link |
00:34:45.320
when N is bigger than two?
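Spelled out, the statement being paraphrased is Fermat's Last Theorem, in its standard formulation (my addition, not verbatim from the conversation):

a^n + b^n = z^n \quad \text{has no solutions in positive integers } a, b, z \ \text{when } n > 2.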
link |
00:34:48.720
That's simply a question about intelligence.
link |
00:34:51.360
Can you build machines that are that intelligent?
link |
00:34:54.120
And I think by the time we get a machine
link |
00:34:57.280
that can independently come up with that level of proofs,
link |
00:35:00.840
probably quite close to AGI.
link |
00:35:03.360
The second question is a question about consciousness.
link |
00:35:07.240
When will we, how likely is it that such a machine
link |
00:35:11.760
will actually have any experience at all,
link |
00:35:14.240
as opposed to just being like a zombie?
link |
00:35:16.160
And would we expect it to have some sort of emotional response
link |
00:35:20.560
to this or anything at all akin to human emotion
link |
00:35:24.640
where when it accomplishes its machine goal,
link |
00:35:28.320
it views it as somehow something very positive
link |
00:35:31.920
and sublime and deeply meaningful?
link |
00:35:39.160
I would certainly hope that if in the future
link |
00:35:41.440
we do create machines that are our peers
link |
00:35:45.120
or even our descendants, that I would certainly
link |
00:35:50.160
hope that they do have this sublime appreciation of life.
link |
00:35:55.480
In a way, my absolutely worst nightmare
link |
00:35:58.840
would be that at some point in the future,
link |
00:36:05.760
the distant future, maybe our cosmos
link |
00:36:07.400
is teeming with all this post biological life doing
link |
00:36:10.600
all the seemingly cool stuff.
link |
00:36:12.880
And maybe the last humans, by the time
link |
00:36:16.480
our species eventually fizzles out,
link |
00:36:20.120
will be like, well, that's OK because we're
link |
00:36:21.920
so proud of our descendants here.
link |
00:36:23.600
And look what all the, my worst nightmare
link |
00:36:26.680
is that we haven't solved the consciousness problem.
link |
00:36:30.360
And we haven't realized that these are all the zombies.
link |
00:36:32.880
They're not aware of anything any more than a tape recorder
link |
00:36:36.200
has any kind of experience.
link |
00:36:37.840
So the whole thing has just become
link |
00:36:40.040
a play for empty benches.
link |
00:36:41.520
That would be the ultimate zombie apocalypse.
link |
00:36:44.640
So I would much rather, in that case,
link |
00:36:47.200
that we have these beings which can really
link |
00:36:52.240
appreciate how amazing it is.
link |
00:36:57.000
And in that picture, what would be the role of creativity?
link |
00:37:01.080
A few people ask about creativity.
link |
00:37:04.960
When you think about intelligence,
link |
00:37:07.080
certainly the story you told at the beginning of your book
link |
00:37:09.840
involved creating movies and so on, making money.
link |
00:37:15.200
You can make a lot of money in our modern world
link |
00:37:17.240
with music and movies.
link |
00:37:18.600
So if you are an intelligent system,
link |
00:37:20.880
you may want to get good at that.
link |
00:37:22.960
But that's not necessarily what I mean by creativity.
link |
00:37:26.280
Is it important, in that space of complex goals
link |
00:37:29.640
where the sea is rising for there
link |
00:37:31.600
to be something creative?
link |
00:37:33.800
Or am I being very human centric and thinking creativity
link |
00:37:37.400
somehow special relative to intelligence?
link |
00:37:41.880
My hunch is that we should think of creativity simply
link |
00:37:47.240
as an aspect of intelligence.
link |
00:37:50.760
And we have to be very careful with human vanity.
link |
00:37:57.840
We have this tendency to very often want
link |
00:37:59.520
to say, as soon as machines can do something,
link |
00:38:01.560
we try to diminish it and say, oh, but that's
link |
00:38:03.560
not real intelligence.
link |
00:38:05.920
It's not creative, or this or that.
link |
00:38:08.400
The other thing is, if we ask ourselves
link |
00:38:12.200
to write down a definition of what we actually mean
link |
00:38:14.320
by being creative, what we mean by Andrew Wiles, what he did
link |
00:38:18.840
there, for example, don't we often mean that someone takes
link |
00:38:21.880
a very unexpected leap?
link |
00:38:26.000
It's not like taking 573 and multiplying it
link |
00:38:29.680
by 224 by just following straightforward cookbook
link |
00:38:33.840
like rules, right?
link |
00:38:36.520
You can maybe make a connection between two things
link |
00:38:39.680
that people had never thought was connected or something
link |
00:38:42.640
like that.
link |
00:38:44.480
I think this is an aspect of intelligence.
link |
00:38:47.720
And this is actually one of the most important aspects of it.
link |
00:38:53.000
Maybe the reason we humans tend to be better at it
link |
00:38:55.520
than traditional computers is because it's
link |
00:38:57.840
something that comes more naturally if you're
link |
00:38:59.640
a neural network than if you're a traditional logic gate
link |
00:39:04.120
based computer machine.
link |
00:39:05.720
We physically have all these connections.
link |
00:39:08.640
And you activate here, activate here, activate here.
link |
00:39:13.800
Bing.
link |
00:39:16.560
My hunch is that if we ever build a machine where you could
link |
00:39:21.040
just give it the task, hey, you say, hey, I just realized
link |
00:39:29.200
I want to travel around the world this month instead.
link |
00:39:32.320
Can you teach my AGI course for me?
link |
00:39:34.600
And it's like, OK, I'll do it.
link |
00:39:35.960
And it does everything that you would have done
link |
00:39:37.920
and improvises and stuff.
link |
00:39:39.760
That would, in my mind, involve a lot of creativity.
link |
00:39:43.360
Yeah, so it's actually a beautiful way to put it.
link |
00:39:45.680
I think we do try to grasp at the definition of intelligence
link |
00:39:52.640
is everything we don't understand how to build.
link |
00:39:56.360
So we as humans try to find things
link |
00:39:59.360
that we have and machines don't have.
link |
00:40:01.240
And maybe creativity is just one of the things, one
link |
00:40:03.800
of the words we use to describe that.
link |
00:40:05.480
That's a really interesting way to put it.
link |
00:40:07.200
I don't think we need to be that defensive.
link |
00:40:09.520
I don't think anything good comes out of saying,
link |
00:40:11.560
well, we're somehow special, you know?
link |
00:40:18.080
Contrariwise, there are many examples in history
link |
00:40:21.040
of where trying to pretend that we're somehow superior
link |
00:40:27.840
to all other intelligent beings has led to pretty bad results,
link |
00:40:33.120
right?
link |
00:40:35.960
Nazi Germany, they said that they were somehow superior
link |
00:40:38.440
to other people.
link |
00:40:40.080
Today, we still do a lot of cruelty to animals
link |
00:40:42.440
by saying that we're so superior somehow,
link |
00:40:44.440
and they can't feel pain.
link |
00:40:46.440
Slavery was justified by the same kind
link |
00:40:48.480
of just really weak arguments.
link |
00:40:52.200
And I don't think if we actually go ahead and build
link |
00:40:57.120
artificial general intelligence that
link |
00:40:59.440
can do things better than us, I don't
link |
00:41:01.360
think we should try to found our self worth on some sort
link |
00:41:04.080
of bogus claims of superiority in terms
link |
00:41:09.760
of our intelligence.
link |
00:41:12.120
I think we should instead find our calling
link |
00:41:18.080
and the meaning of life from the experiences that we have.
link |
00:41:23.360
I can have very meaningful experiences
link |
00:41:28.720
even if there are other people who are smarter than me.
link |
00:41:32.920
When I go to a faculty meeting here,
link |
00:41:34.400
and we talk about something, and then I certainly realize,
link |
00:41:36.520
oh, boy, he has a Nobel Prize, he has a Nobel Prize,
link |
00:41:39.080
he has a Nobel Prize, I don't have one.
link |
00:41:40.800
Does that make me enjoy life any less
link |
00:41:43.760
or enjoy talking to those people less?
link |
00:41:47.560
Of course not.
link |
00:41:49.560
And the contrary, I feel very honored and privileged
link |
00:41:54.160
to get to interact with other very intelligent beings that
link |
00:41:58.760
are better than me at a lot of stuff.
link |
00:42:00.680
So I don't think there's any reason why
link |
00:42:02.840
we can't have the same approach with intelligent machines.
link |
00:42:06.080
That's really interesting.
link |
00:42:07.320
So people don't often think about that.
link |
00:42:08.920
They think about when there's going,
link |
00:42:10.600
if there's machines that are more intelligent,
link |
00:42:13.320
you naturally think that that's not
link |
00:42:15.080
going to be a beneficial type of intelligence.
link |
00:42:19.080
You don't realize it could be like peers with Nobel prizes
link |
00:42:23.000
that would be just fun to talk with,
link |
00:42:25.120
and they might be clever about certain topics,
link |
00:42:27.560
and you can have fun having a few drinks with them.
link |
00:42:32.240
Well, also, another example we can all
link |
00:42:35.880
relate to of why it doesn't have to be a terrible thing
link |
00:42:39.320
to be in the presence of people who are even smarter than us
link |
00:42:42.560
all around is when you and I were both two years old,
link |
00:42:45.600
I mean, our parents were much more intelligent than us,
link |
00:42:48.360
right?
link |
00:42:49.040
Worked out OK, because their goals
link |
00:42:51.960
were aligned with our goals.
link |
00:42:53.960
And that, I think, is really the number one key issue
link |
00:42:58.680
we have to solve: the value alignment
link |
00:43:02.280
problem, exactly.
link |
00:43:03.080
Because people who see too many Hollywood movies
link |
00:43:06.520
with lousy science fiction plot lines,
link |
00:43:10.000
they worry about the wrong thing, right?
link |
00:43:12.200
They worry about some machine suddenly turning evil.
link |
00:43:16.320
It's not malice that is the concern.
link |
00:43:21.480
It's competence.
link |
00:43:22.880
By definition, intelligence makes you very competent.
link |
00:43:27.440
If you have a more intelligent Go-playing
link |
00:43:31.920
computer playing against a less intelligent one,
link |
00:43:33.680
and we define intelligence as the ability
link |
00:43:36.120
to accomplish the goal, winning, it's going
link |
00:43:38.600
to be the more intelligent one that wins.
link |
00:43:40.560
And if you have a human and then you
link |
00:43:43.560
have an AGI that's more intelligent in all ways
link |
00:43:47.720
and they have different goals, guess who's
link |
00:43:49.520
going to get their way, right?
link |
00:43:50.720
So I was just reading about this particular rhinoceros species
link |
00:43:57.120
that was driven extinct just a few years ago.
link |
00:43:59.200
A real bummer. I was looking at this cute picture of a mommy
link |
00:44:02.280
rhinoceros with its child.
link |
00:44:05.080
And why did we humans drive it to extinction?
link |
00:44:09.320
It wasn't because we were evil rhino haters as a whole.
link |
00:44:12.800
It was just because our goals weren't aligned
link |
00:44:14.920
with those of the rhinoceros.
link |
00:44:16.000
And it didn't work out so well for the rhinoceros
link |
00:44:17.680
because we were more intelligent, right?
link |
00:44:19.560
So I think it's just so important
link |
00:44:21.240
that if we ever do build AGI, before we unleash anything,
link |
00:44:27.120
we have to make sure that it learns
link |
00:44:31.840
to understand our goals, that it adopts our goals,
link |
00:44:36.000
and that it retains those goals.
link |
00:44:37.920
So the cool, interesting problem there
link |
00:44:40.520
is us as human beings trying to formulate our values.
link |
00:44:47.040
So you could think of the United States Constitution as a way
link |
00:44:51.360
that people sat down, at the time a bunch of white men,
link |
00:44:56.680
which is a good example, I should say.
link |
00:44:59.680
They formulated the goals for this country.
link |
00:45:01.480
And a lot of people agree that those goals actually
link |
00:45:03.760
held up pretty well.
link |
00:45:05.360
That's an interesting formulation of values
link |
00:45:07.160
though it failed miserably in other ways.
link |
00:45:09.440
So for the value alignment problem and the solution to it,
link |
00:45:13.320
we have to be able to put on paper or in a program
link |
00:45:19.560
human values.
link |
00:45:20.400
How difficult do you think that is?
link |
00:45:22.400
Very.
link |
00:45:24.040
But it's so important.
link |
00:45:25.880
We really have to give it our best.
link |
00:45:28.000
And it's difficult for two separate reasons.
link |
00:45:30.120
There's the technical value alignment problem
link |
00:45:33.440
of figuring out just how to make machines understand our goals,
link |
00:45:39.120
adopt them, and retain them.
link |
00:45:40.440
And then there's the separate part of it,
link |
00:45:43.200
the philosophical part.
link |
00:45:44.200
Whose values anyway?
link |
00:45:45.920
And since it's not like we have any great consensus
link |
00:45:48.320
on this planet on values, what mechanism should we
link |
00:45:52.040
create then to aggregate and decide, OK,
link |
00:45:54.120
what's a good compromise?
link |
00:45:56.520
That second discussion can't just
link |
00:45:58.440
be left to tech nerds like myself.
link |
00:46:01.560
And if we refuse to talk about it and then AGI gets built,
link |
00:46:05.720
who's going to be actually making
link |
00:46:07.160
the decision about whose values?
link |
00:46:08.480
It's going to be a bunch of dudes in some tech company.
link |
00:46:12.080
And are they necessarily so representative of all
link |
00:46:17.240
of humankind that we want to just entrust it to them?
link |
00:46:19.400
Are they even uniquely qualified to speak
link |
00:46:23.000
to future human happiness just because they're
link |
00:46:25.240
good at programming AI?
link |
00:46:26.480
I'd much rather have this be a really inclusive conversation.
link |
00:46:30.200
But do you think it's possible?
link |
00:46:32.560
So you create a beautiful vision that includes the diversity,
link |
00:46:37.560
cultural diversity, and various perspectives on discussing
link |
00:46:40.960
rights, freedoms, human dignity.
link |
00:46:43.600
But how hard is it to come to that consensus?
link |
00:46:46.520
I think it's certainly a really important thing
link |
00:46:50.400
that we should all try to do.
link |
00:46:51.880
But do you think it's feasible?
link |
00:46:54.240
I think there's no better way to guarantee failure than to
link |
00:47:00.160
refuse to talk about it or refuse to try.
link |
00:47:02.840
And I also think it's a really bad strategy
link |
00:47:05.320
to say, OK, let's first have a discussion for a long time.
link |
00:47:08.560
And then once we reach complete consensus,
link |
00:47:11.040
then we'll try to load it into some machine.
link |
00:47:13.360
No, we shouldn't let perfect be the enemy of good.
link |
00:47:16.560
Instead, we should start with the kindergarten ethics
link |
00:47:20.600
that pretty much everybody agrees on
link |
00:47:22.120
and put that into machines now.
link |
00:47:24.360
We're not doing that even.
link |
00:47:25.880
Look, anyone who builds a passenger aircraft
link |
00:47:31.000
wants it to never under any circumstances
link |
00:47:33.000
fly into a building or a mountain.
link |
00:47:35.600
Yet the September 11 hijackers were able to do that.
link |
00:47:38.480
And even more embarrassingly, Andreas Lubitz,
link |
00:47:41.800
this depressed Germanwings pilot,
link |
00:47:43.960
when he flew his passenger jet into the Alps killing over 100
link |
00:47:47.360
people, he just told the autopilot to do it.
link |
00:47:50.640
He told the freaking computer to change the altitude
link |
00:47:53.200
to 100 meters.
link |
00:47:55.040
And even though it had the GPS maps, everything,
link |
00:47:58.160
the computer was like, OK.
link |
00:48:00.640
So we should take those very basic values,
link |
00:48:05.320
where the problem is not that we don't agree.
link |
00:48:08.400
The problem is just we've been too lazy
link |
00:48:10.120
to try to put it into our machines
link |
00:48:11.480
and make sure that from now on, airplanes,
link |
00:48:15.520
which all have computers in them,
link |
00:48:16.920
will just refuse to do something like that.
link |
00:48:19.720
Go into safe mode, maybe lock the cockpit door,
link |
00:48:22.160
go over to the nearest airport.
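As a rough illustration of the hardwired basic values idea described here, the sketch below shows a toy autopilot command validator that refuses altitude targets below the terrain it knows about and falls back to a safe action. The function name, the 300-meter margin, and the dictionary interface are all invented for illustration; this is a minimal sketch, not any real avionics API.

```python
# Toy sketch (illustrative only, not a real avionics interface) of a
# hardwired safety check: refuse altitude commands that would put the
# aircraft below a minimum clearance over known terrain.
MIN_CLEARANCE_M = 300  # assumed safety margin above terrain (made up for this example)

def validate_altitude_command(target_alt_m, terrain_alt_m):
    """Return either the accepted command or a safe fallback action."""
    if target_alt_m < terrain_alt_m + MIN_CLEARANCE_M:
        # Refuse the unsafe command: hold a safe altitude and suggest diverting.
        return {"action": "refuse",
                "hold_alt_m": terrain_alt_m + MIN_CLEARANCE_M,
                "note": "go into safe mode, divert to nearest airport"}
    return {"action": "accept", "target_alt_m": target_alt_m}

print(validate_altitude_command(100, terrain_alt_m=2500))    # refused
print(validate_altitude_command(11000, terrain_alt_m=2500))  # accepted
```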
link |
00:48:24.480
And there's so much other technology in our world
link |
00:48:28.080
as well now, where it's really becoming quite timely
link |
00:48:31.320
to put in some sort of very basic values like this.
link |
00:48:34.120
Even in cars, we've had enough vehicle terrorism attacks
link |
00:48:39.240
by now, where people have driven trucks and vans
link |
00:48:42.040
into pedestrians, that it's not at all a crazy idea
link |
00:48:45.480
to just have that hardwired into the car.
link |
00:48:48.680
Because, yeah,
link |
00:48:50.280
there's always going to be people who for some reason
link |
00:48:52.240
want to harm others, but most of those people
link |
00:48:54.800
don't have the technical expertise to figure out
link |
00:48:56.760
how to work around something like that.
link |
00:48:58.520
So if the car just won't do it, it helps.
link |
00:49:01.760
So let's start there.
link |
00:49:02.840
That's a great point.
link |
00:49:04.960
So not chasing perfect.
link |
00:49:06.800
There's a lot of things that most of the world agrees on.
link |
00:49:10.840
Yeah, let's start there.
link |
00:49:11.840
Let's start there.
link |
00:49:12.680
And then once we start there,
link |
00:49:14.560
we'll also get into the habit of having
link |
00:49:17.240
these kind of conversations about, okay,
link |
00:49:18.520
what else should we put in here and have these discussions?
link |
00:49:21.760
This should be a gradual process then.
link |
00:49:23.920
Great, so, but that also means describing these things
link |
00:49:28.600
and describing them to a machine.
link |
00:49:31.240
So one thing, we had a few conversations
link |
00:49:34.200
with Stephen Wolfram.
link |
00:49:35.640
I'm not sure if you're familiar with Stephen.
link |
00:49:37.080
Oh yeah, I know him quite well.
link |
00:49:38.360
So he is, he works with a bunch of things,
link |
00:49:42.040
but cellular automata, these simple computable things,
link |
00:49:46.560
these computation systems.
link |
00:49:47.960
And he kind of mentioned that,
link |
00:49:49.880
we probably already have within these systems
link |
00:49:52.480
something that's AGI,
link |
00:49:56.120
meaning like we just don't know it
link |
00:49:58.720
because we can't talk to it.
link |
00:50:00.400
So if you give me this chance to try to at least
link |
00:50:04.800
form a question out of this:
link |
00:50:07.600
I think it's an interesting idea to think
link |
00:50:10.880
that we can have intelligent systems,
link |
00:50:12.680
but we don't know how to describe something to them
link |
00:50:15.600
and they can't communicate with us.
link |
00:50:17.360
I know you're doing a little bit of work in explainable AI,
link |
00:50:19.840
trying to get AI to explain itself.
link |
00:50:22.040
So what are your thoughts of natural language processing
link |
00:50:25.520
or some kind of other communication?
link |
00:50:27.640
How does the AI explain something to us?
link |
00:50:30.120
How do we explain something to it, to machines?
link |
00:50:33.640
Or you think of it differently?
link |
00:50:35.320
So there are two separate parts to your question there.
link |
00:50:39.960
One of them has to do with communication,
link |
00:50:42.440
which is super interesting, I'll get to that in a sec.
link |
00:50:44.440
The other is whether we already have AGI
link |
00:50:47.280
but we just haven't noticed it there.
link |
00:50:49.240
Right.
link |
00:50:51.800
There I beg to differ.
link |
00:50:54.280
I don't think there's anything in any cellular automaton
link |
00:50:56.480
or anything or the internet itself or whatever
link |
00:50:59.040
that has artificial general intelligence
link |
00:51:03.560
in the sense that it can really do everything
link |
00:51:05.520
we humans can do better.
link |
00:51:07.000
I think when that happens,
link |
00:51:11.600
we will very soon notice, we'll probably notice even before,
link |
00:51:15.600
because it will happen in a very, very big way.
link |
00:51:17.440
But for the second part, though.
link |
00:51:18.840
Wait, can I ask, sorry.
link |
00:51:20.720
So, because you have this beautiful way
link |
00:51:24.400
of formulating consciousness as information processing,
link |
00:51:30.360
and you can think of intelligence
link |
00:51:31.360
as information processing,
link |
00:51:32.280
and you can think of the entire universe
link |
00:51:34.320
as these particles and these systems roaming around
link |
00:51:38.720
that have this information processing power.
link |
00:51:41.360
You don't think there is something with the power
link |
00:51:44.840
to process information in the way that we human beings do
link |
00:51:49.040
that's already out there and just needs to be, sort of, connected to?
link |
00:51:55.400
It seems a little bit philosophical, perhaps,
link |
00:51:57.880
but there's something compelling to the idea
link |
00:52:00.080
that the power is already there,
link |
00:52:01.920
which the focus should be more on being able
link |
00:52:05.440
to communicate with it.
link |
00:52:07.360
Well, I agree that in a certain sense,
link |
00:52:11.960
the hardware processing power is already out there
link |
00:52:15.360
because you can think of our universe itself
link |
00:52:19.000
as being a computer already, right?
link |
00:52:21.000
It's constantly computing
link |
00:52:23.800
how the water waves evolve in the Charles River
link |
00:52:26.120
and how to move the air molecules around.
link |
00:52:28.440
Seth Lloyd has pointed out, my colleague here,
link |
00:52:30.480
that you can even in a very rigorous way
link |
00:52:32.920
think of our entire universe as being a quantum computer.
link |
00:52:35.480
It's pretty clear that our universe
link |
00:52:37.680
supports this amazing processing power
link |
00:52:40.320
because you can even,
link |
00:52:42.160
within this physics computer that we live in, right?
link |
00:52:44.920
We can even build actual laptops and stuff,
link |
00:52:47.040
so clearly the power is there.
link |
00:52:49.000
It's just that most of the compute power that nature has,
link |
00:52:52.040
it's, in my opinion, kind of wasting on boring stuff
link |
00:52:54.240
like simulating yet another ocean wave somewhere
link |
00:52:56.520
where no one is even looking, right?
link |
00:52:58.040
So in a sense, what life does, what we are doing
link |
00:53:00.880
when we build computers is we're rechanneling
link |
00:53:03.880
all this compute that nature is doing anyway
link |
00:53:07.200
into doing things that are more interesting
link |
00:53:09.360
than just yet another ocean wave,
link |
00:53:11.440
and let's do something cool here.
link |
00:53:14.080
So the raw hardware power is there, for sure,
link |
00:53:17.080
but then even just computing what's going to happen
link |
00:53:21.080
for the next five seconds in this water bottle,
link |
00:53:23.520
takes a ridiculous amount of compute
link |
00:53:26.000
if you do it on a human-built computer.
link |
00:53:27.920
This water bottle just did it.
link |
00:53:29.920
But that does not mean that this water bottle has AGI
link |
00:53:34.760
because AGI means it should also be able to do what we do,
link |
00:53:37.040
like write my book, do this interview.
link |
00:53:40.160
And I don't think it's just communication problems.
link |
00:53:42.080
I don't really think it can do it.
link |
00:53:46.760
Although Buddhists say, when they watch the water,
link |
00:53:49.280
that there is some beauty,
link |
00:53:51.240
that there's some depth and beauty in nature
link |
00:53:53.720
that they can communicate with.
link |
00:53:54.840
Communication is also very important though
link |
00:53:56.480
because I mean, look, part of my job is being a teacher.
link |
00:54:01.200
And I know some very intelligent professors even
link |
00:54:06.200
who just have a bit of a hard time communicating.
link |
00:54:09.800
They come up with all these brilliant ideas,
link |
00:54:12.640
but to communicate with somebody else,
link |
00:54:14.520
you have to also be able to simulate their own mind.
link |
00:54:16.920
Yes, empathy.
link |
00:54:18.360
Build a good enough model of their mind
link |
00:54:20.640
that you can say things that they will understand.
link |
00:54:24.400
And that's quite difficult.
link |
00:54:26.480
And that's why today it's so frustrating
link |
00:54:28.280
if you have a computer that makes some cancer diagnosis
link |
00:54:32.600
and you ask it, well, why are you saying
link |
00:54:34.120
I should have this surgery?
link |
00:54:36.120
And if it can only reply,
link |
00:54:37.960
I was trained on five terabytes of data
link |
00:54:40.800
and this is my diagnosis, boop, boop, beep, beep.
link |
00:54:45.080
It doesn't really instill a lot of confidence, right?
link |
00:54:49.120
So I think we have a lot of work to do
link |
00:54:51.120
on communication there.
link |
00:54:54.320
So what kind of, I think you're doing a little bit of work
link |
00:54:58.040
in explainable AI.
link |
00:54:59.320
What do you think are the most promising avenues?
link |
00:55:01.320
Is it mostly about sort of the Alexa problem
link |
00:55:05.240
of natural language processing of being able
link |
00:55:07.200
to actually use human interpretable methods
link |
00:55:11.600
of communication?
link |
00:55:13.160
So being able to talk to a system and it talk back to you,
link |
00:55:16.000
or is there some more fundamental problems to be solved?
link |
00:55:18.640
I think it's all of the above.
link |
00:55:21.160
The natural language processing is obviously important,
link |
00:55:23.520
but there are also more nerdy fundamental problems.
link |
00:55:27.600
Like if you take, you play chess?
link |
00:55:31.640
Of course, I'm Russian.
link |
00:55:33.040
I have to.
link |
00:55:33.880
You speak Russian?
link |
00:55:34.720
Yes, I speak Russian.
link |
00:55:35.560
Excellent, I didn't know.
link |
00:55:38.040
When did you learn Russian?
link |
00:55:39.160
I speak very bad Russian, I'm only an autodidact,
link |
00:55:41.800
but I bought a book, Teach Yourself Russian,
link |
00:55:44.560
read a lot, but it was very difficult.
link |
00:55:47.720
Wow.
link |
00:55:48.560
That's why I speak so bad.
link |
00:55:49.960
How many languages do you know?
link |
00:55:51.960
Wow, that's really impressive.
link |
00:55:53.840
I don't know, my wife has some calculation,
link |
00:55:56.320
but my point was, if you play chess,
link |
00:55:58.400
have you looked at the AlphaZero games?
link |
00:56:01.040
The actual games, no.
link |
00:56:02.600
Check it out, some of them are just mind blowing,
link |
00:56:06.320
really beautiful.
link |
00:56:07.720
And if you ask, how did it do that?
link |
00:56:13.760
You go talk to Demis Hassabis,
link |
00:56:16.520
I know others from DeepMind,
link |
00:56:19.120
all they'll ultimately be able to give you
link |
00:56:20.600
is big tables of numbers, matrices,
link |
00:56:23.920
that define the neural network.
link |
00:56:25.720
And you can stare at these tables of numbers
link |
00:56:28.080
till your face turns blue,
link |
00:56:29.600
and you're not gonna understand much
link |
00:56:32.520
about why it made that move.
link |
00:56:34.520
And even if you have natural language processing
link |
00:56:37.640
that can tell you in human language about,
link |
00:56:40.280
oh, 5.7, 0.28,
link |
00:56:42.520
it's still not gonna really help.
link |
00:56:43.560
So I think there's a whole spectrum of fun challenges
link |
00:56:47.480
that are involved in taking a computation
link |
00:56:50.520
that does intelligent things
link |
00:56:52.240
and transforming it into something equally good,
link |
00:56:57.760
equally intelligent, but that's more understandable.
link |
00:57:01.840
And I think that's really valuable
link |
00:57:03.240
because I think as we put machines in charge
link |
00:57:07.440
of ever more infrastructure in our world,
link |
00:57:09.760
the power grid, the trading on the stock market,
link |
00:57:12.680
weapon systems and so on,
link |
00:57:14.320
it's absolutely crucial that we can trust
link |
00:57:17.760
these AIs to do all we want.
link |
00:57:19.400
And trust really comes from understanding
link |
00:57:22.520
in a very fundamental way.
link |
00:57:24.400
And that's why I'm working on this,
link |
00:57:27.560
because I think the more,
link |
00:57:29.160
if we're gonna have some hope of ensuring
link |
00:57:31.840
that machines have adopted our goals
link |
00:57:33.520
and that they're gonna retain them,
link |
00:57:35.800
that kind of trust, I think,
link |
00:57:38.800
needs to be based on things you can actually understand,
link |
00:57:41.200
preferably even prove theorems on.
link |
00:57:44.240
Even with a self driving car, right?
link |
00:57:47.040
If someone just tells you it's been trained
link |
00:57:48.680
on tons of data and it never crashed,
link |
00:57:50.640
it's less reassuring than if someone actually has a proof.
link |
00:57:54.200
Maybe it's a computer verified proof,
link |
00:57:55.960
but still it says that under no circumstances
link |
00:57:58.800
is this car just gonna swerve into oncoming traffic.
link |
00:58:02.320
And that kind of information helps to build trust
link |
00:58:04.640
and helps build the alignment of goals,
link |
00:58:09.400
at least awareness that your goals, your values are aligned.
link |
00:58:12.200
And I think even in the very short term,
link |
00:58:13.840
if you look at how, you know, today, right?
link |
00:58:16.360
This absolutely pathetic state of cybersecurity
link |
00:58:19.320
that we have, where, what is it,
link |
00:58:21.720
three billion Yahoo accounts were hacked,
link |
00:58:27.200
almost every American's credit card, and so on.
link |
00:58:32.800
Why is this happening?
link |
00:58:34.120
It's ultimately happening because we have software
link |
00:58:37.960
that nobody fully understood how it worked.
link |
00:58:41.200
That's why the bugs hadn't been found, right?
link |
00:58:44.800
And I think AI can be used very effectively
link |
00:58:47.480
for offense, for hacking,
link |
00:58:49.640
but it can also be used for defense.
link |
00:58:52.320
Hopefully automating verifiability
link |
00:58:55.360
and creating systems that are built in different ways
link |
00:59:00.680
so you can actually prove things about them.
link |
00:59:02.920
And it's important.
link |
00:59:05.240
So speaking of software that nobody understands
link |
00:59:07.680
how it works, of course, a bunch of people ask
link |
00:59:10.640
about your paper, about your thoughts
link |
00:59:12.160
of why does deep and cheap learning work so well?
link |
00:59:14.680
That's the paper.
link |
00:59:15.520
But what are your thoughts on deep learning?
link |
00:59:18.320
These kind of simplified models of our own brains
link |
00:59:21.880
have been able to do some successful perception work,
link |
00:59:26.440
pattern recognition work, and now with AlphaZero and so on,
link |
00:59:29.560
do some clever things.
link |
00:59:30.880
What are your thoughts about the promise limitations
link |
00:59:33.880
of this piece?
link |
00:59:35.680
Great, I think there are a number of very important insights,
link |
00:59:43.080
very important lessons we can always draw
link |
00:59:44.640
from these kinds of successes.
link |
00:59:47.120
One of them is when you look at the human brain,
link |
00:59:48.960
you see it's very complicated, 10 to the 11 neurons,
link |
00:59:51.480
and there are all these different kinds of neurons
link |
00:59:53.320
and yada, yada, and there's been this long debate
link |
00:59:55.040
about whether the fact that we have dozens
link |
00:59:57.200
of different kinds is actually necessary for intelligence.
link |
01:00:01.560
We can now, I think, quite convincingly answer
link |
01:00:03.360
that question of no, it's enough to have just one kind.
link |
01:00:07.640
If you look under the hood of AlphaZero,
link |
01:00:09.920
there's only one kind of neuron
link |
01:00:11.080
and it's ridiculously simple mathematical thing.
link |
01:00:15.000
So it's just like in physics,
link |
01:00:17.280
if you have a gas with waves in it,
link |
01:00:20.320
it's not the detailed nature of the molecules that matters,
link |
01:00:24.240
it's the collective behavior somehow.
link |
01:00:26.040
Similarly, it's this higher level structure
link |
01:00:30.720
of the network that matters,
link |
01:00:31.760
not that you have 20 kinds of neurons.
link |
01:00:34.080
I think our brain is such a complicated mess
link |
01:00:37.040
because it wasn't evolved just to be intelligent,
link |
01:00:41.720
it was evolved to also be self assembling
link |
01:00:47.000
and self repairing, right?
link |
01:00:48.760
And evolutionarily attainable.
link |
01:00:51.920
And so on and so on.
link |
01:00:53.560
So I think it's pretty,
link |
01:00:54.720
my hunch is that we're going to understand
link |
01:00:57.040
how to build AGI before we fully understand
link |
01:00:59.520
how our brains work, just like we understood
link |
01:01:02.600
how to build flying machines long before
link |
01:01:05.560
we were able to build a mechanical bird.
link |
01:01:07.800
Yeah, that's right.
link |
01:01:08.640
You've given the example exactly of mechanical birds
link |
01:01:13.280
and airplanes and airplanes do a pretty good job
link |
01:01:15.680
of flying without really mimicking bird flight.
link |
01:01:18.560
And even now after 100 years later,
link |
01:01:20.920
did you see the Ted talk with this German mechanical bird?
link |
01:01:23.880
I heard you mention it.
link |
01:01:25.040
Check it out, it's amazing.
link |
01:01:26.520
But even after that, right,
link |
01:01:27.760
we still don't fly in mechanical birds
link |
01:01:29.360
because it turned out the way we came up with was simpler
link |
01:01:32.720
and it's better for our purposes.
link |
01:01:33.840
And I think it might be the same there.
link |
01:01:35.280
That's one lesson.
link |
01:01:37.520
And another lesson, it's more what our paper was about.
link |
01:01:42.640
First, as a physicist thought it was fascinating
link |
01:01:45.800
how there's a very close mathematical relationship
link |
01:01:48.240
actually between our artificial neural networks
link |
01:01:50.800
and a lot of things that we've studied in physics that
link |
01:01:54.560
go by nerdy names like the renormalization group equation
link |
01:01:57.520
and Hamiltonians and yada, yada, yada.
link |
01:01:59.800
And when you look a little more closely at this,
link |
01:02:10.320
at first I was like, well, there's something crazy here
link |
01:02:12.360
that doesn't make sense.
link |
01:02:13.520
Because we know that if you even want to build
link |
01:02:19.200
a super simple neural network to tell apart cat pictures
link |
01:02:22.560
and dog pictures, right,
link |
01:02:23.400
that you can do that very, very well now.
link |
01:02:25.400
But if you think about it a little bit,
link |
01:02:27.520
you convince yourself it must be impossible
link |
01:02:29.080
because if I have one megapixel,
link |
01:02:31.920
even if each pixel is just black or white,
link |
01:02:34.160
there's two to the power of 1 million possible images,
link |
01:02:36.960
which is way more than there are atoms in our universe,
link |
01:02:38.960
right?
link |
01:02:42.040
and then for each one of those,
link |
01:02:43.200
I have to assign a number,
link |
01:02:44.640
which is the probability that it's a dog.
link |
01:02:47.080
So an arbitrary function of images
link |
01:02:49.440
is a list of more numbers than there are atoms in our universe.
link |
01:02:54.440
So clearly I can't store that under the hood of my GPU
link |
01:02:57.360
or my computer, yet somehow it works.
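A quick back-of-the-envelope check of the counting argument above (a sketch I'm adding for illustration, not part of the conversation): even for one-megapixel black-and-white images, the number of possible images dwarfs the roughly 10^80 atoms in the observable universe, so no computer could tabulate an arbitrary function over them.

```python
# Back-of-the-envelope version of the counting argument: how many
# 1-megapixel black-and-white images are there, versus ~10^80 atoms
# in the observable universe?
from math import log10

n_pixels = 10**6                    # one megapixel, each pixel black or white
log10_images = n_pixels * log10(2)  # log10 of 2**1_000_000
print(f"possible images  ~ 10^{log10_images:.0f}")  # ~ 10^301030
print("atoms in universe ~ 10^80")
# Tabulating one probability per image is hopeless, so a network with a
# few million parameters can only represent a tiny sliver of all possible
# functions -- the claim is that the physically relevant functions
# happen to live in that sliver.
```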
link |
01:03:00.640
So what does that mean?
link |
01:03:01.480
Well, it means that out of all of the problems
link |
01:03:04.960
that you could try to solve with a neural network,
link |
01:03:10.120
almost all of them are impossible to solve
link |
01:03:12.880
with a reasonably sized one.
link |
01:03:15.480
But then what we showed in our paper
link |
01:03:17.440
was that the fraction, the kind of problems,
link |
01:03:22.360
the fraction of all the problems
link |
01:03:23.800
that you could possibly pose,
link |
01:03:26.520
that we actually care about given the laws of physics
link |
01:03:29.480
is also an infinitesimally tiny little part.
link |
01:03:32.480
And amazingly, they're basically the same part.
link |
01:03:35.440
Yeah, it's almost like our world was created for,
link |
01:03:37.560
I mean, they kind of come together.
link |
01:03:39.000
Yeah, well, you could say maybe the world was created
link |
01:03:42.800
for us, but I have a more modest interpretation,
link |
01:03:48.040
which is that instead evolution endowed us
link |
01:03:50.360
with neural networks precisely for that reason.
link |
01:03:53.120
Because this particular architecture,
link |
01:03:54.640
as opposed to the one in your laptop,
link |
01:03:56.040
is very, very well adapted to solving the kind of problems
link |
01:04:02.480
that nature kept presenting our ancestors with.
link |
01:04:05.560
So it makes sense. Why do we have a brain
link |
01:04:08.120
in the first place?
link |
01:04:09.280
It's to be able to make predictions about the future
link |
01:04:11.880
and so on.
link |
01:04:12.880
So if we had a sucky system, which could never solve it,
link |
01:04:16.440
we wouldn't have a world.
link |
01:04:18.280
So this is, I think, a very beautiful fact.
link |
01:04:23.680
Yeah.
link |
01:04:24.520
We also realize that there's been earlier work
link |
01:04:29.000
on why deeper networks are good,
link |
01:04:32.040
but we were able to show an additional cool fact there,
link |
01:04:34.680
which is that even incredibly simple problems,
link |
01:04:38.360
like suppose I give you a thousand numbers
link |
01:04:41.080
and ask you to multiply them together,
link |
01:04:42.720
and you can write a few lines of code, boom, done, trivial.
link |
01:04:46.680
If you just try to do that with a neural network
link |
01:04:49.520
that has only one single hidden layer in it,
link |
01:04:52.440
you can do it,
link |
01:04:54.320
but you're going to need two to the power of a thousand
link |
01:04:57.360
neurons to multiply a thousand numbers,
link |
01:05:00.920
which is, again, more neurons than there are atoms
link |
01:05:02.520
in our universe.
link |
01:05:04.600
That's fascinating.
link |
01:05:05.480
But if you allow yourself to make it a deep network
link |
01:05:09.960
with many layers, you only need 4,000 neurons.
link |
01:05:13.240
It's perfectly feasible.
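To make the shallow-versus-deep contrast concrete, here is a small sketch (my own illustration, not code from the paper): arranging the product of a thousand numbers as a binary tree of pairwise products needs only about log2(1000), roughly 10, layers and 999 pairwise units, and since each pairwise product can itself be approximated by a handful of neurons, you land in the few-thousand-neuron regime rather than the 2 to the power of 1000 needed by a single hidden layer.

```python
# Sketch of why depth helps for multiplying n numbers: a binary tree of
# pairwise products has depth ~log2(n) and only n-1 multiply "units",
# whereas a single hidden layer cannot reuse intermediate results.
from math import prod

def tree_product(xs):
    """Multiply a list of numbers layer by layer, tracking depth and unit count."""
    layer, depth, units = list(xs), 0, 0
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            nxt.append(layer[i] * layer[i + 1])  # one pairwise-multiply unit
            units += 1
        if len(layer) % 2:                       # odd element carries to the next layer
            nxt.append(layer[-1])
        layer, depth = nxt, depth + 1
    return layer[0], depth, units

result, depth, units = tree_product(range(1, 1001))
assert result == prod(range(1, 1001))
print(depth, units)  # 10 layers, 999 pairwise units for 1000 inputs
```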
link |
01:05:16.400
That's really interesting.
link |
01:05:17.960
Yeah.
link |
01:05:18.800
So on another architecture type,
link |
01:05:21.040
I mean, you mentioned Schrodinger's equation,
link |
01:05:22.720
and what are your thoughts about quantum computing
link |
01:05:27.240
and the role of this kind of computational unit
link |
01:05:32.400
in creating an intelligence system?
link |
01:05:34.880
In some Hollywood movies that I will not mention by name
link |
01:05:39.520
because I don't want to spoil them.
link |
01:05:41.040
The way they get AGI is building a quantum computer.
link |
01:05:45.480
Because the word quantum sounds cool and so on.
link |
01:05:47.600
That's right.
link |
01:05:50.040
First of all, I think we don't need quantum computers
link |
01:05:52.880
to build AGI.
link |
01:05:54.920
I suspect your brain is not a quantum computer
link |
01:05:59.240
in any profound sense.
link |
01:06:01.600
So, you know, I even wrote a paper about that
link |
01:06:03.200
many years ago.
link |
01:06:04.560
I calculated the so called decoherence time,
link |
01:06:08.120
how long it takes until the quantum computerness
link |
01:06:10.320
of what your neurons are doing gets erased
link |
01:06:15.320
by just random noise from the environment.
link |
01:06:17.960
And it's about 10 to the minus 21 seconds.
link |
01:06:21.320
So as cool as it would be to have a quantum computer
link |
01:06:24.600
in my head, I don't think that fast.
link |
01:06:27.320
On the other hand,
link |
01:06:28.360
there are very cool things you could do
link |
01:06:33.040
with quantum computers.
link |
01:06:35.240
Or I think we'll be able to do soon
link |
01:06:37.480
when we get bigger ones.
link |
01:06:39.360
That might actually help machine learning
link |
01:06:40.960
do even better than the brain.
link |
01:06:43.160
So for example,
link |
01:06:47.040
one, this is just a moonshot,
link |
01:06:50.760
but learning is very much the same thing as search.
link |
01:07:01.800
If you're trying to train a neural network
link |
01:07:03.160
to really learn to do something really well,
link |
01:07:06.240
you have some loss function,
link |
01:07:07.280
you have a bunch of knobs you can turn,
link |
01:07:10.360
represented by a bunch of numbers,
link |
01:07:12.080
and you're trying to tweak them
link |
01:07:12.920
so that it becomes as good as possible at this thing.
link |
01:07:15.080
So if you think of a landscape with some valley,
link |
01:07:20.720
where each dimension of the landscape
link |
01:07:22.120
corresponds to some number you can change,
link |
01:07:24.120
you're trying to find the minimum.
link |
01:07:25.640
And it's well known that
link |
01:07:26.760
if you have a very high dimensional landscape,
link |
01:07:29.040
complicated things, it's super hard to find the minimum.
link |
01:07:31.840
Quantum mechanics is amazingly good at this.
link |
01:07:35.840
Like if I want to know what's the lowest energy state
link |
01:07:38.240
this water can possibly have,
link |
01:07:41.720
incredibly hard to compute,
link |
01:07:42.560
but nature will happily figure this out for you
link |
01:07:45.400
if you just cool it down, make it very, very cold.
link |
01:07:49.800
If you put a ball somewhere,
link |
01:07:50.880
it'll roll down to its minimum.
link |
01:07:52.240
And this happens metaphorically
link |
01:07:54.280
in the energy landscape too.
link |
01:07:56.320
And quantum mechanics even uses some clever tricks,
link |
01:07:59.280
which today's machine learning systems don't.
link |
01:08:02.520
Like if you're trying to find the minimum
link |
01:08:04.160
and you get stuck in the little local minimum here,
link |
01:08:06.960
in quantum mechanics you can actually tunnel
link |
01:08:08.760
through the barrier and get unstuck again.
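As a tiny illustration of the landscape-of-knobs picture above (my own sketch, not any quantum algorithm): plain gradient descent on a one-dimensional double-well loss gets stuck in whichever valley it starts in, which is exactly the local-minimum trap that tricks like tunneling or annealing are hoped to escape.

```python
# Minimal sketch of getting stuck in a local minimum: gradient descent
# on a 1-D double-well loss. Starting on the shallow side, plain descent
# never reaches the deeper (global) minimum on the other side.
def loss(x):
    # two valleys: a shallow one near x ~ +0.96, a deeper one near x ~ -1.04
    return x**4 - 2.0 * x**2 + 0.3 * x

def grad(x):
    return 4 * x**3 - 4.0 * x + 0.3

x = 1.5                      # start on the shallow side of the landscape
for _ in range(2000):
    x -= 0.01 * grad(x)      # always walk downhill along the local slope
print(round(x, 3), round(loss(x), 4))  # stuck near x ~ 0.96, not the global minimum
```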
link |
01:08:13.480
That's really interesting.
link |
01:08:14.320
Yeah, so it may be, for example,
link |
01:08:16.120
that we'll one day use quantum computers
link |
01:08:19.160
that help train neural networks better.
link |
01:08:22.840
That's really interesting.
link |
01:08:23.680
Okay, so as a component of kind of the learning process,
link |
01:08:27.040
for example.
link |
01:08:27.880
Yeah.
link |
01:08:29.440
Let me ask sort of wrapping up here a little bit,
link |
01:08:33.080
let me return to the questions of our human nature
link |
01:08:36.880
and love, as I mentioned.
link |
01:08:40.000
So do you think,
link |
01:08:44.280
you mentioned sort of a helper robot,
link |
01:08:46.000
but you could think of also personal robots.
link |
01:08:48.640
Do you think the way we human beings fall in love
link |
01:08:52.480
and get connected to each other
link |
01:08:54.680
is possible to achieve in an AI system
link |
01:08:58.040
and human level AI intelligence system?
link |
01:09:00.360
Do you think we would ever see that kind of connection?
link |
01:09:03.720
Or, you know, in all this discussion
link |
01:09:06.160
about solving complex goals,
link |
01:09:08.520
is this kind of human social connection,
link |
01:09:10.760
do you think that's one of the goals
link |
01:09:12.560
on the peaks and valleys with the rising sea levels
link |
01:09:16.280
that we'll be able to achieve?
link |
01:09:17.360
Or do you think that's something that's ultimately,
link |
01:09:20.040
or at least in the short term,
link |
01:09:21.760
relative to the other goals is not achievable?
link |
01:09:23.640
I think it's all possible.
link |
01:09:25.120
And I mean,
link |
01:09:27.600
there's a very wide range of guesses, as you know,
link |
01:09:30.840
among AI researchers about when we're going to get AGI.
link |
01:09:35.120
Some people, you know, like our friend Rodney Brooks
link |
01:09:37.640
says it's going to be hundreds of years at least.
link |
01:09:41.040
And then there are many others
link |
01:09:42.200
who think it's going to happen much sooner.
link |
01:09:44.040
And in recent polls,
link |
01:09:46.840
maybe half or so of AI researchers
link |
01:09:48.640
think we're going to get AGI within decades.
link |
01:09:50.920
So if that happens, of course,
link |
01:09:52.720
then I think these things are all possible.
link |
01:09:55.040
But in terms of whether it will happen,
link |
01:09:56.840
I think we shouldn't spend so much time asking
link |
01:10:00.600
what do we think will happen in the future?
link |
01:10:03.240
As if we are just some sort of pathetic,
link |
01:10:05.160
passive bystanders, you know,
link |
01:10:07.040
waiting for the future to happen to us.
link |
01:10:09.280
Hey, we're the ones creating this future, right?
link |
01:10:11.640
So we should be proactive about it
link |
01:10:15.520
and ask ourselves what sort of future
link |
01:10:16.920
we would like to have happen.
link |
01:10:18.240
We're going to make it like that.
link |
01:10:19.920
Well, do I prefer just some sort of incredibly boring,
link |
01:10:22.720
zombie-like future where there's all these
link |
01:10:24.320
mechanical things happening and there's no passion,
link |
01:10:26.040
no emotion, no experience, maybe even?
link |
01:10:29.600
No, I would of course, much rather prefer it
link |
01:10:32.040
if all the things that we find that we value the most
link |
01:10:36.240
about humanity are our subjective experience,
link |
01:10:40.680
passion, inspiration, love, you know.
link |
01:10:43.000
If we can create a future where those things do happen,
link |
01:10:48.000
where those things do exist, you know,
link |
01:10:50.840
I think ultimately it's not our universe
link |
01:10:54.560
giving meaning to us, it's us giving meaning to our universe.
link |
01:10:57.960
And if we build more advanced intelligence,
link |
01:11:01.840
let's make sure we build it in such a way
link |
01:11:03.680
that meaning is part of it.
link |
01:11:09.120
A lot of people that seriously study this problem
link |
01:11:11.400
and think of it from different angles
link |
01:11:13.600
find that in the majority of cases,
link |
01:11:19.160
the outcomes, if they think them through,
link |
01:11:22.520
are the ones that are not beneficial to humanity.
link |
01:11:22.520
And so, yeah, so what are your thoughts?
link |
01:11:25.560
What should people do? You know,
link |
01:11:29.400
I really don't like people to be terrified.
link |
01:11:33.440
What's a way for people to think about it
link |
01:11:35.040
in a way we can solve it and we can make it better?
link |
01:11:39.600
No, I don't think panicking is going to help in any way.
link |
01:11:42.960
It's not going to increase chances
link |
01:11:44.840
of things going well either.
link |
01:11:45.880
Even if you are in a situation where there is a real threat,
link |
01:11:48.400
does it help if everybody just freaks out?
link |
01:11:51.080
No, of course, of course not.
link |
01:11:53.640
I think, yeah, there are of course ways
link |
01:11:56.600
in which things can go horribly wrong.
link |
01:11:59.560
First of all, it's important when we think about this thing,
link |
01:12:03.680
about the problems and risks,
link |
01:12:05.280
to also remember how huge the upsides can be
link |
01:12:07.160
if we get it right, right?
link |
01:12:08.440
Everything we love about society and civilization
link |
01:12:12.360
is a product of intelligence.
link |
01:12:13.400
So if we can amplify our intelligence
link |
01:12:15.320
with machine intelligence and no longer lose our loved ones
link |
01:12:18.760
to what we're told is an incurable disease
link |
01:12:21.080
and things like this, of course, we should aspire to that.
link |
01:12:24.800
So that can be a motivator, I think,
link |
01:12:26.680
reminding ourselves that the reason we try to solve problems
link |
01:12:29.120
is not just because we're trying to avoid gloom,
link |
01:12:33.520
but because we're trying to do something great.
link |
01:12:35.760
But then in terms of the risks,
link |
01:12:37.680
I think the really important question is to ask,
link |
01:12:42.680
what can we do today that will actually help
link |
01:12:45.480
make the outcome good, right?
link |
01:12:47.320
And dismissing the risk is not one of them.
link |
01:12:51.240
I find it quite funny often when I'm in discussion panels
link |
01:12:54.800
about these things,
link |
01:12:55.960
how the people who work for companies,
link |
01:13:01.200
are always like, oh, nothing to worry about,
link |
01:13:03.120
nothing to worry about, nothing to worry about.
link |
01:13:04.760
And it's only academics who sometimes express concerns.
link |
01:13:09.600
That's not surprising at all if you think about it.
link |
01:13:11.880
Right.
link |
01:13:12.880
Upton Sinclair quipped, right,
link |
01:13:15.200
that it's hard to make a man believe in something
link |
01:13:18.040
when his income depends on not believing in it.
link |
01:13:20.120
And frankly, we know a lot of these people in companies
link |
01:13:24.080
that they're just as concerned as anyone else.
link |
01:13:26.240
But if you're the CEO of a company,
link |
01:13:28.480
that's not something you want to go on record saying
link |
01:13:30.280
when you have silly journalists who are gonna put a picture
link |
01:13:33.440
of a Terminator robot when they quote you.
link |
01:13:35.720
So the issues are real.
link |
01:13:39.040
And the way I think about what the issue is,
link |
01:13:41.920
is basically the real choice we have is,
link |
01:13:48.040
first of all, are we gonna just dismiss the risks
link |
01:13:50.840
and say, well, let's just go ahead and build machines
link |
01:13:54.480
that can do everything we can do better and cheaper.
link |
01:13:57.560
Let's just make ourselves obsolete as fast as possible.
link |
01:14:00.200
What could possibly go wrong?
link |
01:14:01.720
That's one attitude.
link |
01:14:03.440
The opposite attitude, I think, is to say,
link |
01:14:06.400
here's this incredible potential,
link |
01:14:08.800
let's think about what kind of future
link |
01:14:11.960
we're really, really excited about.
link |
01:14:14.640
What are the shared goals that we can really aspire towards?
link |
01:14:18.480
And then let's think really hard
link |
01:14:19.960
about how we can actually get there.
link |
01:14:22.000
So start with, don't start thinking about the risks,
link |
01:14:24.160
start thinking about the goals.
link |
01:14:26.720
And then when you do that,
link |
01:14:28.200
then you can think about the obstacles you want to avoid.
link |
01:14:30.480
I often get students coming in right here into my office
link |
01:14:32.840
for career advice.
link |
01:14:34.120
I always ask them this very question,
link |
01:14:35.560
where do you want to be in the future?
link |
01:14:37.920
If all she can say is, oh, maybe I'll have cancer,
link |
01:14:40.640
maybe I'll get run over by a truck.
link |
01:14:42.480
Yeah, focus on the obstacles instead of the goals.
link |
01:14:44.280
She's just going to end up a hypochondriac paranoid.
link |
01:14:47.920
Whereas if she comes in with fire in her eyes
link |
01:14:49.920
and is like, I want to be there.
link |
01:14:51.840
And then we can talk about the obstacles
link |
01:14:53.960
and see how we can circumvent them.
link |
01:14:55.760
That's, I think, a much, much healthier attitude.
link |
01:14:58.880
And I feel it's very challenging to come up with a vision
link |
01:15:03.880
for the future, which we are unequivocally excited about.
link |
01:15:08.120
I'm not just talking now in the vague terms,
link |
01:15:10.320
like, yeah, let's cure cancer, fine.
link |
01:15:12.360
I'm talking about what kind of society
link |
01:15:14.720
do we want to create?
link |
01:15:15.840
What do we want it to mean to be human in the age of AI,
link |
01:15:20.360
in the age of AGI?
link |
01:15:22.840
So if we can have this conversation,
link |
01:15:25.360
broad, inclusive conversation,
link |
01:15:28.200
and gradually start converging towards
link |
01:15:31.400
some future with some direction, at least,
link |
01:15:34.240
that we want to steer towards, right,
link |
01:15:35.400
then we'll be much more motivated
link |
01:15:38.160
to constructively take on the obstacles.
link |
01:15:39.960
And I think, if I had to,
link |
01:15:43.560
if I try to wrap this up in a more succinct way,
link |
01:15:46.640
I think we can all agree already now
link |
01:15:51.480
that we should aspire to build AGI
link |
01:15:56.160
that doesn't overpower us, but that empowers us.
link |
01:16:05.160
And think of the many various ways that can do that,
link |
01:16:08.560
whether that's from my side of the world
link |
01:16:11.000
of autonomous vehicles.
link |
01:16:12.720
I'm personally actually from the camp
link |
01:16:14.720
that believes that human level intelligence
link |
01:16:16.800
is required to achieve something like vehicles
link |
01:16:20.480
that would actually be something we would enjoy using
link |
01:16:23.880
and being part of.
link |
01:16:25.120
So that's one example, and certainly there's a lot
link |
01:16:27.040
of other types of robots and medicine and so on.
link |
01:16:30.920
So focusing on those and then coming up with the obstacles,
link |
01:16:33.880
coming up with the ways that that can go wrong
link |
01:16:35.920
and solving those one at a time.
link |
01:16:38.160
And just because you can build an autonomous vehicle,
link |
01:16:41.520
even if you could build one
link |
01:16:42.800
that would drive just fine without you,
link |
01:16:45.080
maybe there are some things in life
link |
01:16:46.720
that we would actually want to do ourselves.
link |
01:16:48.400
That's right.
link |
01:16:49.240
Right, like, for example,
link |
01:16:51.400
if you think of our society as a whole,
link |
01:16:53.040
there are some things that we find very meaningful to do.
link |
01:16:57.200
And that doesn't mean we have to stop doing them
link |
01:16:59.640
just because machines can do them better.
link |
01:17:02.000
I'm not gonna stop playing tennis
link |
01:17:04.080
just the day someone builds a tennis robot that can beat me.
link |
01:17:07.360
People are still playing chess and even Go.
link |
01:17:09.600
Yeah, and in the very near term even,
link |
01:17:14.600
some people are advocating basic income to replace jobs.
link |
01:17:18.880
But if the government is gonna be willing
link |
01:17:20.840
to just hand out cash to people for doing nothing,
link |
01:17:24.040
then one should also seriously consider
link |
01:17:25.840
whether the government should also hire
link |
01:17:27.640
a lot more teachers and nurses
link |
01:17:29.480
and the kind of jobs which people often
link |
01:17:32.160
find great fulfillment in doing, right?
link |
01:17:34.440
We get very tired of hearing politicians saying,
link |
01:17:36.320
oh, we can't afford hiring more teachers,
link |
01:17:39.320
but we're gonna maybe have basic income.
link |
01:17:41.480
If we can have more serious research and thought
link |
01:17:44.000
into what gives meaning to our lives,
link |
01:17:46.200
the jobs give so much more than income, right?
link |
01:17:48.960
Mm hmm.
link |
01:17:50.520
And then think about in the future,
link |
01:17:53.320
what are the roles where we wanna have people
link |
01:18:00.000
continually feeling empowered by machines?
link |
01:18:03.040
And I think sort of, I come from Russia,
link |
01:18:06.120
from the Soviet Union.
link |
01:18:07.240
And I think for a lot of people in the 20th century,
link |
01:18:10.160
going to the moon, going to space was an inspiring thing.
link |
01:18:14.080
I feel like the universe of the mind,
link |
01:18:18.080
so AI, understanding, creating intelligence
link |
01:18:20.880
is that for the 21st century.
link |
01:18:23.240
So it's really surprising.
link |
01:18:24.400
And I've heard you mention this.
link |
01:18:25.640
It's really surprising to me,
link |
01:18:27.400
both on the research funding side,
link |
01:18:29.240
that it's not funded as greatly as it could be,
link |
01:18:31.760
but most importantly, on the politician side,
link |
01:18:34.760
that it's not part of the public discourse
link |
01:18:36.520
except in the killer bots terminator kind of view,
link |
01:18:40.800
that people are not yet, I think, perhaps excited
link |
01:18:44.880
by the possible positive future
link |
01:18:46.680
that we can build together.
link |
01:18:48.120
So we should be, because politicians usually just focus
link |
01:18:51.520
on the next election cycle, right?
link |
01:18:54.480
The single most important thing I feel we humans have learned
link |
01:18:57.160
in the entire history of science
link |
01:18:59.320
is that we are the masters of underestimation.
link |
01:19:02.040
We underestimated the size of our cosmos again and again,
link |
01:19:08.480
realizing that everything we thought existed
link |
01:19:10.200
was just a small part of something grander, right?
link |
01:19:12.240
Planet, solar system, the galaxy, clusters of galaxies.
link |
01:19:16.640
The universe.
link |
01:19:18.440
And we now know that the future has just
link |
01:19:23.120
so much more potential
link |
01:19:25.160
than our ancestors could ever have dreamt of.
link |
01:19:27.640
This cosmos, imagine if all of Earth
link |
01:19:33.600
was completely devoid of life,
link |
01:19:36.640
except for Cambridge, Massachusetts.
link |
01:19:39.560
Wouldn't it be kind of lame if all we ever aspired to
link |
01:19:42.680
was to stay in Cambridge, Massachusetts forever
link |
01:19:45.560
and then go extinct in one week,
link |
01:19:47.160
even though Earth was gonna continue on for longer?
link |
01:19:49.760
That sort of attitude I think we have now
link |
01:19:54.200
on the cosmic scale, life can flourish on Earth,
link |
01:19:57.800
not for four years, but for billions of years.
link |
01:20:00.840
I can even tell you about how to move it out of harm's way
link |
01:20:02.920
when the sun gets too hot.
link |
01:20:04.840
And then we have so much more resources out here,
link |
01:20:09.520
which today, maybe there are a lot of other planets
link |
01:20:12.480
with bacteria or cow like life on them,
link |
01:20:14.960
but most of this, all this opportunity seems,
link |
01:20:19.880
as far as we can tell, to be largely dead,
link |
01:20:22.440
like the Sahara Desert.
link |
01:20:23.560
And yet we have the opportunity to help life flourish
link |
01:20:28.480
around this for billions of years.
link |
01:20:30.280
So let's quit squabbling about
link |
01:20:34.080
whether some little border should be drawn
link |
01:20:36.480
one mile to the left or right,
link |
01:20:38.440
and look up into the skies and realize,
link |
01:20:41.080
hey, we can do such incredible things.
link |
01:20:44.040
Yeah, and that's, I think, why it's really exciting
link |
01:20:46.640
that you and others are connected
link |
01:20:49.440
with some of the work Elon Musk is doing,
link |
01:20:51.880
because he's literally going out into that space,
link |
01:20:54.480
really exploring our universe, and it's wonderful.
link |
01:20:57.000
That is exactly why Elon Musk is so misunderstood, right?
link |
01:21:02.000
People misconstrue him as some kind of pessimistic doomsayer.
link |
01:21:05.000
The reason he cares so much about AI safety
link |
01:21:07.640
is because he more than almost anyone else appreciates
link |
01:21:12.080
these amazing opportunities that we'll squander
link |
01:21:14.280
if we wipe ourselves out here on Earth.
link |
01:21:16.640
We're not just going to wipe out the next generation,
link |
01:21:19.680
all generations, and this incredible opportunity
link |
01:21:23.320
that's out there, and that would really be a waste.
link |
01:21:25.400
And AI, for people who think that it would be better
link |
01:21:30.080
to do without technology, let me just mention that
link |
01:21:34.680
if we don't improve our technology,
link |
01:21:36.320
the question isn't whether humanity is going to go extinct.
link |
01:21:39.320
The question is just whether we're going to get taken out
link |
01:21:41.160
by the next big asteroid or the next super volcano
link |
01:21:44.800
or something else dumb that we could easily prevent
link |
01:21:48.280
with more tech, right?
link |
01:21:49.840
And if we want life to flourish throughout the cosmos,
link |
01:21:53.160
AI is the key to it.
link |
01:21:56.120
As I mentioned in a lot of detail in my book right there,
link |
01:21:59.840
even many of the most inspired sci fi writers,
link |
01:22:04.880
I feel have totally underestimated the opportunities
link |
01:22:08.120
for space travel, especially to other galaxies,
link |
01:22:11.240
because they weren't thinking about the possibility of AGI,
link |
01:22:15.360
which just makes it so much easier.
link |
01:22:17.520
Right, yeah.
link |
01:22:18.440
So that goes to your view of AGI that enables our progress,
link |
01:22:24.080
that enables a better life.
link |
01:22:25.760
So that's a beautiful way to put it
link |
01:22:28.320
and then something to strive for.
link |
01:22:29.960
So Max, thank you so much.
link |
01:22:31.440
Thank you for your time today.
link |
01:22:32.560
It's been awesome.
link |
01:22:33.560
Thank you so much.
link |
01:22:34.400
Thanks.
link |
01:22:35.240
Have a great day.