
Max Tegmark: Life 3.0 | Lex Fridman Podcast #1



link |
00:00:00.000
As part of MIT course 6.S099, Artificial General Intelligence, I've gotten the chance to sit
link |
00:00:05.060
down with Max Tegmark.
link |
00:00:06.740
He is a professor here at MIT, he's a physicist, spent a large part of his career studying the
link |
00:00:13.780
mysteries of our cosmological universe, but he's also studied and delved into the beneficial
link |
00:00:20.660
possibilities and the existential risks of artificial intelligence.
link |
00:00:25.860
Amongst many other things, he's the cofounder of the Future of Life Institute, author of
link |
00:00:32.220
two books, both of which I highly recommend.
link |
00:00:35.140
First is Our Mathematical Universe, second is Life 3.0.
link |
00:00:40.220
He's truly an out of the box thinker and a fun personality, so I really enjoy talking
link |
00:00:45.060
to him.
link |
00:00:46.060
If you'd like to see more of these videos in the future, please subscribe and also click
link |
00:00:49.500
the little bell icon to make sure you don't miss any videos.
link |
00:00:52.980
Also, Twitter, LinkedIn, agi.mit.edu, if you want to watch other lectures or conversations
link |
00:01:00.260
like this one.
link |
00:01:01.260
Better yet, go read Max's book, Life 3.0, chapter 7 on goals is my favorite.
link |
00:01:07.980
It's really where philosophy and engineering come together and it opens with a quote by
link |
00:01:12.300
Dostoevsky: "The mystery of human existence lies not in just staying alive, but in finding
link |
00:01:18.460
something to live for."
link |
00:01:20.300
Lastly, I believe that every failure rewards us with an opportunity to learn, in that sense
link |
00:01:27.100
I've been very fortunate to fail in so many new and exciting ways and this conversation
link |
00:01:33.060
was no different.
link |
00:01:34.060
I've learned about something called Radio Frequency Interference, RFI, look it up.
link |
00:01:41.260
Apparently music and conversations from local radio stations can bleed into the audio that
link |
00:01:45.500
you're recording in such a way that almost completely ruins that audio.
link |
00:01:49.380
It's an exceptionally difficult sound source to remove.
link |
00:01:52.460
So, I've gotten the opportunity to learn how to avoid RFI in the future during recording
link |
00:01:59.620
sessions.
link |
00:02:00.620
I've also gotten the opportunity to learn how to use Adobe Audition and iZotope RX6
link |
00:02:06.260
to do some audio repair.
link |
00:02:11.740
Of course, this is an exceptionally difficult noise to remove.
link |
00:02:14.940
I am an engineer, I'm not an audio engineer, neither is anybody else in our group, but
link |
00:02:20.380
we did our best.
link |
00:02:21.780
Nevertheless, I thank you for your patience and I hope you're still able to enjoy this
link |
00:02:26.780
conversation.
link |
00:02:27.780
Do you think there's intelligent life out there in the universe?
link |
00:02:31.460
Let's open up with an easy question.
link |
00:02:33.420
I have a minority view here actually.
link |
00:02:36.260
When I give public lectures, I often ask for show of hands who thinks there's intelligent
link |
00:02:41.180
life out there somewhere else and almost everyone puts their hands up and when I ask why, they'll
link |
00:02:47.060
be like, oh, there's so many galaxies out there, there's got to be.
link |
00:02:52.060
But I'm a number nerd, right?
link |
00:02:54.660
So when you look more carefully at it, it's not so clear at all.
link |
00:02:59.180
When we talk about our universe, first of all, we don't mean all of space.
link |
00:03:03.140
We actually mean, I don't know, you can throw me the universe if you want, it's behind you
link |
00:03:05.900
there.
link |
00:03:06.900
We simply mean the spherical region of space from which light has had time to reach us
link |
00:03:14.540
so far during the 13.8 billion years since our big bang.
link |
00:03:19.460
There's more space here, but this is what we call a universe because that's all we have
link |
00:03:23.020
access to.
link |
00:03:24.140
So is there intelligent life here that's gotten to the point of building telescopes and computers?
link |
00:03:31.220
My guess is no, actually, the probability of it happening on any given planet is some
link |
00:03:39.500
number we don't know what it is.
link |
00:03:42.860
And what we do know is that the number can't be super high because there's over a billion
link |
00:03:49.340
Earth like planets in the Milky Way galaxy alone, many of which are billions of years
link |
00:03:54.780
older than Earth, and aside from some UFO believers, you know, there isn't much evidence
link |
00:04:01.740
that any super advanced civilization has come here at all.
link |
00:04:05.740
And so that's the famous Fermi paradox, right?
link |
00:04:08.700
And then if you work the numbers, what you find is that if you have no clue what the
link |
00:04:13.620
probability is of getting life on a given planet, so it could be 10 to the minus 10,
link |
00:04:18.500
10 to the minus 20, or 10 to the minus two, or any power of 10 is sort of equally likely
link |
00:04:23.620
if you want to be really open minded, that translates into it being equally likely that
link |
00:04:27.700
our nearest neighbor is 10 to the 16 meters away, 10 to the 17 meters away, 10 to the
link |
00:04:34.700
18.
link |
00:04:35.700
Now, by the time you get much less than 10 to the 16 already, we pretty much know there
link |
00:04:42.860
is nothing else that's close.
link |
00:04:46.220
Because it would have discovered us? Yeah, they would have discovered us long ago, or
link |
00:04:49.740
if they're really close, we would have probably noticed some engineering projects
link |
00:04:53.540
that they're doing.
link |
00:04:54.540
And if it's beyond 10 to the 26 meters, that's already outside of here.
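A rough sketch of the scaling behind this argument, using placeholder symbols that are not from the conversation: if p is the unknown probability that a given Earth-like planet develops a technological civilization and n is the number density of such planets, then the typical distance to our nearest technological neighbor goes roughly as

$$ d \sim (p\,n)^{-1/3}, \qquad \log_{10} d \approx -\tfrac{1}{3}\left(\log_{10} p + \log_{10} n\right), $$

so a prior that treats every power of ten for p as equally likely makes every power of ten for d, 10^16 meters, 10^17 meters, 10^18 meters, and so on, roughly equally likely as well.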
link |
00:05:00.140
So my guess is actually that we are the only life in here that's gotten to the
link |
00:05:06.340
point of building advanced tech, which I think puts a lot of responsibility on our
link |
00:05:14.020
shoulders, not to screw up, you know, I think people who take for granted that it's okay
link |
00:05:18.140
for us to screw up, have an accidental nuclear war or go extinct somehow because there's
link |
00:05:23.300
a sort of Star Trek like situation out there where some other life forms are going to come
link |
00:05:27.460
and bail us out and it doesn't matter so much.
link |
00:05:30.380
I think they're lulling us into a false sense of security.
link |
00:05:33.380
I think it's much more prudent to say, let's be really grateful for this amazing opportunity
link |
00:05:37.540
we've had and make the best of it just in case it is down to us.
link |
00:05:44.180
So from a physics perspective, do you think intelligent life, so it's unique from a sort
link |
00:05:50.220
of statistical view of the size of the universe, but from the basic matter of the universe,
link |
00:05:55.860
how difficult is it for intelligent life to come about, the kind of advanced tech
link |
00:06:00.100
building life? Is it implied in your statement that it's really difficult to create something
link |
00:06:06.300
like a human species?
link |
00:06:07.620
Well, I think what we know is that going from no life to having life that can do our level
link |
00:06:14.740
of tech, or going beyond that and actually settling our whole universe
link |
00:06:21.140
with life,
link |
00:06:22.300
there's some major roadblock there, some great filter, as it's sometimes called,
link |
00:06:30.700
which is tough to get through. That roadblock is either behind us or
link |
00:06:37.180
in front of us.
link |
00:06:38.620
I'm hoping very much that it's behind us.
link |
00:06:40.980
I'm super excited every time we get a new report from NASA saying they failed to find
link |
00:06:46.900
any life on Mars, because that suggests that the hard part, maybe it was getting the first
link |
00:06:53.260
ribosome or some very low-level kind of stepping stone.
link |
00:06:59.540
So we're home free, because if that's true, then the future is really only limited by
link |
00:07:03.620
our own imagination.
link |
00:07:04.620
It would be much suckier if it turns out that this level of life is kind of a dime a dozen,
link |
00:07:11.460
but maybe there's some other problem.
link |
00:07:12.780
Like as soon as a civilization gets advanced technology within 100 years, they get into
link |
00:07:17.220
some stupid fight with themselves and poof, you know, that would be a bummer.
link |
00:07:21.740
Yeah.
link |
00:07:22.740
So you've explored the mysteries of the universe, the cosmological universe, the one that's
link |
00:07:28.980
around us today. I think you've also begun to explore the other universe, which is sort
link |
00:07:36.340
of the mysterious universe of the mind, of intelligence, of intelligent life.
link |
00:07:42.860
So is there a common thread between your interests or the way you think about space and intelligence?
link |
00:07:48.260
Oh, yeah.
link |
00:07:49.260
When I was a teenager, I was already very fascinated by the biggest questions and I felt that the
link |
00:07:57.700
two biggest mysteries of all in science were our universe out there and our universe in
link |
00:08:03.660
here.
link |
00:08:04.660
Yeah.
link |
00:08:05.660
So it's quite natural after having spent a quarter of a century on my career thinking
link |
00:08:11.260
a lot about this one.
link |
00:08:12.260
And now I'm indulging in the luxury of doing research on this one.
link |
00:08:15.980
It's just so cool.
link |
00:08:17.660
I feel the time is ripe now for really deepening our understanding of this.
link |
00:08:25.260
Just start exploring this one.
link |
00:08:26.420
Yeah, because I think a lot of people view intelligence as something mysterious that
link |
00:08:32.500
can only exist in biological organisms like us and therefore dismiss all talk about artificial
link |
00:08:38.340
general intelligence as science fiction.
link |
00:08:41.260
But from my perspective as a physicist, I am a blob of quarks and electrons moving around
link |
00:08:47.260
in a certain pattern and processing information in certain ways.
link |
00:08:50.180
And this is also a blob of quarks and electrons.
link |
00:08:53.580
I'm not smarter than the water bottle because I'm made of a different kind of quark.
link |
00:08:57.860
I'm made of up quarks and down quarks, the exact same kind as this.
link |
00:09:02.220
There's no secret sauce, I think, in me; it's all about the pattern of the
link |
00:09:07.020
information processing.
link |
00:09:08.820
And this means that there's no law of physics saying that we can't create technology, which
link |
00:09:16.020
can help us by being incredibly intelligent and help us crack mysteries that we couldn't.
link |
00:09:21.740
In other words, I think we've really only seen the tip of the intelligence iceberg so
link |
00:09:25.580
far.
link |
00:09:26.580
Yeah.
link |
00:09:27.580
So the perceptronium, yeah, so you coined this amazing term, it's a hypothetical state
link |
00:09:34.380
of matter, sort of thinking from a physics perspective, what is the kind of matter that
link |
00:09:39.420
can help as you're saying, subjective experience emerge, consciousness emerge.
link |
00:09:44.500
So how do you think about consciousness from this physics perspective?
link |
00:09:50.140
Very good question.
link |
00:09:51.980
So, again, I think many people have underestimated our ability to make progress on this by convincing
link |
00:10:03.060
themselves it's hopeless because somehow we're missing some ingredient that we need.
link |
00:10:08.500
There's some new consciousness particle or whatever.
link |
00:10:13.020
I happen to think that we're not missing anything, and that the interesting thing about
link |
00:10:19.660
consciousness, what gives us this amazing subjective experience of colors and sounds and emotions
link |
00:10:25.900
and so on, is rather something at the higher level about the patterns of information processing.
link |
00:10:32.300
And that's why I like to think about this idea of perceptronium.
link |
00:10:38.300
What does it mean for an arbitrary physical system to be conscious in terms of what its
link |
00:10:44.220
particles are doing or its information is doing?
link |
00:10:47.100
I hate carbon chauvinism, this attitude that you have to be made of carbon atoms to be smart
link |
00:10:52.300
or conscious.
link |
00:10:53.300
So something about the information processing that this kind of matter performs.
link |
00:10:58.180
Yeah, and you can see I have my favorite equations here describing various fundamental
link |
00:11:02.700
aspects of the world.
link |
00:11:04.660
I think one day, maybe someone who's watching this will come up with the equations that
link |
00:11:09.620
information processing has to satisfy to be conscious.
link |
00:11:12.140
And I'm quite convinced there is a big discovery to be made there, because let's face it, we
link |
00:11:19.580
know that some information processing is conscious because we are conscious.
link |
00:11:25.900
But we also know that a lot of information processing is not conscious.
link |
00:11:28.980
Most of the information processing happening in your brain right now is not conscious.
link |
00:11:32.980
There are like 10 megabytes per second coming in even just through your visual system.
link |
00:11:38.380
You're not conscious about your heartbeat regulation or most things.
link |
00:11:42.940
Even if I just ask you to read what it says here, you look at it and then, oh, now you
link |
00:11:47.300
know what it said.
link |
00:11:48.300
But you're not aware of how the computation actually happened.
link |
00:11:51.820
Your consciousness is like the CEO that got an email at the end with the final answer.
link |
00:11:57.020
So what is it that makes a difference?
link |
00:12:01.140
I think that's both a great science mystery, we're actually studying it a little bit in
link |
00:12:06.620
my lab here at MIT, but I also think it's a really urgent question to answer.
link |
00:12:12.260
For starters, I mean, if you're an emergency room doctor and you have an unresponsive patient
link |
00:12:16.460
coming in, wouldn't it be great if in addition to having a CT scanner, you had a consciousness
link |
00:12:24.180
scanner that could figure out whether this person is actually having locked-in syndrome
link |
00:12:30.780
or is actually comatose?
link |
00:12:33.580
And in the future, imagine if we build robots or machines that we can have really good
link |
00:12:40.740
conversations with, which I think is very likely to happen, right?
link |
00:12:45.100
Wouldn't you want to know if your home helper robot is actually experiencing anything or
link |
00:12:50.020
just like a zombie?
link |
00:12:52.980
Would you prefer it?
link |
00:12:53.980
What would you prefer?
link |
00:12:54.980
Would you prefer that it's actually unconscious so that you don't have to feel guilty about
link |
00:12:57.820
switching it off or giving it boring chores?
link |
00:12:59.980
What would you prefer?
link |
00:13:02.380
Well, certainly we would prefer, I would prefer the appearance of consciousness, but the question
link |
00:13:09.780
is whether the appearance of consciousness is different than consciousness itself.
link |
00:13:15.300
And sort of ask that as a question, do you think we need to understand what consciousness
link |
00:13:21.420
is, solve the hard problem of consciousness in order to build something like an AGI system?
link |
00:13:28.420
No.
link |
00:13:29.420
I don't think that.
link |
00:13:31.140
I think we will probably be able to build things even if we don't answer that question.
link |
00:13:36.220
But if we want to make sure that what happens is a good thing, we better solve it first.
link |
00:13:41.100
So it's a wonderful controversy you're raising there, where you have basically three points
link |
00:13:47.220
of view about the hard problem.
link |
00:13:50.220
There are two different points of view that both conclude that the hard problem of consciousness
link |
00:13:55.060
is BS.
link |
00:13:56.060
On one hand, you have some people like Daniel Dennett who say that consciousness is just
link |
00:14:01.100
BS because consciousness is the same thing as intelligence.
link |
00:14:05.140
There's no difference.
link |
00:14:06.580
So anything which acts conscious is conscious, just like we are.
link |
00:14:13.620
And then there are also a lot of people, including many top AI researchers I know, who say, oh,
link |
00:14:18.820
consciousness is just bullshit because of course machines can never be conscious.
link |
00:14:22.820
They're always going to be zombies, so you never have to feel guilty about how you treat them.
link |
00:14:28.020
And then there's a third group of people, including Giulio Tononi, for example, and another, and
link |
00:14:35.380
Christof Koch and a number of others. I would put myself in this middle camp, who say that
link |
00:14:40.020
actually some information processing is conscious and some is not.
link |
00:14:44.260
So let's find the equation which can be used to determine which it is.
link |
00:14:49.380
And I think we've just been a little bit lazy kind of running away from this problem for
link |
00:14:53.980
a long time.
link |
00:14:55.100
It's been almost taboo to even mention the C word in a lot of circles, but we
link |
00:15:01.940
should stop making excuses.
link |
00:15:03.700
This is a science question and there are ways we can even test any theory that makes predictions
link |
00:15:10.940
for this.
link |
00:15:12.140
And coming back to this helper robot, I mean, so you said you would want your helper robot
link |
00:15:16.060
to certainly act conscious and treat you, like have conversations with you and stuff.
link |
00:15:21.340
But wouldn't you, would you feel a little bit creeped out if you realized that it was
link |
00:15:24.860
just a glossed-up tape recorder, you know, that was just a zombie and is faking emotion?
link |
00:15:31.700
Would you prefer that it actually had an experience or would you prefer that it's actually not
link |
00:15:37.220
experiencing anything so you feel, you don't have to feel guilty about what you do to it?
link |
00:15:42.300
It's such a difficult question because, you know, it's like when you're in a relationship
link |
00:15:46.580
and you say, well, I love you and the other person said I love you back.
link |
00:15:49.860
It's like asking, well, do they really love you back or are they just saying they love
link |
00:15:53.860
you back?
link |
00:15:54.860
Don't you really want them to actually love you?
link |
00:15:59.620
It's hard to, it's hard to really know the difference between everything seeming like
link |
00:16:08.100
there's consciousness present, there's intelligence present, there's affection, passion, love,
link |
00:16:14.820
and it actually being there.
link |
00:16:16.180
I'm not sure.
link |
00:16:17.180
Do you have...
link |
00:16:18.180
Can I ask you a question about this?
link |
00:16:19.180
Yes.
link |
00:16:20.180
To make it a bit more pointed.
link |
00:16:21.180
So Mass General Hospital is right across the river, right?
link |
00:16:23.140
Suppose you're going in for a medical procedure and they're like, you know, for anesthesia
link |
00:16:29.180
what we're going to do is we're going to give you muscle relaxants so you won't be able
link |
00:16:32.180
to move and you're going to feel excruciating pain during the whole surgery but you won't
link |
00:16:36.140
be able to do anything about it.
link |
00:16:37.660
But then we're going to give you this drug that erases your memory of it.
link |
00:16:42.020
Would you be cool about that?
link |
00:16:45.420
What's the difference whether you're conscious about it or not if there's no behavioral change,
link |
00:16:51.100
right?
link |
00:16:52.100
Right.
link |
00:16:53.100
And that's a really clear way to put it.
link |
00:16:55.220
Yeah, it feels like in that sense, experiencing it is a valuable quality.
link |
00:17:01.100
So actually being able to have subjective experiences, at least in that case, is valuable.
link |
00:17:09.140
And I think we humans have a little bit of a bad track record also of making these self
link |
00:17:14.060
serving arguments that other entities aren't conscious.
link |
00:17:17.940
You know, people often say, oh, these animals can't feel pain.
link |
00:17:20.700
Right.
link |
00:17:21.700
It's okay to boil lobsters because we asked them if it hurt and they didn't say anything.
link |
00:17:25.580
And now there was just a paper out saying lobsters do feel pain when you boil them
link |
00:17:29.180
and they're banning it in Switzerland.
link |
00:17:31.180
And we did this with slaves too and said, oh, they don't mind.
link |
00:17:36.300
Maybe they aren't conscious, or women don't have souls, or whatever.
link |
00:17:41.180
So I'm a little bit nervous when I hear people just take as an axiom that machines can't
link |
00:17:46.540
have experience ever.
link |
00:17:48.900
I think this is just a really fascinating science question, that's what it is.
link |
00:17:52.500
Let's research it and try to figure out what it is that makes the difference between unconscious
link |
00:17:57.420
intelligent behavior and conscious intelligent behavior.
link |
00:18:01.220
So in terms of, so if you think of a Boston Dynamics humanoid robot being sort of pushed
link |
00:18:07.140
around with a broom, it starts pushing on the consciousness question.
link |
00:18:13.420
So let me ask, do you think an AGI system, as a few neuroscientists believe, needs to
link |
00:18:20.060
have a physical embodiment, needs to have a body or something like a body?
link |
00:18:25.860
No, I don't think so.
link |
00:18:28.340
You mean to have a conscious experience?
link |
00:18:30.620
To have consciousness.
link |
00:18:33.140
I do think it helps a lot to have a physical embodiment to learn the kind of things about
link |
00:18:37.860
the world that are important to us humans for sure.
link |
00:18:42.820
But I don't think the physical embodiment is necessary after you've learned it.
link |
00:18:47.460
Just have the experience.
link |
00:18:48.860
Think about it when you're dreaming, right?
link |
00:18:51.500
Your eyes are closed, you're not getting any sensory input, you're not behaving or moving
link |
00:18:55.500
in any way, but there's still an experience there, right?
link |
00:18:59.780
And so clearly the experience that you have when you see something cool in your dreams
link |
00:19:03.220
isn't coming from your eyes, it's just the information processing itself in your brain,
link |
00:19:08.660
which is that experience, right?
link |
00:19:11.100
But to put it another way, I'll say, because it comes from neuroscience, the reason you
link |
00:19:16.660
want to have a body, a physical, something like a physical system, is because you want
link |
00:19:24.620
to be able to preserve something.
link |
00:19:27.100
In order to have a self, you could argue, you'd need to have some kind of embodiment
link |
00:19:35.740
of self to want to preserve.
link |
00:19:38.180
Well, now we're getting a little bit anthropomorphic, anthropomorphizing things, maybe talking about
link |
00:19:45.940
self preservation instincts.
link |
00:19:47.820
We are evolved organisms, right?
link |
00:19:50.700
So Darwinian evolution endowed us and other evolved organisms with self preservation instinct
link |
00:19:57.020
because those that didn't have those self preservation genes got cleaned out of the gene pool.
link |
00:20:03.100
But if you build an artificial general intelligence, the mind space that you can design is much,
link |
00:20:09.180
much larger than just the specific subset of minds that can evolve.
link |
00:20:14.500
So an AGI mind doesn't necessarily have to have any self preservation instinct.
link |
00:20:19.260
It also doesn't necessarily have to be so individualistic as us.
link |
00:20:24.100
Like imagine if you could just, first of all, we're also very afraid of death, you know,
link |
00:20:28.140
as opposed to if you could back yourself up every five minutes and then your airplane is about
link |
00:20:32.180
to crash.
link |
00:20:33.180
You're like, shucks, I'm just, I'm going to lose the last five minutes of experiences
link |
00:20:37.340
since my last cloud backup, dang, you know, it's not as big a deal.
link |
00:20:41.580
Or if we could just copy experiences between our minds easily, like which we could easily
link |
00:20:47.380
do if we were silicon based, right?
link |
00:20:50.620
Then maybe we would feel a little bit more like a hive mind, actually.
link |
00:20:55.860
So I don't think we should take for granted at all that AGI will
link |
00:21:01.220
have to have any of those sort of competitive alpha male instincts.
link |
00:21:06.820
Right.
link |
00:21:07.820
On the other hand, you know, this is really interesting because I think some people go
link |
00:21:12.820
too far and say, of course, we don't have to have any concerns either that advanced
link |
00:21:17.900
AI will have those instincts because we can build anything we want.
link |
00:21:22.700
There's a very nice set of arguments going back to Steve Omohundro and
link |
00:21:27.420
Nick Bostrom and others just pointing out that when we build machines, we normally build
link |
00:21:32.900
them with some kind of goal, you know, win this chess game, drive this car safely or
link |
00:21:37.700
whatever.
link |
00:21:38.700
And as soon as you put a goal into a machine, especially if it's kind of an open-ended goal
link |
00:21:42.540
and the machine is very intelligent, it'll break that down into a bunch of sub goals.
link |
00:21:48.460
And one of those goals will almost always be self preservation because if it breaks
link |
00:21:53.500
or dies in the process, it's not going to accomplish the goal, right?
link |
00:21:56.140
Like, suppose you just build a little, you have a little robot and you tell it to go
link |
00:21:59.540
down to the Star Market here and get you some food and cook your Italian dinner,
link |
00:22:05.460
you know, and then someone mugs it and tries to break it on the way.
link |
00:22:09.540
That robot has an incentive to not get destroyed and to defend itself or run away, because otherwise
link |
00:22:15.380
it's going to fail at cooking your dinner.
link |
00:22:17.780
It's not afraid of death, but it really wants to complete the dinner cooking goal.
link |
00:22:22.940
So it will have a self preservation instinct.
link |
00:22:24.780
It will continue being a functional agent.
link |
00:22:26.820
Yeah.
link |
00:22:27.820
And similarly, if you give any kind of more ambitious goal to an AGI, it's very
link |
00:22:35.860
likely it will want to acquire more resources so it can do that better.
link |
00:22:39.940
And it's exactly from those sort of sub goals that we might not have intended that some
link |
00:22:44.500
of the concerns about AGI safety come, you give it some goal that seems completely harmless.
link |
00:22:50.740
And then before you realize it, it's also trying to do these other things which you
link |
00:22:55.540
didn't want it to do and it's maybe smarter than us.
link |
00:22:59.220
So let me pause, just because I, in a very kind of human-centric way, see fear
link |
00:23:08.220
of death as a valuable motivator.
link |
00:23:11.900
So you don't think you think that's an artifact of evolution.
link |
00:23:17.220
So that's the kind of mind space evolution created that we're sort of almost obsessed
link |
00:23:21.980
about self preservation.
link |
00:23:22.980
Yeah.
link |
00:23:23.980
Some kind of genetic well, you don't think that's necessary to be afraid of death.
link |
00:23:29.500
So not just a kind of sub goal of self preservation just so you can keep doing the thing, but
link |
00:23:34.980
more fundamentally sort of have the finite thing like this ends for you at some point.
link |
00:23:42.980
Interesting.
link |
00:23:43.980
Do I think it's necessary for what precisely?
link |
00:23:47.500
For intelligence, but also for consciousness.
link |
00:23:51.020
So for those for both, do you think really like a finite death and the fear of it is
link |
00:23:58.220
important?
link |
00:24:01.020
So before I can answer, before we can agree on whether it's necessary for intelligence
link |
00:24:06.980
or for consciousness, we should be clear on how we define those two words because a lot
link |
00:24:10.660
of really smart people define them in very different ways.
link |
00:24:13.340
I was on this panel with AI experts and they couldn't agree on
link |
00:24:18.500
how to define intelligence even.
link |
00:24:20.180
So I define intelligence simply as the ability to accomplish complex goals.
link |
00:24:24.860
I like your broad definition because again, I don't want to be a carbon chauvinist.
link |
00:24:30.740
And in that case, no, certainly it doesn't require fear of death.
link |
00:24:36.580
I would say AlphaGo and AlphaZero are quite intelligent.
link |
00:24:40.100
I don't think AlphaZero has any fear of being turned off because it doesn't even understand the
link |
00:24:44.260
concept of it. And similarly, consciousness, I mean, you can certainly imagine a very simple
link |
00:24:52.180
kind of experience; if certain plants have any kind of experience, I don't think they're
link |
00:24:57.660
very afraid of dying, and there's nothing they can do about it anyway, much.
link |
00:25:00.940
So there wasn't that much value in it. But more seriously, I think if you ask not just about
link |
00:25:08.420
being conscious, but maybe about having what we might call an exciting life, where
link |
00:25:15.460
you feel passion and really appreciate the things, maybe there, somehow,
link |
00:25:23.300
perhaps it does help having a backdrop that, hey, it's finite, you know, let's make the
link |
00:25:29.180
most of this, let's live to the fullest.
link |
00:25:31.380
So if you knew you were going to just live forever, do you think you would change your
link |
00:25:36.220
career? Yeah, I mean, in some perspective, it would
link |
00:25:40.500
be an incredibly boring life living forever.
link |
00:25:44.020
So in the sort of loose, subjective terms that you said of something exciting and something
link |
00:25:49.740
in this that other humans would understand, I think, is yeah, it seems that the finiteness
link |
00:25:55.180
of it is important.
link |
00:25:56.660
Well, the good news I have for you then is based on what we understand about cosmology,
link |
00:26:02.420
everything in our universe is probably ultimately finite, although, big crunch or, what's
link |
00:26:10.460
the alternative to the big crunch?
link |
00:26:11.460
Yeah, we could have a big chill or a big crunch or a big rip or a big snap or death
link |
00:26:16.820
bubbles.
link |
00:26:17.820
All of them are more than a billion years away.
link |
00:26:20.140
So we certainly have vastly more time than our ancestors thought, but it's still
link |
00:26:29.500
pretty hard to squeeze in an infinite number of compute cycles, even though there are some
link |
00:26:35.580
loopholes that just might be possible.
link |
00:26:37.820
But I think, you know, some people like to say that you should live as if you're
link |
00:26:44.620
going to die in five years or so, and that's sort of optimal.
link |
00:26:48.100
Maybe it's good that we should build our civilization as if it's all finite, to
link |
00:26:54.740
be on the safe side.
link |
00:26:55.740
Right, exactly. So you mentioned defining intelligence as the ability to solve complex
link |
00:27:02.020
goals.
link |
00:27:03.020
So where would you draw a line?
link |
00:27:04.940
How would you try to define human level intelligence and super human level intelligence?
link |
00:27:10.940
Is consciousness part of that definition?
link |
00:27:13.380
No, consciousness does not come into this definition.
link |
00:27:16.860
So I think of intelligence as a spectrum, but there are very many different kinds of
link |
00:27:21.580
goals you can have.
link |
00:27:22.580
You can have a goal to be a good chess player, a good Go player, a good car driver, a good
link |
00:27:27.140
investor, good poet, etc.
link |
00:27:31.260
So intelligence, by its very nature, isn't something you can measure with
link |
00:27:35.740
one number, some overall goodness. No, there are some people who are better
link |
00:27:39.900
at this, some people are better at that.
link |
00:27:42.540
Right now we have machines that are much better than us at some very narrow tasks like multiplying
link |
00:27:48.380
large numbers fast, memorizing large databases, playing chess, playing go, soon driving cars.
link |
00:27:57.620
But there's still no machine that can match a human child in general intelligence.
link |
00:28:03.340
But artificial general intelligence, AGI, the name of your course, of course, that
link |
00:28:08.420
is by its very definition, the quest to build a machine that can do everything as well as
link |
00:28:16.460
we can.
link |
00:28:17.460
That's the old Holy Grail of AI, going back to its inception in the 60s.
link |
00:28:24.060
If that ever happens, of course, I think it's going to be the biggest transition in the
link |
00:28:27.500
history of life on Earth, but the big impact doesn't necessarily have to wait until machines
link |
00:28:33.860
are better than us at knitting.
link |
00:28:35.780
The really big change doesn't come exactly at the moment they're better than us at everything.
link |
00:28:41.940
The first big change is when they start becoming better
link |
00:28:45.820
than us at doing most of the jobs that we do, because that takes away much of the demand
link |
00:28:51.140
for human labor.
link |
00:28:53.380
And then the really warping change comes when they become better than us at AI research.
link |
00:29:01.300
Because right now, the time scale of AI research is limited by the human research and development
link |
00:29:07.900
cycle of years, typically, how long it takes from one release of some software or iPhone
link |
00:29:14.100
or whatever to the next.
link |
00:29:16.300
But once Google can replace 40,000 engineers by 40,000 equivalent pieces of software or
link |
00:29:25.820
whatever, then there's no reason that has to be years.
link |
00:29:29.660
It can be, in principle, much faster.
link |
00:29:32.020
And the time scale of future progress in AI and all of science and technology will be
link |
00:29:38.900
driven by machines, not humans.
link |
00:29:40.980
So it's this simple point which gives rise to this incredibly fun controversy about whether
link |
00:29:49.660
there can be an intelligence explosion, so-called singularity, as Vernor Vinge called it.
link |
00:29:54.540
The idea was articulated by I.J. Good way back in the fifties, obviously, but you can see
link |
00:30:00.060
Alan Turing and others thought about it even earlier.
link |
00:30:07.220
You asked me exactly how I define human-level intelligence.
link |
00:30:12.980
So the glib answer is just to say something which is better than us at all cognitive tasks
link |
00:30:18.540
or better than any human at all cognitive tasks.
link |
00:30:21.980
But the really interesting bar, I think, goes a little bit lower than that, actually.
link |
00:30:25.900
It's when they're better than us at AI programming and general learning so that they can, if
link |
00:30:33.260
they want to, get better than us at anything by just starting out.
link |
00:30:37.340
So better is a key word there, and better is towards this kind of spectrum of the complexity
link |
00:30:43.100
of goals it's able to accomplish.
link |
00:30:45.740
And that's certainly a very clear definition of human-level intelligence.
link |
00:30:53.060
So there's, it's almost like a sea that's rising, you can do more and more and more
link |
00:30:56.300
things.
link |
00:30:57.300
It's actually a graphic that you show, it's a really nice way to put it.
link |
00:30:59.900
So there's some peaks and there's an ocean level elevating and you solve more and more
link |
00:31:04.340
problems.
link |
00:31:05.340
But, you know, just kind of to take a pause: we took a bunch of questions from a lot
link |
00:31:09.220
of social networks, and a bunch of people asked about a sort of slightly different direction
link |
00:31:14.380
on creativity and on things that perhaps aren't a peak.
link |
00:31:22.260
You know, human beings are flawed, and perhaps better means having contradictions,
link |
00:31:28.620
being flawed in some way.
link |
00:31:30.260
So let me sort of, yeah, start easy, first of all.
link |
00:31:34.980
So you have a lot of cool equations.
link |
00:31:36.620
Let me ask, what's your favorite equation, first of all?
link |
00:31:39.660
I know they're all like your children, but which one is that?
link |
00:31:43.580
This is the Schrödinger equation, it's the master key of quantum mechanics of the micro
link |
00:31:49.060
world.
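The equation being referred to here, the time-dependent Schrödinger equation, is conventionally written as

$$ i\hbar\,\frac{\partial}{\partial t}\,\Psi(\mathbf{r},t) = \hat{H}\,\Psi(\mathbf{r},t), $$

where $\Psi$ is the wave function of the system and $\hat{H}$ is its Hamiltonian operator.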
link |
00:31:50.060
So this equation can describe everything to do with atoms and molecules and all the
link |
00:31:55.340
way up to… Yeah, so, okay, so quantum mechanics is certainly a beautiful mysterious formulation
link |
00:32:04.020
of our world.
link |
00:32:05.020
So I'd like to sort of ask you, just as an example, it perhaps doesn't have the same
link |
00:32:10.740
beauty as physics does, but in abstract mathematics, Andrew Wiles, who proved
link |
00:32:17.420
Fermat's Last Theorem.
link |
00:32:19.460
So I just saw this recently and it kind of caught my eye a little bit.
link |
00:32:24.180
This is 358 years after it was conjectured.
link |
00:32:27.980
So this very simple formulation, everybody tried to prove it, everybody failed.
link |
00:32:32.940
And so here this guy comes along and eventually proves it, and then fails to prove it, and then
link |
00:32:38.820
proves it again in 94.
link |
00:32:41.340
And he said like the moment when everything connected into place, in an interview he said
link |
00:32:45.940
it was so indescribably beautiful.
link |
00:32:47.980
That moment when you finally realize the connecting piece of two conjectures, he said it was so
link |
00:32:53.580
indescribably beautiful, it was so simple and so elegant.
link |
00:32:56.940
I couldn't understand how I'd missed it and I just stared at it in disbelief for 20
link |
00:33:01.540
minutes.
link |
00:33:02.540
Then during the day I walked around the department and I keep coming back to my desk looking
link |
00:33:08.100
to see if it was still there.
link |
00:33:09.820
It was still there.
link |
00:33:10.820
I couldn't contain myself.
link |
00:33:11.820
I was so excited.
link |
00:33:12.820
It was the most important moment of my working life.
link |
00:33:16.180
Nothing I ever do again will mean as much.
link |
00:33:18.940
So that particular moment and it kind of made me think of what would it take?
link |
00:33:24.860
And I think we have all been there at small levels.
link |
00:33:28.380
Maybe let me ask, have you had a moment like that in your life where you just had an idea
link |
00:33:34.820
it's like, wow, yes.
link |
00:33:40.060
I wouldn't mention myself in the same breath as Andrew Wiles, but I certainly had a number
link |
00:33:44.700
of aha moments when I realized something very cool about physics that just completely made
link |
00:33:54.820
my head explode.
link |
00:33:55.820
In fact, for some of my favorite discoveries I made, I later realized that they had
link |
00:33:59.580
been discovered earlier by someone who sometimes got quite famous for it.
link |
00:34:03.340
So it's too late for me to even publish it, but that doesn't diminish in any way
link |
00:34:07.460
the emotional experience you have when you realize it, like, wow.
link |
00:34:12.340
So what would it take in that moment, that wow, that was yours in that moment?
link |
00:34:17.460
So what do you think it takes for an intelligent system, an AGI system, an AI system to have
link |
00:34:23.420
a moment like that?
link |
00:34:24.980
It's a tricky question because there are actually two parts to it, right?
link |
00:34:29.420
One of them is, can it accomplish that proof, can it prove that you can never write a to
link |
00:34:37.260
the n plus b to the n equals c to the n for all integers, etc., etc., when n is bigger
link |
00:34:46.420
than 2?
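For reference, the theorem in question, Fermat's Last Theorem, states that there are no positive integers a, b, c satisfying

$$ a^{n} + b^{n} = c^{n} $$

for any integer n greater than 2; Wiles's proof was completed in 1994 and published in 1995.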
link |
00:34:49.420
That's simply the question about intelligence.
link |
00:34:51.580
Can you build machines that are that intelligent?
link |
00:34:54.420
And I think by the time we get a machine that can independently come up with that level
link |
00:34:59.860
of proofs, probably quite close to AGI.
link |
00:35:03.460
But the second question is a question about consciousness.
link |
00:35:07.860
When will we, how likely is it that such a machine would actually have any experience
link |
00:35:13.060
at all as opposed to just being like a zombie?
link |
00:35:16.500
And would we expect it to have some sort of emotional response to this or anything at
link |
00:35:22.940
all akin to human emotion where when it accomplishes its machine goal, it views it as something
link |
00:35:31.140
very positive and sublime and deeply meaningful.
link |
00:35:39.260
I would certainly hope that if in the future we do create machines that are our peers or
link |
00:35:45.260
even our descendants, I would certainly hope that they do have this sort of sublime appreciation
link |
00:35:53.700
of life in a way, my absolutely worst nightmare would be that at some point in the future,
link |
00:36:06.020
the distant future, maybe our cosmos is teeming with all this post biological life, doing
link |
00:36:10.620
all the seemingly cool stuff.
link |
00:36:13.180
And maybe the last humans by the time our species eventually fizzles out will be like,
link |
00:36:20.660
well, that's okay, because we're so proud of our descendants here and look, my worst
link |
00:36:26.140
nightmare is that we haven't solved the consciousness problem.
link |
00:36:30.580
And we haven't realized that these are all zombies, they're not aware of anything
link |
00:36:34.100
any more than a tape recorder has any kind of experience.
link |
00:36:37.900
So the whole thing has just become a play for empty benches.
link |
00:36:41.660
That would be like the ultimate zombie apocalypse to me.
link |
00:36:44.700
So I would much rather, in that case, that we have these beings which can really appreciate
link |
00:36:52.900
how amazing it is.
link |
00:36:57.060
And in that picture, what would be the role of creativity? A few people asked about
link |
00:37:02.260
creativity?
link |
00:37:03.260
Yeah.
link |
00:37:04.260
And do you think, when you think about intelligence, I mean, certainly the story you told at the
link |
00:37:08.700
beginning of your book involved, you know, creating movies and so on, sort of making
link |
00:37:14.100
money, you know, you can make a lot of money in our modern world with music and movies.
link |
00:37:18.580
So if you are an intelligent system, you may want to get good at that.
link |
00:37:23.100
But that's not necessarily what I mean by creativity.
link |
00:37:26.300
Is it important, on that spectrum of complex goals where the sea is rising, for there to be something
link |
00:37:32.620
creative, or am I being very human-centric in thinking creativity is somehow special relative
link |
00:37:39.940
to intelligence?
link |
00:37:41.940
My hunch is that we should think of creativity simply as an aspect of intelligence.
link |
00:37:50.940
And we have to be very careful with human vanity.
link |
00:37:57.820
We have this tendency to very often want to say, as soon as machines can do something,
link |
00:38:01.540
we try to diminish it and say, oh, but that's not like real intelligence, you know, it's
link |
00:38:05.980
not creative, or this or that or the other thing. If we ask ourselves to write down a
link |
00:38:12.620
definition of what we actually mean by being creative, what we mean by what Andrew Wiles
link |
00:38:18.500
did there, for example, don't we often mean that someone takes a very unexpected
link |
00:38:23.660
leap?
link |
00:38:26.060
It's not like taking 573 and multiplying it by 224 by just following straightforward cookbook
link |
00:38:33.740
like rules, right?
link |
00:38:36.500
You can maybe make a connection between two things that people have never thought was
link |
00:38:40.660
connected.
link |
00:38:41.660
It's very surprising.
link |
00:38:42.660
Something like that.
link |
00:38:44.300
I think this is an aspect of intelligence, and this is actually one of the most important
link |
00:38:50.660
aspects of it.
link |
00:38:53.260
Maybe the reason we humans tend to be better at it than traditional computers is because
link |
00:38:57.940
it's something that comes more naturally if you're a neural network than if you're a
link |
00:39:02.020
traditional logic gates based computer machine.
link |
00:39:05.820
We physically have all these connections, and if you activate here, activate here, activate
link |
00:39:11.900
here, it ping, you know, my hunch is that if we ever build a machine where you could
link |
00:39:20.980
just give it the task, hey, hey, you say, hey, you know, I just realized I want to travel
link |
00:39:31.020
around the world instead this month.
link |
00:39:32.380
Can you teach my AGI course for me?
link |
00:39:34.700
And it's like, okay, I'll do it.
link |
00:39:36.100
And it does everything that you would have done and it improvises and stuff.
link |
00:39:39.860
That would in my mind involve a lot of creativity.
link |
00:39:42.860
Yeah, so it's actually a beautiful way to put it.
link |
00:39:45.660
I think we do try to grasp at the definition of intelligence as everything we don't understand
link |
00:39:54.540
how to build.
link |
00:39:57.580
So we as humans try to find things that we have and machines don't have, and maybe creativity
link |
00:40:02.180
is just one of the things, one of the words we used to describe that.
link |
00:40:05.940
That's a really interesting way to put it.
link |
00:40:06.940
I don't think we need to be that defensive.
link |
00:40:09.820
I don't think anything good comes out of saying, we're somehow special, you know.
link |
00:40:14.700
There are many examples in history of where trying to pretend they were somehow
link |
00:40:27.540
superior to all other intelligent beings has led to pretty bad results, right?
link |
00:40:36.220
Nazi Germany, they said that they were somehow superior to other people.
link |
00:40:39.700
Today, we still do a lot of cruelty to animals by saying that we're so superior somehow and, on
link |
00:40:44.580
the other hand, they can't feel pain; slavery was justified by the same kind of really weak
link |
00:40:50.500
arguments.
link |
00:40:52.420
And if we actually go ahead and build artificial general intelligence that
link |
00:40:58.700
can do things better than us,
link |
00:41:01.100
I don't think we should try to found our self worth on some sort of bogus claims of superiority
link |
00:41:08.980
in terms of our intelligence.
link |
00:41:11.940
I think we should instead find our calling and the meaning of life from the experiences
link |
00:41:21.780
that we have.
link |
00:41:22.780
Right.
link |
00:41:23.780
You know, I can have very meaningful experiences even if there are other people who are smarter
link |
00:41:30.260
than me, you know, when I go to a faculty meeting here and I'm talking about something and
link |
00:41:35.860
then I suddenly realize, oh, he has a Nobel Prize, he has a Nobel Prize, he has a Nobel
link |
00:41:39.420
Prize.
link |
00:41:40.420
Yeah.
link |
00:41:41.420
You know, it doesn't make me enjoy life any less or enjoy talking to those people less.
link |
00:41:47.660
Of course not.
link |
00:41:49.780
And contrary to that, I feel very honored and privileged to get to interact with other
link |
00:41:57.420
very intelligent beings that are better than me at a lot of stuff.
link |
00:42:00.820
So I don't think there's any reason why we can't have the same approach with intelligent
link |
00:42:05.420
machines.
link |
00:42:06.420
That's a really interesting, so people don't often think about that.
link |
00:42:08.900
They think about if there's machines that are more intelligent, you naturally think
link |
00:42:14.380
that that's not going to be a beneficial type of intelligence.
link |
00:42:19.100
You don't realize it could be, you know, like peers with Nobel Prizes that would be just
link |
00:42:24.060
fun to talk with.
link |
00:42:25.060
And they might be clever about certain topics and you can have fun having a few drinks with
link |
00:42:30.580
them.
link |
00:42:31.580
Well, also, you know, another example we can all relate to of why it doesn't have to be a
link |
00:42:38.620
terrible thing to be in the presence of people who are even smarter than us all
link |
00:42:42.580
around is when you and I were both two years old, I mean, our parents were much more intelligent
link |
00:42:47.980
than us.
link |
00:42:48.980
Right.
link |
00:42:49.980
Worked out okay.
link |
00:42:50.980
Because their goals were aligned with our goals.
link |
00:42:54.140
And that, I think, is really the number one key issue we have to solve,
link |
00:43:01.380
the value alignment problem, exactly, because people who see too many Hollywood movies with
link |
00:43:07.380
lousy science fiction plot lines, they worry about the wrong thing, right?
link |
00:43:12.260
They worry about some machine suddenly turning evil.
link |
00:43:16.500
It's not malice that should be the concern.
link |
00:43:21.500
It's competence.
link |
00:43:23.000
By definition, intelligence makes you very competent. If you have a more intelligent
link |
00:43:29.580
Go-playing computer playing against a less intelligent one, and we define intelligence
link |
00:43:35.300
as the ability to accomplish the goal of winning, right?
link |
00:43:37.740
It's going to be the more intelligent one that wins.
link |
00:43:40.780
And if you have a human and then you have an AGI that's more intelligent in all ways
link |
00:43:47.860
and they have different goals, guess who's going to get their way, right?
link |
00:43:50.500
So I was just reading about this particular rhinoceros species that was driven extinct
link |
00:43:58.060
just a few years ago.
link |
00:43:59.060
And a bummer, looking at this cute picture of a mommy rhinoceros with its child, you
link |
00:44:05.740
know, and why did we humans drive it to extinction?
link |
00:44:09.140
It wasn't because we were evil rhino haters as a whole.
link |
00:44:12.860
It was just because our goals weren't aligned with those of the rhinoceros and it didn't
link |
00:44:16.380
work out so well for the rhinoceros because we were more intelligent, right?
link |
00:44:19.660
So I think it's just so important that if we ever do build AGI before we unleash anything,
link |
00:44:27.220
we have to make sure that it learns to understand our goals, that it adopts our goals and retains
link |
00:44:37.380
those goals.
link |
00:44:38.380
So the cool interesting problem there is being able, us as human beings, trying to formulate
link |
00:44:45.740
our values.
link |
00:44:47.240
So you know, you could think of the United States Constitution as a way that people sat
link |
00:44:52.540
down at the time a bunch of white men, which is a good example, I should say.
link |
00:44:59.780
They formulated the goals for this country and a lot of people agree that those goals
link |
00:45:03.460
actually held up pretty well.
link |
00:45:05.540
It's an interesting formulation of values and failed miserably in other ways.
link |
00:45:09.600
So for the value alignment problem and the solution to it, we have to be able to put
link |
00:45:15.500
on paper or in a program, human values, how difficult do you think that is?
link |
00:45:23.420
Very.
link |
00:45:24.420
But it's so important.
link |
00:45:25.980
We really have to give it our best and it's difficult for two separate reasons.
link |
00:45:30.340
There's the technical value alignment problem of figuring out just how to make machines
link |
00:45:37.660
understand our goals, adopt them and retain them.
link |
00:45:40.660
And then there's the separate part of it, the philosophical part, whose values anyway.
link |
00:45:46.140
And since it's not like we have any great consensus on this planet on values, what mechanism
link |
00:45:51.700
should we create then to aggregate and decide, okay, what's a good compromise?
link |
00:45:56.780
That second discussion can't just be left to tech nerds like myself, right?
link |
00:46:01.260
That's right.
link |
00:46:02.260
And if we refuse to talk about it and then AGI gets built, who's going to be actually
link |
00:46:06.820
making the decision about whose values, it's going to be a bunch of dudes in some tech
link |
00:46:10.660
company, right?
link |
00:46:12.380
And are they necessarily so representative of all of humankind that we want to just
link |
00:46:18.420
entrust it to them?
link |
00:46:19.580
Are they even uniquely qualified to speak to future human happiness just because they're
link |
00:46:25.220
good at programming AI?
link |
00:46:26.460
I'd much rather have this be a really inclusive conversation.
link |
00:46:30.380
But do you think it's possible?
link |
00:46:32.700
You create a beautiful vision that includes sort of the diversity, cultural diversity
link |
00:46:38.820
and various perspectives on discussing rights, freedoms, human dignity.
link |
00:46:43.900
But how hard is it to come to that consensus?
link |
00:46:46.620
It's certainly a really important thing that we should all try to do, but do
link |
00:46:52.140
you think it's feasible?
link |
00:46:54.460
I think there's no better way to guarantee failure than to refuse to talk about it or
link |
00:47:01.660
refuse to try.
link |
00:47:02.980
And I also think it's a really bad strategy to say, okay, let's first have a discussion
link |
00:47:08.060
for a long time.
link |
00:47:09.060
And then once we reach complete consensus, then we'll try to load it into some machine.
link |
00:47:13.540
No, we shouldn't let perfect be the enemy of good.
link |
00:47:16.980
Instead, we should start with the kindergarten ethics that pretty much everybody agrees on
link |
00:47:22.140
and put that into our machines now.
link |
00:47:24.580
We're not even doing that.
link |
00:47:26.100
Look, anyone who builds a passenger aircraft wants it to never under any circumstances
link |
00:47:32.980
fly into a building or mountain, right?
link |
00:47:35.900
Yet the September 11 hijackers were able to do that.
link |
00:47:38.860
And even more embarrassingly, Andreas Lubitz, this depressed Germanwings pilot, when he
link |
00:47:44.220
flew his passenger jet into the Alps, killing over 100 people, he just told the autopilot
link |
00:47:50.220
to do it.
link |
00:47:51.220
He told the freaking computer to change the altitude to 100 meters.
link |
00:47:55.140
And even though it had the GPS maps, everything, the computer was like, okay. So we should
link |
00:48:01.820
take those very basic values, though, where the problem is not that we don't agree.
link |
00:48:07.300
The problem is just that we've been too lazy to try to put it into our machines and make sure
link |
00:48:12.460
that from now on, airplanes, which all have computers in them, will just
link |
00:48:17.460
refuse to do something like that.
link |
00:48:19.820
We go into safe mode, maybe lock the cockpit door, go to the nearest airport, and there's
link |
00:48:25.580
so much other technology in our world as well now where it's really becoming quite timely
link |
00:48:31.340
to put in some sort of very basic values like this.
link |
00:48:34.300
Even in cars, we've had enough vehicle terrorism attacks by now where people have driven trucks
link |
00:48:41.460
and vans into pedestrians that it's not at all a crazy idea to just have that hardwired
link |
00:48:47.300
into the car, because yeah, there are a lot of, there's always going to be people who
link |
00:48:51.420
for some reason want to harm others, but most of those people don't have the technical
link |
00:48:55.620
expertise to figure out how to work around something like that.
link |
00:48:58.620
So if the car just won't do it, it helps.
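To make the "kindergarten ethics" idea concrete, here is a minimal sketch of a veto layer that refuses commands violating a basic hard-coded value. Everything in it (the Command type, the terrain_elevation_m lookup, the clearance margin) is a hypothetical illustration, not real avionics or automotive code.

```python
# Hypothetical sketch: a thin safety layer that vetoes commanded altitudes
# which would put the aircraft below terrain plus a safety margin.
from dataclasses import dataclass

@dataclass
class Command:
    altitude_m: float      # altitude the operator requested
    position: tuple        # (latitude, longitude)

def terrain_elevation_m(position):
    # Stand-in for a lookup into an onboard terrain database (assumed to exist).
    lat, _lon = position
    return 4000.0 if 45.0 < lat < 47.0 else 0.0   # crude "mountains here" region

MIN_CLEARANCE_M = 300.0

def vet_command(cmd: Command):
    """Return the command if safe, otherwise a clamped safe-mode fallback."""
    floor = terrain_elevation_m(cmd.position) + MIN_CLEARANCE_M
    if cmd.altitude_m < floor:
        return Command(altitude_m=floor, position=cmd.position), "SAFE_MODE"
    return cmd, "OK"

# A commanded descent to 100 m over high terrain is simply refused.
cmd, status = vet_command(Command(altitude_m=100.0, position=(46.0, 6.5)))
print(status, cmd.altitude_m)   # SAFE_MODE 4300.0
```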
link |
00:49:01.780
So let's start there.
link |
00:49:02.940
So there's a lot of, that's a great point.
link |
00:49:05.020
So not chasing perfect.
link |
00:49:06.900
There's a lot of things that most of the world agrees on.
link |
00:49:10.780
Yeah, let's start there.
link |
00:49:11.940
Let's start there.
link |
00:49:12.940
And then once we start there, we'll also get into the habit of having these kind of conversations
link |
00:49:18.140
about, okay, what else should we put in here and have these discussions?
link |
00:49:21.940
This should be a gradual process then.
link |
00:49:24.100
Great.
link |
00:49:25.100
So, but that also means describing these things and describing it to a machine.
link |
00:49:31.380
So one thing, we had a few conversations with Stephen Wolfram.
link |
00:49:35.620
I'm not sure if you're familiar with Stephen Wolfram.
link |
00:49:37.140
Oh yeah, I know him quite well.
link |
00:49:38.500
So he has, you know, he works with a bunch of things, but you know, cellular automata,
link |
00:49:43.380
these simple computable things, these computation systems.
link |
00:49:47.660
And he kind of mentioned that, you know, we probably have already within these systems
link |
00:49:52.380
already something that's AGI, meaning like we just don't know it because we can't talk
link |
00:49:59.580
to it.
link |
00:50:00.580
So give me a chance to try to at least form a question out of this,
link |
00:50:06.380
because I think it's an interesting idea to think that we can have intelligent systems,
link |
00:50:12.780
but we don't know how to describe something to them and they can't communicate with us.
link |
00:50:17.260
I know you're doing a little bit of work in explainable AI, trying to get AI to explain
link |
00:50:21.220
itself.
link |
00:50:22.220
So what are your thoughts on natural language processing or some kind of other communication?
link |
00:50:28.340
How does the AI explain something to us?
link |
00:50:30.220
How do we explain something to it, to machines?
link |
00:50:33.740
Or you think of it differently?
link |
00:50:35.420
So there are two separate parts to your question there.
link |
00:50:40.100
One of them has to do with communication, which is super interesting and I'll get to
link |
00:50:43.900
that in a sec.
link |
00:50:44.900
The other is whether we already have AGI, we just haven't noticed it.
link |
00:50:50.100
There, I beg to differ.
link |
00:50:54.340
I don't think there's anything in any cellular automaton or the internet itself
link |
00:50:58.420
or whatever that has artificial general intelligence, in that it can't really do everything
link |
00:51:05.400
we humans can do better.
link |
00:51:06.980
I think the day that happens, we will very soon notice, and we'll probably
link |
00:51:14.100
notice even before, because it will happen in a very, very big way.
link |
00:51:17.980
For the second part though.
link |
00:51:18.980
Can I just, sorry.
link |
00:51:20.700
Because you have this beautiful way of formulating consciousness as information processing,
link |
00:51:30.260
and you can think of intelligence as information processing, and you can think of the entire
link |
00:51:33.740
universe as
link |
00:51:34.740
these particles and these systems roaming around that have this information processing
link |
00:51:40.220
power. You don't think there is something out there with the power to process information in the
link |
00:51:47.500
way that we human beings do, that just needs to be sort of connected to?
link |
00:51:55.460
It seems a little bit philosophical perhaps, but there's something compelling to the idea
link |
00:51:59.980
that the power is already there, the focus should be more on being able to communicate
link |
00:52:06.100
with it.
link |
00:52:07.100
Well, I agree that in a certain sense, the hardware processing power is already out there
link |
00:52:15.340
because you can think of our universe itself as being a computer already.
link |
00:52:21.180
It's constantly computing how to evolve the water waves in the River
link |
00:52:25.540
Charles and how to move the air molecules around. Seth Lloyd has pointed out,
link |
00:52:29.860
my colleague here, that you can even, in a very rigorous way, think of our entire universe
link |
00:52:33.940
as just being a quantum computer.
link |
00:52:35.660
It's pretty clear that our universe supports this amazing processing power because you
link |
00:52:40.900
can, even within this physics computer that we live in, build actual laptops
link |
00:52:46.580
and stuff.
link |
00:52:47.580
So clearly the power is there.
link |
00:52:49.140
It's just that most of the compute power that nature has is, in my opinion, kind of wasted
link |
00:52:53.420
on boring stuff like simulating yet another ocean wave somewhere where no one is even
link |
00:52:57.140
looking.
link |
00:52:58.140
So in a sense, what life does, what we are doing when we build computers is we're rechanneling
link |
00:53:03.820
all this compute that nature is doing anyway into doing things that are more interesting
link |
00:53:09.380
than just yet another ocean wave and do something cool here.
link |
00:53:14.220
So the raw hardware power is there for sure, and even just computing what's going to happen
link |
00:53:21.100
for the next five seconds in this water bottle, you know, it takes a ridiculous amount of
link |
00:53:25.540
compute if you do it on a human-built computer.
link |
00:53:28.060
This water bottle just did it.
link |
00:53:30.040
But that does not mean that this water bottle has AGI, because AGI means it should
link |
00:53:36.020
also be able to, like, have written my book and done this interview.
link |
00:53:40.300
And I don't think it's just communication problems.
link |
00:53:42.100
I don't think it can do it.
link |
00:53:47.020
So Buddhists say when they watch the water and that there is some beauty, that there's
link |
00:53:51.780
some depth and beauty in nature that they can communicate with.
link |
00:53:55.380
Communication is also very important because I mean, look, part of my job is being a teacher
link |
00:54:01.180
and I know some very intelligent professors even who just have a really hard time communicating.
link |
00:54:09.940
They come up with all these brilliant ideas, but to communicate with somebody else, you
link |
00:54:14.620
have to also be able to simulate their own mind.
link |
00:54:17.140
Yes.
link |
00:54:18.140
And build a good enough model of their mind that you can say things
link |
00:54:22.020
that they will understand.
link |
00:54:24.500
And that's quite difficult.
link |
00:54:26.700
And that's why today it's so frustrating if you have a computer that makes some cancer
link |
00:54:31.620
diagnosis and you ask it, well, why are you saying I should have a surgery?
link |
00:54:36.260
And if all it can reply is, I was trained on five terabytes of data and this is my diagnosis,
link |
00:54:43.620
boop, boop, beep, beep, it doesn't really instill a lot of confidence, right?
link |
00:54:49.220
So I think we have a lot of work to do on communication there.
link |
00:54:54.420
So what kind of, I think you're doing a little bit of work in explainable AI.
link |
00:54:59.380
What do you think are the most promising avenues?
link |
00:55:01.340
Is it mostly about sort of the Alexa problem of natural language processing of being able
link |
00:55:07.100
to actually use human interpretable methods of communication?
link |
00:55:13.220
So being able to talk to a system and it talking back to you, or is there some more fundamental
link |
00:55:17.500
problems to be solved?
link |
00:55:18.500
I think it's all of the above.
link |
00:55:21.180
The natural language processing is obviously important, but there are also more nerdy fundamental
link |
00:55:27.180
problems.
link |
00:55:28.180
Like if you take, you play chess? Russian, I have to ask, when did you learn Russian?
link |
00:55:39.180
I speak Russian very poorly, but I bought a book, Teach Yourself Russian, I read a lot,
link |
00:55:45.700
but it was very difficult.
link |
00:55:47.700
Wow.
link |
00:55:48.700
That's why I speak so poorly.
link |
00:55:49.700
How many languages do you know?
link |
00:55:51.700
Wow.
link |
00:55:52.700
That's really impressive.
link |
00:55:53.700
I don't know.
link |
00:55:54.700
My wife has done some calculations, but my point was, if you play chess, have you looked
link |
00:55:58.740
at the AlphaZero games?
link |
00:56:00.260
Yeah.
link |
00:56:01.260
Oh, the actual games now.
link |
00:56:02.260
Check it out.
link |
00:56:03.260
Some of them are just mind blowing, really beautiful.
link |
00:56:09.900
If you ask, how did it do that?
link |
00:56:12.460
You got that.
link |
00:56:14.500
Talk to Demis Hassabis and others from DeepMind, and all they'll ultimately be able to give you
link |
00:56:20.540
is big tables of numbers, matrices that define the neural network, and you can stare at these
link |
00:56:26.940
tables of numbers till your face turns blue, and you're not going to understand much about
link |
00:56:32.980
why it made that move.
link |
00:56:35.860
Even if you have natural language processing that can tell you in human language,
link |
00:56:40.540
oh, five, seven, point two, eight, it's still not going to really help.
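As a small illustration of the point, the complete "explanation" a trained network can offer today is its learned parameters. The tiny network below is random rather than trained, and purely illustrative, but the shape of the answer is the same: matrices of numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # first-layer parameters
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)   # second-layer parameters

def evaluate(x):
    h = np.maximum(0.0, W1 @ x + b1)       # hidden layer with a simple nonlinearity
    return float((W2 @ h + b2)[0])         # the network's "evaluation" of the input

x = rng.normal(size=4)                      # some input (a board position, say)
print(evaluate(x))                          # the number it outputs
print(W1)                                   # and this table of numbers is all the "why" there is
```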
link |
00:56:44.180
I think there's a whole spectrum of fun challenges there involved in taking computation that
link |
00:56:50.660
does intelligent things and transforming it into something equally good, equally intelligent,
link |
00:56:59.940
but that's more understandable.
link |
00:57:02.060
I think that's really valuable because I think as we put machines in charge of ever more
link |
00:57:08.180
infrastructure in our world, the power grid, the trading on the stock market, weapon systems,
link |
00:57:13.540
and so on, it's absolutely crucial that we can trust that these AIs will do all we want, and
link |
00:57:19.620
trust really comes from understanding in a very fundamental way.
link |
00:57:25.860
That's why I'm working on this, because I think if we're going to have some
link |
00:57:29.940
hope of ensuring that machines have adopted our goals and that they're going to retain
link |
00:57:34.700
them, that kind of trust, I think, needs to be based on things you can actually understand,
link |
00:57:41.260
preferably even prove theorems on. Even with a self driving car, right?
link |
00:57:47.140
If someone just tells you it's been trained on tons of data and never crashed, it's less
link |
00:57:51.020
reassuring than if someone actually has a proof.
link |
00:57:54.460
Maybe it's a computer verified proof, but still it says that under no circumstances
link |
00:57:58.820
is this car just going to swerve into oncoming traffic.
link |
00:58:02.420
And that kind of information helps build trust and helps build the alignment of goals, at
link |
00:58:09.460
least awareness that your goals, your values are aligned.
link |
00:58:12.300
And I think even in the very short term, if you look at how today, this absolutely pathetic
link |
00:58:17.620
state of cybersecurity that we have, where, what is it, 3 billion Yahoo accounts were hacked
link |
00:58:25.980
and almost every American's credit card and so on, you know, why is this happening?
link |
00:58:34.300
It's ultimately happening because we have software that nobody fully understood how
link |
00:58:39.940
it worked.
link |
00:58:41.460
That's why the bugs hadn't been found, right?
link |
00:58:45.100
And I think AI can be used very effectively for offense for hacking, but it can also be
link |
00:58:50.340
used for defense, hopefully, automating verifiability and creating systems that are built in different
link |
00:59:00.580
ways so you can actually prove things about them.
link |
00:59:03.140
And it's important.
link |
00:59:05.460
So speaking of software that nobody understands how it works, of course, a bunch of people
link |
00:59:09.740
ask about your paper, about your thoughts on why does deep and cheap learning work so well?
link |
00:59:14.820
That's the paper, but what are your thoughts on deep learning, these kind of simplified
link |
00:59:19.280
models of our own brains that have been able to do some successful perception work, pattern
link |
00:59:26.620
recognition work, and now with AlphaZero and so on, do some clever things?
link |
00:59:30.940
What are your thoughts about the promise limitations of this piece?
link |
00:59:35.740
Great.
link |
00:59:37.140
I think there are a number of very important insights, very important lessons we can always
link |
00:59:44.300
draw from these kind of successes.
link |
00:59:47.340
One of them is when you look at the human brain, you see it's very complicated, 10 to
link |
00:59:50.460
the power of 11 neurons, and there are all these different kinds of neurons, and yada yada, and there's
link |
00:59:54.140
been this long debate about whether the fact that we have dozens of different kinds is
link |
00:59:57.980
actually necessary for intelligence.
link |
01:00:01.580
We can now, I think, quite convincingly answer that question of no, it's enough to have just
link |
01:00:06.500
one kind.
link |
01:00:07.500
If you look under the hood of AlphaZero, there's only one kind of neuron, and it's ridiculously
link |
01:00:11.780
simple, a simple mathematical thing.
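For concreteness, here is what one such unit looks like; the specific nonlinearity is an illustrative choice (a ReLU), not a claim about AlphaZero's exact recipe.

```python
import numpy as np

def neuron(x, w, b):
    # One artificial neuron: a weighted sum of inputs followed by a simple nonlinearity.
    return max(0.0, float(np.dot(w, x) + b))

x = np.array([0.2, -1.0, 0.5])    # inputs from the previous layer
w = np.array([1.5, 0.3, -2.0])    # learned weights
b = 0.1
print(neuron(x, w, b))            # one number, passed on to the next layer
```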
link |
01:00:15.060
So it's just like in physics, if you have a gas with waves in it, it's not the detailed
link |
01:00:21.380
nature of the molecules that matter.
link |
01:00:24.380
It's the collective behavior, somehow.
link |
01:00:27.060
Similarly, it's this higher level structure of the network that matters, not that you
link |
01:00:33.060
have 20 kinds of neurons.
link |
01:00:34.060
I think our brain is such a complicated mess because it wasn't evolved just to be intelligent,
link |
01:00:41.740
it was evolved to also be self assembling, and self repairing, and evolutionarily attainable.
link |
01:00:51.740
And patches and so on.
link |
01:00:53.660
So my hunch is that we're going to understand how to build AGI before
link |
01:00:58.700
we fully understand how our brains work.
link |
01:01:01.060
Just like we understood how to build flying machines long before we were able to build
link |
01:01:06.260
a mechanical bird.
link |
01:01:07.260
Yeah, that's right.
link |
01:01:08.260
You've given the example of mechanical birds and airplanes, and airplanes do a pretty good
link |
01:01:15.300
job of flying without really mimicking bird flight.
link |
01:01:18.620
And even now, 100 years later, did you see the TED talk with this German group of
link |
01:01:23.180
mechanical birds?
link |
01:01:24.180
I did not.
link |
01:01:25.180
I've heard you mention it.
link |
01:01:26.180
Check it out.
link |
01:01:27.180
It's amazing.
link |
01:01:28.180
But even after that, we still don't fly in mechanical birds because it turned out the
link |
01:01:30.180
way we came up with is simpler, and it's better for our purposes, and I think it might be the
link |
01:01:34.580
same there.
link |
01:01:35.580
So that's one lesson.
link |
01:01:38.140
Another lesson is what our paper was about.
link |
01:01:42.020
Well, first, as a physicist, I thought it was fascinating how there's a very close mathematical
link |
01:01:47.420
relationship, actually, between our artificial neural networks.
link |
01:01:50.900
And a lot of things that we've studied in physics that go by nerdy names like the renormalization
link |
01:01:56.580
group equation and Hamiltonians and yada, yada, yada.
link |
01:02:01.100
And when you look a little more closely at this, at first I was like, well,
link |
01:02:11.380
there's something crazy here that doesn't make sense because we know that if you even
link |
01:02:18.700
want to build a super simple neural network to tell apart cat pictures and dog pictures,
link |
01:02:23.380
right, that you can do that very, very well now.
link |
01:02:27.260
But if you think about it a little bit, you convince yourself it must be impossible because
link |
01:02:31.540
if I have one megapixel, even if each pixel is just black or white, there's two to the
link |
01:02:36.420
power of one million possible images, which is way more than there are atoms in our universe.
link |
01:02:40.900
And then for each one of those, I have to assign a number, which is the probability
link |
01:02:47.220
that it's a dog.
link |
01:02:49.100
So an arbitrary function of images is a list of more numbers than there are atoms in our
link |
01:02:55.900
universe.
link |
01:02:56.900
So clearly, I can't store that under the hood of my GPU or my computer, yet somehow it
link |
01:03:02.220
works.
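A quick back-of-envelope check of this counting argument (taking the usual rough estimate of about 10^80 atoms in the observable universe):

```python
import math

pixels = 10**6                 # one megapixel, each pixel black or white
num_images = 2**pixels         # number of distinct possible images
atoms = 10**80                 # rough estimate, observable universe

print(math.floor(math.log10(num_images)) + 1)   # 301030 digits in the image count
print(num_images > atoms)                        # True, by an absurd margin
# So a lookup table assigning a "dog probability" to every image cannot be
# stored; a network that works has to exploit structure in the images it sees.
```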
link |
01:03:03.220
So what does that mean?
link |
01:03:04.220
Well, it means that out of all of the problems that you could try to solve with a neural network,
link |
01:03:12.940
almost all of them are impossible to solve with a reasonably sized one.
link |
01:03:17.940
But then what we showed in our paper was that the fraction, the kind of problems,
link |
01:03:24.820
the fraction of all the problems that you could possibly pose that we actually
link |
01:03:29.740
care about given the laws of physics is also an infinitesimally tiny little part.
link |
01:03:34.980
And amazingly, they're basically the same part.
link |
01:03:37.180
Yeah.
link |
01:03:38.180
It's almost like our world was created for, I mean, they kind of come together.
link |
01:03:41.180
Yeah.
link |
01:03:42.180
But you could say maybe the world was created
link |
01:03:44.940
for us, but I have a more modest interpretation, which is that instead evolution endowed us
link |
01:03:50.300
with neural networks precisely for that reason, because this particular architecture, as
link |
01:03:54.700
opposed to the one in your laptop, is very, very well adapted to solving the kind of problems
link |
01:04:02.380
that nature kept presenting our ancestors with, right?
link |
01:04:05.540
So it makes sense. Why do we have a brain in the first place?
link |
01:04:09.380
It's to be able to make predictions about the future and so on.
link |
01:04:12.940
So if we had a sucky system which could never solve it, it wouldn't have evolved at all.
link |
01:04:17.580
So this is, I think, a very beautiful fact.
link |
01:04:23.420
Yeah.
link |
01:04:24.420
And you also realize that there's been earlier
link |
01:04:28.780
work on why deeper networks are good, but we were able to show an additional cool
link |
01:04:34.140
fact there, which is that even for incredibly simple problems, like suppose I give you a
link |
01:04:40.260
thousand numbers and ask you to multiply them together and you can write a few lines of
link |
01:04:45.020
code, boom, done, trivial.
link |
01:04:46.820
If you just try to do that with a neural network that has only one single hidden layer in it,
link |
01:04:52.580
you can do it, but you're going to need two to the power of a thousand neurons to multiply
link |
01:04:59.940
a thousand numbers, which is, again, more neurons than there are atoms in our universe.
link |
01:05:03.260
So that's fascinating.
link |
01:05:05.740
But if you allow yourself to make it a deep network with many layers, you only
link |
01:05:11.580
need four thousand neurons, it's perfectly feasible.
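A toy illustration of the depth argument as quoted here: multiplying a thousand numbers pairwise in a tree only takes about log2(1000), roughly 10, layers, while the one-hidden-layer construction blows up exponentially. This is just a sketch of the tree structure, not the paper's actual neuron-level construction.

```python
from math import log2

def multiply_deep(xs):
    """Multiply a list of numbers layer by layer, combining pairs at each step."""
    layer, depth = list(xs), 0
    while len(layer) > 1:
        if len(layer) % 2:                                   # odd count: pad with a 1
            layer.append(1.0)
        layer = [layer[i] * layer[i + 1] for i in range(0, len(layer), 2)]
        depth += 1
    return layer[0], depth

product, depth = multiply_deep([1.001] * 1000)
print(round(product, 3), depth, round(log2(1000), 1))   # ~2.717, 10 layers, log2 ~ 10.0

# The neuron counts quoted in the conversation:
print(2**1000 > 10**80)    # a single hidden layer: more neurons than atoms in the universe
print(4 * 1000)            # a deep network: about four thousand neurons
```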
link |
01:05:15.340
So that's really interesting.
link |
01:05:17.500
Yeah.
link |
01:05:18.500
Yeah.
link |
01:05:19.500
So architecture type, I mean, you mentioned Schrodinger's equation and what are your thoughts
link |
01:05:24.460
about quantum computing and the role of this kind of computational unit in creating an
link |
01:05:32.860
intelligent system?
link |
01:05:34.900
In some Hollywood movies, which I will not mention by name because I don't want to spoil them.
link |
01:05:41.100
The way they get AGI is building a quantum computer because the word quantum sounds
link |
01:05:46.820
cool and so on.
link |
01:05:47.820
That's right.
link |
01:05:48.820
But first of all, I think we don't need quantum computers to build AGI.
link |
01:05:54.940
I suspect your brain is not a quantum computer in any profound sense.
link |
01:06:01.740
You even wrote a paper about that.
link |
01:06:03.460
Many years ago, I calculated the so-called decoherence time, that is, how long it takes until
link |
01:06:09.060
the quantum computerness of what your neurons are doing gets erased by just random noise
link |
01:06:16.900
from the environment and it's about 10 to the minus 21 seconds.
link |
01:06:21.420
So as cool as it would be to have a quantum computer in my head, I don't think that's happening.
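For scale, here is the decoherence figure quoted above set against the roughly millisecond timescale on which neurons fire (a standard order-of-magnitude assumption, not a measurement from this conversation):

```python
decoherence_s = 1e-21        # the figure quoted above
neuron_firing_s = 1e-3       # rough order of magnitude for an action potential

print(neuron_firing_s / decoherence_s)   # ~1e18: any quantum coherence is gone
                                          # long before a neuron can make use of it
```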
link |
01:06:27.420
On the other hand, there are very cool things you could do with quantum computers or I think
link |
01:06:35.820
we'll be able to do soon when we get bigger ones that might actually help machine learning
link |
01:06:40.780
do even better than the brain.
link |
01:06:43.180
So for example, and this is just a moonshot, but hey, learning is very much the same thing
link |
01:06:58.620
as search.
link |
01:07:00.860
If you're trying to train a neural network to really learn to do something really
link |
01:07:05.460
well, you have some loss function, you have a bunch of knobs you can turn represented
link |
01:07:10.820
by a bunch of numbers and you're trying to tweak them so that it becomes as good as possible
link |
01:07:14.420
at this thing.
link |
01:07:15.420
So if you think of a landscape with some valley, where each dimension of the landscape corresponds
link |
01:07:22.580
to some number you can change, you're trying to find the minimum.
link |
01:07:25.780
And it's well known that if you have a very high dimensional landscape, complicated things,
link |
01:07:29.980
it's super hard to find the minimum.
link |
01:07:34.140
Quantum mechanics is amazingly good at this.
link |
01:07:37.500
If I want to know what's the lowest energy state this water can possibly have, it's incredibly
link |
01:07:42.980
hard to compute, but nature will happily figure this out for you if you just cool it down,
link |
01:07:47.860
make it very, very cold.
link |
01:07:50.860
If you put a ball somewhere, it'll roll down to its minimum and this happens metaphorically
link |
01:07:55.260
in the energy landscape too.
link |
01:07:57.620
And quantum mechanics even uses some clever tricks which today's machine learning systems
link |
01:08:02.940
don't.
link |
01:08:03.940
If you're trying to find the minimum and you get stuck in the little local minimum here
link |
01:08:07.940
in quantum mechanics, you can actually tunnel through the barrier and get unstuck again.
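A toy picture of the problem being described: plain gradient descent started in the shallow valley of a double-well loss stays stuck there, while a more global search (crude random restarts here, a classical stand-in and not actual quantum tunneling) finds the deeper valley. Purely illustrative.

```python
def loss(x):
    # Two valleys: a shallow one near x = 1 and a deeper one near x = -1.5.
    return (x - 1) ** 2 * (x + 1.5) ** 2 + 0.5 * x

def grad(x, eps=1e-5):
    return (loss(x + eps) - loss(x - eps)) / (2 * eps)

def descend(x, steps=2000, lr=0.01):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

stuck = descend(1.0)                                   # settles in the shallow valley
best = min((descend(x0) for x0 in (-2.0, -1.0, 0.0, 1.0, 2.0)), key=loss)

print(round(stuck, 2), round(loss(stuck), 2))          # about 0.96, loss about 0.49
print(round(best, 2), round(loss(best), 2))            # about -1.54, the deeper minimum
```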
link |
01:08:14.180
And that's really interesting.
link |
01:08:15.420
Yeah.
link |
01:08:16.420
So maybe for example, we'll one day use quantum computers that help train neural networks
link |
01:08:22.940
better.
link |
01:08:23.940
That's really interesting.
link |
01:08:24.940
Okay.
link |
01:08:25.940
So as a component of kind of the learning process, for example, let me ask sort of wrapping
link |
01:08:32.020
up here a little bit.
link |
01:08:34.060
Let me return to the questions of our human nature and love, as I mentioned.
link |
01:08:40.540
So, you mentioned sort of a helper robot, but you could also think of personal
link |
01:08:48.020
robots.
link |
01:08:49.020
Do you think the way we human beings fall in love and get connected to each other is
link |
01:08:55.300
possible to achieve in an AI system, a human level AI intelligence system?
link |
01:09:00.420
Do you think we would ever see that kind of connection or, you know, in all this discussion
link |
01:09:06.100
about solving complex goals, as this kind of human social connection, do you think that's
link |
01:09:11.460
one of the goals, among the peaks and valleys that, with the rising sea level, we'd
link |
01:09:16.460
be able to achieve?
link |
01:09:17.460
Or do you think that's something that's ultimately, or at least in the short term, relative to
link |
01:09:22.180
the other goals is not achievable?
link |
01:09:23.620
I think it's all possible.
link |
01:09:25.220
And I mean, there's a very wide range of guesses, as you know, among AI researchers about
link |
01:09:31.780
when we're going to get AGI.
link |
01:09:35.300
Some people, you know, like our friend Rodney Brooks said, it's going to be hundreds of
link |
01:09:39.620
years at least.
link |
01:09:41.140
And then there are many others that think it's going to happen relatively much sooner.
link |
01:09:44.780
In recent polls, maybe half or so of AI researchers think we're going to get AGI within decades.
link |
01:09:52.140
So if that happens, of course, then I think these things are all possible.
link |
01:09:56.260
But in terms of whether it will happen, I think we shouldn't spend so much time asking,
link |
01:10:01.860
what do we think will happen in the future?
link |
01:10:04.260
As if we are just some sort of pathetic, passive bystanders, you know, waiting for the future
link |
01:10:08.980
to happen to us, hey, we're the ones creating this future, right?
link |
01:10:12.740
So we should be proactive about it and ask ourselves what sort of future we would like
link |
01:10:18.340
to have happen.
link |
01:10:19.340
That's right.
link |
01:10:20.340
Trying to make it like that.
link |
01:10:21.340
Well, would I prefer some sort of incredibly boring zombie-like future where there's all
link |
01:10:25.660
these mechanical things happening and there's no passion, no emotion, no experience, maybe
link |
01:10:30.220
even?
link |
01:10:31.220
No, I would, of course, much rather prefer it if all the things that we find that we
link |
01:10:35.740
value the most about humanity, our subjective experience, passion, inspiration, love, you
link |
01:10:44.180
know, if we can create a future where those things do exist.
link |
01:10:50.780
You know, I think ultimately it's not our universe giving meaning to us, it's us giving
link |
01:10:56.500
meaning to our universe.
link |
01:10:58.500
And if we build more advanced intelligence, let's make sure we build it in such a way
link |
01:11:03.620
that meaning is part of it.
link |
01:11:09.100
A lot of people that seriously study this problem and think of it from different angles have
link |
01:11:13.900
trouble in that, in the majority of cases, when they think through what might happen, the outcomes are the ones that
link |
01:11:20.140
are not beneficial to humanity.
link |
01:11:22.620
And so, yeah, so what are your thoughts?
link |
01:11:27.260
What should people do? You know, I really don't want people to be terrified. What's the way
link |
01:11:33.820
for people to think about it, in a way that we can solve it and we can make it
link |
01:11:38.660
better.
link |
01:11:39.660
Yeah.
link |
01:11:40.660
No, I don't think panicking is going to help in any way, it's not going to increase chances
link |
01:11:44.780
of things going well either.
link |
01:11:46.060
Even if you are in a situation where there is a real threat, does it help if everybody
link |
01:11:49.340
just freaks out?
link |
01:11:50.620
Right.
link |
01:11:51.620
No, of course not.
link |
01:11:53.620
I think, yeah, there are, of course, ways in which things can go horribly wrong.
link |
01:11:59.740
First of all, it's important, when we think about this thing, about the problems
link |
01:12:04.460
and risks, to also remember how huge the upsides can be if we get it right.
link |
01:12:08.780
Everything we love about society and civilization is a product of intelligence.
link |
01:12:13.420
So if we can amplify our intelligence with machine intelligence and no longer lose
link |
01:12:17.980
our loved ones to what we're told is an incurable disease, and things like this, of course, we
link |
01:12:23.380
should aspire to that.
link |
01:12:24.940
So that can be a motivator, I think, reminding ourselves that the reason we try to solve
link |
01:12:28.700
problems is not just because we're trying to avoid gloom, but because we're trying to
link |
01:12:34.140
do something great.
link |
01:12:35.900
But then in terms of the risks, I think the really important question is to ask, what
link |
01:12:43.340
can we do today that will actually help make the outcome good, right?
link |
01:12:47.740
And dismissing the risk is not one of them, you know, I find it quite funny often when
link |
01:12:52.700
I'm in discussion panels about these things, how the people who work for companies will
link |
01:13:01.540
always be like, oh, nothing to worry about, nothing to worry about, nothing to worry about.
link |
01:13:05.100
And it's always only the academics who sometimes express concerns.
link |
01:13:09.980
That's not surprising at all.
link |
01:13:10.980
If you think about it, Upton Sinclair quipped, right, that it's hard to make a man believe
link |
01:13:17.500
in something when his income depends on not believing in it.
link |
01:13:20.620
And frankly, we know that a lot of these people in companies are just as concerned
link |
01:13:25.580
as anyone else.
link |
01:13:26.580
But if you're the CEO of a company, that's not something you want to go on record saying
link |
01:13:30.300
when you have silly journalists who are going to put a picture of a Terminator robot when
link |
01:13:34.980
they quote you.
link |
01:13:35.980
So, so the issues are real.
link |
01:13:39.380
And the way I think about what the issue is, is basically, you know, the real choice we
link |
01:13:45.660
have is, first of all, are we going to dismiss the risks and say, well, you know, let's
link |
01:13:51.980
just go ahead and build machines that can do everything we can do better and cheaper,
link |
01:13:57.140
you know, let's just make ourselves obsolete as fast as possible, what could possibly
link |
01:14:00.940
go wrong?
link |
01:14:01.940
Right.
link |
01:14:02.940
That's one attitude.
link |
01:14:03.940
The opposite attitude, I think, is to say there's incredible potential, you know, let's
link |
01:14:09.380
think about what kind of future we're really, really excited about.
link |
01:14:14.900
What are the shared goals that we can really aspire towards?
link |
01:14:18.700
And then let's think really hard about how we can actually get there.
link |
01:14:22.100
So start with it.
link |
01:14:23.100
Don't start thinking about the risks.
link |
01:14:24.460
Start thinking about the goals.
link |
01:14:26.940
And then when you do that, then you can think about the obstacles you want to avoid, right?
link |
01:14:30.540
I often get students coming in right here into my office for career advice.
link |
01:14:34.420
I always ask them this very question: where do you want to be in the future?
link |
01:14:38.060
If all she can say is, oh, maybe I'll have cancer, maybe I'll get run over by a truck,
link |
01:14:42.580
focusing on the obstacles instead of the goals,
link |
01:14:44.420
she's just going to end up a paranoid hypochondriac, whereas if she comes in with fire
link |
01:14:49.340
in her eyes and is like, I want to be there, and then we can talk about the obstacles and
link |
01:14:54.060
see how we can circumvent them.
link |
01:14:56.100
That's I think a much, much healthier attitude.
link |
01:14:59.100
And that's really what we're in.
link |
01:15:01.540
And I feel it's very challenging to come up with a vision for the future, which we're
link |
01:15:09.420
unequivocally excited about.
link |
01:15:10.660
I'm not just talking now in the vague terms like, yeah, let's cure cancer.
link |
01:15:14.300
I'm talking about what kind of society do we want to create?
link |
01:15:18.500
What do we want it to mean to be human in the age of AI, in the age of AGI?
link |
01:15:25.380
So if we can have this conversation, broad, inclusive conversation, and gradually start
link |
01:15:31.460
converging towards some future with some direction at least that we want to steer towards, right?
link |
01:15:38.100
Then we'll be much more motivated to constructively take on the obstacles.
link |
01:15:42.340
And I think if I wrap this up in a more succinct way, I think we can all agree already now that
link |
01:15:54.260
we should aspire to build AGI that doesn't overpower us, but that empowers us.
link |
01:16:05.540
And think of the many various ways it can do that, whether that's, from my side of the
link |
01:16:10.820
world, autonomous vehicles.
link |
01:16:12.860
I'm personally actually from the camp that believes human level intelligence is
link |
01:16:17.020
required to achieve something like vehicles that would actually be something we would
link |
01:16:22.780
enjoy using and being part of.
link |
01:16:25.380
So that's the one example.
link |
01:16:26.380
And certainly there's a lot of other types of robots and medicine and so on.
link |
01:16:31.140
So focusing on those and then coming up with the obstacles, coming up with the ways that
link |
01:16:35.300
that can go wrong and solving those one at a time.
link |
01:16:38.420
And just because you can build an autonomous vehicle, even if you could build one that
link |
01:16:42.980
would drive just fine with AGI, maybe there are some things in life that we would actually
link |
01:16:47.500
want to do ourselves.
link |
01:16:48.500
That's right.
link |
01:16:49.500
Right?
link |
01:16:50.500
Like, for example, if you think of our society as a whole, there are some things that we
link |
01:16:54.660
find very meaningful to do.
link |
01:16:57.540
And that doesn't mean we have to stop doing them just because machines can do them better.
link |
01:17:02.100
I'm not going to stop playing tennis just the day someone builds a tennis robot that can
link |
01:17:06.660
beat me.
link |
01:17:07.660
People are still playing chess and even go.
link |
01:17:09.900
Yeah.
link |
01:17:10.900
And in the very near term even, some people are advocating basic income to replace jobs.
link |
01:17:19.100
But if the government is going to be willing to just hand out cash to people for doing
link |
01:17:22.780
nothing, then one should also seriously consider whether the government should also just hire
link |
01:17:27.660
a lot more teachers and nurses and the kind of jobs which people often find great fulfillment
link |
01:17:33.380
in doing, right?
link |
01:17:34.380
We get very tired of hearing politicians saying, oh, we can't afford hiring more teachers,
link |
01:17:38.900
but we're going to maybe have basic income.
link |
01:17:41.700
We could use more serious research and thought on what gives meaning to our lives, because
link |
01:17:46.340
jobs give us so much more than income, right?
link |
01:17:50.700
And then think about, in the future, what are the roles in which we want to have people
link |
01:18:00.020
continually feeling empowered by machines?
link |
01:18:03.180
And I think, sort of, I come from Russia, from the Soviet Union, and I think for a lot
link |
01:18:08.900
of people in the 20th century, going to the moon, going to space was an inspiring thing.
link |
01:18:14.100
I feel like the universe of the mind, so AI, understanding, creating intelligence is that
link |
01:18:21.300
for the 21st century.
link |
01:18:23.380
So it's really surprising, and I've heard you mention this, it's really surprising to
link |
01:18:26.740
me both on the research funding side that it's not funded as greatly as it could be.
link |
01:18:31.940
But most importantly, on the politician side, that it's not part of the public discourse
link |
01:18:36.500
except in killer bots, terminator kind of view, that people are not yet, I think, perhaps
link |
01:18:44.300
excited by the possible positive future that we can build together.
link |
01:18:48.260
So we should be, because politicians usually just focus on the next election cycle, right?
link |
01:18:54.660
The single most important thing I feel we humans have learned in the entire history of science
link |
01:18:59.340
is that we have been the masters of underestimation; we underestimated the size of our cosmos, again
link |
01:19:07.460
and again, realizing that everything we thought existed was just a small part of something
link |
01:19:11.380
grander, right?
link |
01:19:12.380
Planet, solar system, the galaxy, clusters of galaxies, the universe.
link |
01:19:18.580
And we now know that the future has just so much more potential than our ancestors
link |
01:19:25.700
could ever have dreamt of.
link |
01:19:27.820
This cosmos, imagine if all of Earth was completely devoid of life except for Cambridge, Massachusetts.
link |
01:19:39.820
Wouldn't it be kind of lame if all we ever aspired to was to stay in Cambridge, Massachusetts
link |
01:19:44.220
forever and then go extinct in one week, even though Earth was going to continue on for
link |
01:19:49.660
longer?
link |
01:19:50.660
That's sort of the attitude I think we have now on the cosmic scale. Life can flourish
link |
01:19:57.300
on Earth, not for four years, but for billions of years.
link |
01:20:00.820
I can even tell you about how to move it out of harm's way when the sun gets too hot.
link |
01:20:06.340
And then we have so much more resources out here, which today, maybe there are a lot of
link |
01:20:11.900
other planets with bacteria or cow like life on them, but most of this, all this opportunity
link |
01:20:19.380
seems as far as we can tell to be largely dead, like the Sahara Desert, and yet we have the
link |
01:20:25.380
opportunity to help life flourish around this for billions of years.
link |
01:20:30.380
So like, let's quit squabbling about whether some little border should be drawn one mile
link |
01:20:37.420
to the left or right and look up into the skies and realize, hey, we can do such incredible
link |
01:20:43.380
things.
link |
01:20:44.380
Yeah.
link |
01:20:45.380
And that's I think why it's really exciting that you and others are connected with some
link |
01:20:49.980
of the work Elon Musk is doing because he's literally going out into that space, really
link |
01:20:54.740
exploring our universe.
link |
01:20:56.260
And it's wonderful.
link |
01:20:57.260
That is exactly why Elon Musk is so misunderstood, right?
link |
01:21:02.340
Misconstrued as some kind of pessimistic doomsayer.
link |
01:21:05.300
The reason he cares so much about AI safety is because he more than almost anyone else
link |
01:21:10.860
appreciates these amazing opportunities
link |
01:21:13.340
that we'll squander if we wipe ourselves out here on Earth.
link |
01:21:16.340
We're not just going to wipe out the next generation, but all generations and this incredible
link |
01:21:22.740
opportunity that's out there, and that would really be a waste.
link |
01:21:25.580
And for people who think that we would be better off without technology, let me
link |
01:21:32.740
just mention that if we don't improve our technology, the question isn't whether humanity
link |
01:21:37.740
is going to go extinct.
link |
01:21:38.740
The question is just whether we're going to get taken out by the next big asteroid or
link |
01:21:43.620
the next super volcano or something else dumb that we could easily prevent with more tech,
link |
01:21:49.540
right?
link |
01:21:50.540
If we want life to flourish throughout the cosmos, AI is the key to it.
link |
01:21:56.220
As I mentioned in a lot of detail in my book, even many of the most inspired sci fi writers
link |
01:22:04.780
I feel have totally underestimated the opportunities for space travel, especially to other galaxies,
link |
01:22:11.580
because they weren't thinking about the possibility of AGI, which just makes it so much easier.
link |
01:22:17.100
Right.
link |
01:22:18.100
Yeah, so that goes to a view of AGI that enables our progress, that enables a better life.
link |
01:22:25.900
So that's a beautiful way to put it and something to strive for.
link |
01:22:30.060
So Max, thank you so much.
link |
01:22:31.580
Thank you for your time today.
link |
01:22:32.580
It's been awesome.
link |
01:22:33.580
Thank you so much.
link |
01:22:34.580
Thanks.
link |
01:22:35.580
Merci beaucoup.