Demis Hassabis: DeepMind - AI, Superintelligence & the Future of Humanity | Lex Fridman Podcast #299

The following is a conversation with Demis Hassabis, CEO and co-founder of DeepMind, a company that has published and built some of the most incredible artificial intelligence systems in the history of computing, including AlphaZero, which learned all by itself to play the game of Go better than any human in the world, and AlphaFold 2, which solved protein folding. Both tasks were considered nearly impossible for a very long time. Demis is widely considered to be one of the most brilliant and impactful humans in the history of artificial intelligence, and science and engineering in general. This was truly an honor and a pleasure for me to finally sit down with him for this conversation, and I'm sure we will talk many times again in the future. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Demis Hassabis.

Let's start with a bit of a personal question. Am I an AI program you wrote to interview people until I get good enough to interview you?

Well, I'd be impressed if you were. I'd be impressed with myself if you were. I don't think we're quite up to that yet, but maybe you're from the future, Lex.
link |
If you did, would you tell me? Is that a good thing to tell a language model that's tasked with interviewing, that it is, in fact, AI? Maybe we're in a kind of meta Turing test.

Probably it would be a good idea not to tell you, so it doesn't change your behavior, right? This is a kind of Heisenberg uncertainty principle situation. If I told you, you'd behave differently. Maybe that's what's happening with us, of course.

This is a benchmark from the future, where they replay 2022 as a year before AIs were good enough yet, and now we want to see, is it going to pass?

Exactly.

If I was such a program, would you be able to tell, do you think?
link |
So, to the Turing test question: you've talked about the benchmark for solving intelligence. What would be the impressive thing? You talked about winning a Nobel Prize, an AI system winning a Nobel Prize. But I still return to the Turing test as a compelling test, the spirit of the Turing test as a compelling test.

Yeah, the Turing test, of course, has been unbelievably influential, and Turing is one of my all-time heroes. But I think if you look back at the 1950 paper and read the original, you'll see that I don't think he meant it to be a rigorous formal test. I think it was more like a thought experiment, almost a bit of philosophy he was writing, if you look at the style of the paper. And you can see he didn't specify it very rigorously. So, for example, he didn't specify the knowledge that the expert or judge would have, nor how much time they would have to investigate. These are important parameters if you were going to make it a true sort of formal test. And by some measures, people claim the Turing test was passed several decades ago. I remember someone claiming that with a kind of very bog-standard, normal logic model, because they pretended it was a kid, so the judges thought that the machine was a child. That would be very different from an expert AI person interrogating a machine and knowing how it was built and so on. So I think we should probably move away from that as a formal test, and move more towards a general test where we test the AI capabilities on a range of tasks and see if it reaches human-level or above performance on maybe thousands, perhaps even millions of tasks eventually, and cover the entire sort of cognitive space. So I think for its time it was an amazing thought experiment, and also, 1950s, obviously, it was barely the dawn of the computer age, so of course he only thought about text. And now we have a lot more different inputs.
link |
So, yeah, maybe the better thing to test is the generalizability, so across multiple tasks. But I think it's also possible, as systems like Gato will show, that eventually that might map right back to language. So you might be able to demonstrate your ability to generalize across tasks by then communicating your ability to generalize across tasks, which is kind of what we do through conversation anyway, when we jump around. Ultimately, what's in that conversation is not just you moving around knowledge. It's you moving around these entirely different modalities of understanding that ultimately map to your ability to operate successfully in all of these domains, which you can think of as tasks.

Yeah, I think certainly we as humans use language as our main generalization communication tool. So I think we end up thinking in language and expressing our solutions in language. So it's going to be a very powerful mode in which to explain the system, for the system to explain what it's doing. But I don't think it's the only modality that matters. So I think there's going to be a lot of, you know, there are a lot of different ways to express capabilities other than just language.

Yeah, visual, robotics, body language.

Yeah, actions, the interactive aspect of all that, that's all part of it.
link |
But what's interesting with Gato is that it's sort of pushing prediction to the maximum, in terms of mapping arbitrary sequences to other sequences and just predicting what's going to happen next. So prediction seems to be fundamental to intelligence. And what you're predicting doesn't so much matter.

Yeah, it seems like you can generalize that quite well. So obviously language models predict the next word. Gato predicts potentially any action or any token. And it's just the beginning, really. It's our most general agent, one could call it, so far. But, you know, that itself can be scaled up massively, more than we've done so far. And obviously we're in the middle of doing that.
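The "predict any token" framing described here can be illustrated with a deliberately minimal toy: a bigram counting model that doesn't care whether its tokens are words, game moves, or robot actions. This is nothing like Gato's actual transformer architecture, and the token names below are invented for illustration; the point is only that the same next-token interface covers very different modalities.

```python
from collections import Counter, defaultdict

class NextTokenModel:
    def __init__(self):
        self.counts = defaultdict(Counter)   # counts[prev][next] = frequency

    def train(self, sequences):
        # Count which token follows which, across all training sequences.
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):
                self.counts[prev][nxt] += 1

    def predict(self, prev):
        # Return the most frequent continuation seen after `prev`.
        return self.counts[prev].most_common(1)[0][0]

model = NextTokenModel()
model.train([
    ["the", "cat", "sat"],                  # language tokens
    ["OBS:ball_left", "ACT:move_left"],     # agent observation/action tokens
    ["OBS:ball_left", "ACT:move_left"],
])
print(model.predict("cat"))             # -> sat
print(model.predict("OBS:ball_left"))   # -> ACT:move_left
```

The same `predict` call serves both "language modeling" and "acting", which is the sense in which what you're predicting doesn't so much matter.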
link |
But a big part of solving AGI is creating benchmarks that help us get closer and closer, sort of creating benchmarks that test the generalizability. And it's just still interesting that this fella, Alan Turing, was one of the first, and probably still one of the only people, that was trying, maybe philosophically, but was trying to formulate a benchmark that could be followed. It is. Even though it's fuzzy, it's still sufficiently rigorous that you can run the test. And I still think something like the Turing test will, at the end of the day, be the thing that truly impresses other humans, so that you can have a close friend who's an AI system. And for that friend to be a good friend, they're going to have to be able to play StarCraft, and they're going to have to do all of these tasks: get you a beer, so the robotics tasks, play games with you, use language, humor, all those kinds of things. But that ultimately can boil down to language. It feels like, not in terms of the AI community, but in terms of the actual impact of general intelligence on the world, it feels like language will be the place where it truly shines.

I think so, because it's such an important kind of input-output for us. I think you're right. I think the Turing test, the philosophy behind it, is the idea of, can a machine mimic the behaviors of a human? And I would say wider than just language and text, in terms of actions and everything else, creativity, all these things. If it can match or exceed human cognitive capabilities, then I think we have a true intelligence. So from that perspective, you're right. I think he did formulate the right kind of setup.

I think there'll be a kind of humor in the AI systems of the future, looking back at this conversation and thinking about the Turing test, and also thinking about, by that time, they would know which year they were finally able to cross the threshold of human-level intelligence, and think how funny it is that we humans were still confused about this whole problem that was solved long ago.
link |
Well, hello to the AI agents of the future. Anyway, going back to your journey: when did you first fall in love with programming?

Well, it was at a pretty young age, actually. I started off, actually, games were my first love, starting to play chess when I was around four years old. And then it was actually with winnings from a chess competition that I managed to buy my first computer, when I was about eight years old. It was a ZX Spectrum, which was hugely popular in the UK at the time. And it was an amazing machine, because I think it trained a whole generation of programmers in the UK, because it was so accessible. You literally switched it on and there was the BASIC prompt, and you could just get going. And my parents didn't really know anything about computers, but because it was my money from a chess competition, I could say I wanted to buy it. And then I just went to bookstores, got books on programming, and started typing in the programming code. And then, of course, once you start doing that, you start adjusting it, and then making your own games. And that's when I fell in love with computers and realized that they were a very magical device. I wouldn't have been able to explain this at the time, but I felt that they were almost a magical extension of your mind. I always had this feeling, and I've always loved this about computers, that you can set them off doing some task for you, you can go to sleep, come back the next day, and it's solved. That feels magical to me. All machines do that to some extent. They all enhance our natural capabilities. Obviously, cars allow us to move faster than we can run. But this was a machine to extend the mind. And then, of course, AI is the ultimate expression of what a machine may be able to do or learn. So, very naturally for me, that thought extended into AI quite quickly.

Do you remember the programming language that you first started with? Was it special to the machine?

No, it was just BASIC. I think it was just BASIC on the ZX Spectrum. I don't know what specific form it was. And then later on I got a Commodore Amiga, which was a fantastic machine.

Now you're just showing off.

So, yeah, well, lots of my friends had Atari STs, and I managed to get an Amiga. It was a bit more powerful, and that was incredible. And I used to do programming in assembler and also AMOS BASIC, this specific form of BASIC. It was incredible, actually. So that's where all my coding skills came from.
link |
And when did you fall in love with AI? When did you first start to gain an understanding that you can not just write programs that do some mathematical operations for you while you sleep, but something that's akin to bringing an entity to life, a thing that can figure out something more complicated than a simple mathematical operation?

Yeah. So, there were a few stages for me, all while I was very young. First of all, as I was trying to improve at playing chess, I was captaining various England junior chess teams, and at the time, when I was about maybe 10, 11 years old, I was going to become a professional chess player. That was my first thought.

So, that dream was there, to try to get to the highest levels of chess.

Yeah. So, when I was about 12 years old, I got to master standard, and I was the second-highest-rated player in the world to Judit Polgár, who obviously ended up being an amazing chess player and the women's world champion. And when I was trying to improve at chess, what you do is, obviously, first of all, you're trying to improve your own thinking processes. So that leads you to thinking about thinking: how is your brain coming up with these ideas? Why is it making mistakes? How can you improve that thought process? But the second thing is, this was like the early 80s, mid-80s, it was just the beginning of chess computers. If you remember, they were physical boards, like the one we have in front of us, and you pressed down the squares. And I think Kasparov had a branded version of it that I got. They're not as strong as they are today, but they were pretty strong, and you used to practice against them to try and improve your openings and other things. And so I remember, I think I probably got my first one when I was around 11 or 12, and I remember thinking, this is amazing, you know, how has someone programmed this chess board to play chess? And there was a very formative book I bought, called The Chess Computer Handbook by David Levy. It came out in 1984 or something, so I must have got it when I was about 11 or 12, and it explained fully how these chess programs were made. And I remember my first AI program being programming my Amiga. It wasn't powerful enough to play chess, I couldn't write a whole chess program, but I wrote a program for it to play Othello, or Reversi as it's sometimes called, I think, in the US. A slightly simpler game than chess, but I used all of the principles that chess programs had: alpha-beta search, all of that. And that was my first AI program. I remember that very well, I was around 12 years old. So that brought me into AI.
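The alpha-beta search that those early chess and Othello programs ran can be sketched in a few lines. For clarity, the "game" below is just a hand-built tree rather than a real board: inner lists are positions whose children are the legal moves, and numbers are leaf evaluations from the maximizing player's point of view.

```python
def alpha_beta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if not isinstance(node, list):        # leaf: return the static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:             # the opponent would never allow this
                break                     # line, so prune remaining siblings
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:             # symmetric cutoff for the minimizer
                break
        return value

# Maximizer to move at the root; the minimax value is 3, and the cutoffs
# let the search skip the 9 and the 7 leaves entirely.
print(alpha_beta([[3, 5], [2, 9], [0, 7]]))  # -> 3
```

Given a real move generator and a board evaluator instead of a toy tree, this same routine is essentially the core of a 1980s-style chess or Othello engine.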
link |
And then the second part was later on, around 16, 17, when I was writing games professionally, designing games, writing a game called Theme Park, which had AI as a core gameplay component, as part of the simulation. And it sold millions of copies around the world, and people loved the way that the AI, even though it was relatively simple by today's AI standards, was reacting to the way you, as the player, played it. So it was called a sandbox game. It was one of the first types of games like that, along with SimCity, and it meant that every game you played was unique.

Is there something you could say, just on a small tangent, about really impressive AI, from a game design, human enjoyment perspective, really impressive AI that you've seen in games? And maybe, what does it take to create an AI system like that? And how hard a problem is that? So, a million questions, just as a brief tangent.
link |
Well, look, I think games have been significant in my life for three reasons. So, first of all, I was playing them and training myself on games when I was a kid. Then I went through a phase of designing games and writing AI for games. So all the games I professionally wrote had AI as a core component. And that was mostly in the 90s. And the reason I was doing that in the games industry was that at the time, the games industry, I think, was the cutting edge of technology. So whether it was graphics, with people like John Carmack and Quake and those kinds of things, or AI, I think actually all the action was going on in games. And we're still reaping the benefits of that, even with things like GPUs, which I find ironic: they were obviously invented for graphics, computer graphics, but then turned out to be amazingly useful for AI. It just turns out everything's a matrix multiplication, it appears, in the whole world. So I think games at the time had the most cutting-edge AI, and a lot of those games I was involved in writing. So there was a game called Black & White, which was one game I was involved with in the early stages of, which I still think is the most impressive example of reinforcement learning in a computer game. In that game, you trained a little pet animal.

It's a brilliant game.

Yeah. And it sort of learned from how you were treating it. So if you treated it badly, then it became mean, and then it would be mean to your villagers and your population, the little tribe that you were running. But if you were kind to it, then it would be kind. And people were fascinated by how that worked, and so was I, to be honest, with the way it kind of developed.

Especially the mapping to good and evil. It made you realize, made me realize, that the choices you make can define where you end up. And that means all of us are capable of the good and the evil. It all matters in the different choices along the trajectory to those places that you make. It's fascinating. I mean, games can do that philosophically to you, and it's rare. It seems rare.
link |
Yeah. Well, games, I think, are a unique medium, because you as the player are not just passively consuming the entertainment, right? You're actually actively involved as an agent. So I think that's what makes them, in some ways, more visceral than other mediums, like films and books. So the second use was designing AI in games. And then the third use of AI, at DeepMind from the beginning, is using games as a testing ground for proving out AI algorithms and developing AI algorithms. And that was a core component of our vision at the start of DeepMind: that we would use games very heavily as our main testing ground, certainly to begin with, because it's super efficient to use games. And also, it's very easy to have metrics, to see how well your systems are improving, what direction your ideas are going in, and whether you're making incremental improvements.

And because those games are often rooted in something that humans did for a long time beforehand, there's already a strong set of rules. It's already a damn good benchmark.

Yes, it's really good for so many reasons: you've got clear measures of how good humans can be at these things, and in some cases, like Go, we've been playing it for thousands of years. And often they have scores, or at least win conditions, so it's very easy for reward-learning systems to get a reward. It's very easy to specify what that reward is. And also, at the end, it's easy to test externally how strong your system is by, of course, playing against the world's strongest players at those games. So it's good for so many reasons. And it's also very efficient to run potentially millions of simulations in parallel on the cloud. So I think there's a big reason why we were so successful, starting out back in 2010, how come we were able to progress so quickly: because we utilized games. And at the beginning of DeepMind, we also hired some amazing game engineers, who I knew from my previous lives in the games industry, and that helped to bootstrap us very quickly.
link |
And plus, it's somehow super compelling, almost at a philosophical level, of man versus machine over a chessboard or a Go board. And especially given that the entire history of AI is defined by people saying it's going to be impossible to make a machine that beats a human being at chess. And then once that happened, people were certain, when I was coming up in AI, that Go is not a game that can be solved, because of the combinatorial complexity. It's just too... you know, no matter how much Moore's law you have, compute is just never going to be able to crack the game of Go. And so there's something compelling about taking on the impossibility of that task, from the AI researcher's perspective, the engineer's perspective. And then, as a human being just observing this whole thing, your beliefs about what you thought was impossible being broken apart. It's humbling to realize we're not as smart as we thought. It's humbling to realize that the things we think are impossible now perhaps will be done in the future. There's something really powerful about a game, an AI system beating a human being in a game, that drives that message home for millions, billions of people, especially in the case of Go.

Sure. Well, look, it has been a fascinating journey, and especially as I can understand it from both sides, both as one of the creators of the AI, but also as a games player originally. So it was really interesting, a fantastic, but also somewhat bittersweet moment, the AlphaGo match for me, seeing that and being obviously heavily involved in that. But as you say, chess has been the... I mean, Kasparov, I think, rightly called it the Drosophila of intelligence, right? I love that phrase, and I think he's right, because chess has been hand in hand with AI from the beginning of the whole field. I think every AI practitioner, starting with Turing and Claude Shannon and all those sort of forefathers of the field, tried their hand at writing a chess program. I've got an original edition of Claude Shannon's first chess program, I think it was 1949, the original sort of paper. They all did that, and Turing famously wrote a chess program, but all the computers around then were obviously too slow to run it. So he had to be the computer, right? He literally, I think, spent two or three days running his own program by hand, with pencil and paper, playing a friend of his with his chess program. So of course Deep Blue was a huge moment, beating Kasparov. But actually, when that happened, I remember it very, very vividly, of course, because it was chess and computers and AI, all the things I loved, and I was at college at the time. But I remember coming away from that being more impressed by Kasparov's mind than I was by Deep Blue. Because here was Kasparov, with his human mind, and not only could he play chess more or less to the same level as this brute of a calculation machine, but of course Kasparov can do everything else humans can do: ride a bike, talk many languages, do politics, all the rest of the amazing things that Kasparov does. And all with the same brain. And yet Deep Blue, brilliant as it was at chess, had been hand-coded for chess, and had actually distilled the knowledge of chess grandmasters into its program. But it couldn't do anything else. It couldn't even play a strictly simpler game like tic-tac-toe. So something, to me, was missing from that system, something that we would regard as intelligence. And I think it was this idea of generality and also learning. So that's what we tried to do with AlphaGo.
link |
Yeah, with AlphaGo and AlphaZero, MuZero, and then Gato, and all the things that we'll get into some parts of, there's just a fascinating trajectory here. But let's just stick with chess briefly. On the human side of chess, you've proposed that, from a game design perspective, the thing that makes chess compelling as a game is that there's a creative tension between the bishop and the knight. Can you explain this? First off, it's really interesting to think about what makes a game compelling, what makes it stick across centuries.

Yeah, I was sort of thinking about this, and actually a lot of even amazing chess players don't think about it necessarily from a games designer's point of view. So it's with my game design hat on that I was thinking about this: why is chess so compelling? And I think a critical reason is that the dynamism of the different kinds of chess positions you can have, whether they're closed or open and other things, comes from the bishop and the knight. If you think about how different the capabilities of the bishop and knight are, in terms of the way they move, somehow chess has evolved to balance those two capabilities more or less equally. So they're both roughly worth three points each.

So you think that dynamic is always there, and then the rest of the rules are kind of trying to stabilize the game?

Well, maybe. I mean, it's sort of a chicken-and-egg situation; probably both came together. But the fact that it's got to this beautiful equilibrium, where the bishop and knight are so different in power but so equal in value, across the universe of all positions, somehow they've been balanced by humanity over hundreds of years, I think gives the game the creative tension that you can swap the bishop and knight, a bishop for a knight, and they're more or less worth the same, but now you aim for a different type of position. If you have the knight, you want a closed position. If you have the bishop, you want an open position. So I think that creates a lot of the creative tension in chess.
link |
So, some kind of controlled creative tension. From an AI perspective, do you think AI systems can eventually design games that are optimally compelling to humans?

Well, that's an interesting question. Sometimes I get asked about AI and creativity, and the way I answer that is relevant to that question, which is that there are different levels of creativity, one could say. So if we define creativity as coming up with something original that's useful for a purpose, then I think the lowest level of creativity is like an interpolation, an averaging of all the examples you see. Maybe a very basic AI system could have that: you show it millions of pictures of cats, and then you say, give me an average-looking cat, generate me an average-looking cat. I would call that interpolation. Then there's extrapolation, which something like AlphaGo showed. AlphaGo played millions of games of Go against itself, and then it came up with brilliant new ideas, like move 37 in game two, brilliant motifs and strategies in Go that no humans had ever thought of, even though we've played it for thousands of years, and professionally for hundreds of years. So that I call extrapolation. But then there's still a level above that, which you could call out-of-the-box thinking, or true innovation, which is: could you invent Go? Could you invent chess? Not just come up with a brilliant chess move or a brilliant Go move, but could you actually invent chess, or something as good as chess or Go? And I think one day AI could, but what's missing is, how would you even specify that task to a program right now? The way I would do it, if I was telling a human to do it, a human game designer, is I would say something like, for Go: come up with a game that only takes five minutes to learn, which Go does, because it's got simple rules, but many lifetimes to master, or is impossible to master in one lifetime, because it's so deep and so complex. And then it's aesthetically beautiful, and it can be completed in three or four hours of gameplay time, which is useful for us in a human day. So you might specify these sorts of high-level concepts, and then, with that and maybe a few other things, one could imagine that Go satisfies those constraints. But the problem is that we're not able to specify abstract notions like that, high-level abstract notions like that, yet, to our AI systems. And I think there's still something missing there, in terms of high-level concepts or abstractions that they truly understand and that are combinable and compositional. So for the moment, I think AI is capable of doing interpolation and extrapolation, but not true invention.
link |
So, coming up with rule sets and optimizing with complicated objectives around those rule sets, we can't currently do. But you could take a specific rule set and then run a kind of self-play experiment to see, just observe, how an AI system learns it from scratch, how long that journey of learning is. And maybe, if it satisfies some of those other things you mentioned, in terms of quickness to learn and so on, and you could see a long journey to master even for an AI system, then you could say that this is a promising game. But it would be nice to do it almost like AlphaCode for programming rules, so generating rules that automate even that part, the generation of the rules.

So I have thought about systems, actually, that I think would be amazing for a games designer: if you could have a system that takes your game, plays it tens of millions of times, maybe overnight, and then self-balances the rules better. So it tweaks the rules, and maybe the equations and the parameters, so that the game is more balanced: the units in the game, or some of the rules, could be tweaked. So it's a bit like giving it a base set and then allowing Monte Carlo tree search, or something like that, to explore it. And I think that would be a super powerful tool, actually, for balancing, auto-balancing, a game, which normally takes thousands of hours from hundreds of human games testers, to balance one game like StarCraft. Blizzard are amazing at balancing their games, but it takes them years and years and years. So one could imagine, at some point, when this stuff becomes efficient enough, you might be able to do that overnight.

Do you think a game that is optimally designed by an AI system would look very much like Planet Earth?

Maybe. It's the sort of game I would love to make, and I've tried, you know, in my games career, my games design career. My first big game was designing a theme park, an amusement park. Then with games like Republic, I tried to have games where we designed whole cities and allowed you to play in them. And of course, people like Will Wright have written games like SimEarth, trying to simulate the whole of Earth. Pretty tricky. But I think...

SimEarth, I haven't actually played that one. So what is it? Does it incorporate evolution?

Yeah, it has evolution, and it sort of treats it as an entire biosphere, but from quite a high level.

It'd be nice to be able to sort of zoom out and zoom in.

Exactly. Obviously it couldn't do that. I think he wrote that in the 90s, so it wasn't able to do that. But that would be, obviously, the ultimate sandbox game, of course.
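The overnight auto-balancing loop described a few turns back, simulate the game many times and nudge its parameters toward an even win rate, can be sketched in toy form. Everything here (the two units, the combat rules, the single tunable `damage_a` parameter) is invented for illustration; a real balancer would tune many parameters at once and use far stronger simulated players.

```python
import random

def simulate_battle(damage_a, damage_b, rng):
    # One duel between two units; unit A always strikes first.
    hp_a = hp_b = 100.0
    while True:
        hp_b -= damage_a * rng.uniform(0.5, 1.5)  # noisy damage roll
        if hp_b <= 0:
            return "a"
        hp_a -= damage_b * rng.uniform(0.5, 1.5)
        if hp_a <= 0:
            return "b"

def auto_balance(damage_a, damage_b=10.0, rounds=200, games=500, lr=0.5, seed=0):
    rng = random.Random(seed)
    for _ in range(rounds):
        wins_a = sum(simulate_battle(damage_a, damage_b, rng) == "a"
                     for _ in range(games))
        win_rate = wins_a / games
        if abs(win_rate - 0.5) < 0.02:            # balanced enough, stop early
            break
        damage_a -= lr * (win_rate - 0.5)         # nudge against the imbalance
    return damage_a

# Start badly unbalanced; the loop walks damage_a back toward parity
# (slightly under damage_b, since striking first is itself an advantage).
print(round(auto_balance(damage_a=20.0), 1))
```

Replacing the hand-written combat loop with self-play agents, and the single parameter with the game's full tuning surface, gives the overnight balancer described above; the feedback loop itself has the same shape.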
link |
On that topic, do you think we're living in a simulation?
link |
Yes. Well, so, okay, so...
link |
We're going to jump around from the absurdly philosophical to the technical.
link |
Sure, sure. Very, very happy to. So I think my answer to that question is a little bit complex,
link |
because there is simulation theory, which obviously Nick Bostrom, I think, famously first proposed.
link |
And I don't quite believe it in that sense. So in the sense that are we in some sort of computer
link |
game, or have our descendants somehow recreated Earth in the 21st century for some kind
link |
of experimental reason? But I do think that the best way to
link |
understand physics and the universe is from a computational perspective. So understanding it
link |
as an information universe and actually information being the most fundamental unit of reality,
link |
rather than matter or energy. So physicists would say, you know, matter or energy, you know,
link |
E equals MC squared, these are the things that are the fundamentals of the universe.
link |
I'd actually say information, which of course itself can specify energy or matter,
link |
right? Matter is actually just, you know, the way our bodies and the molecules
link |
in our body are arranged is information. So I think information may be the most fundamental way to
link |
describe the universe. And therefore, you could say we're in some sort of simulation because of that.
link |
But I don't really subscribe to the idea that, you know, there are sort
link |
of throwaway billions of simulations around. I think this simulation is actually very critical and possibly
link |
unique. This particular one. Yes. And you just mean treating the universe
link |
as a computer that's processing and modifying information is a good way to solve the problems
link |
of physics, of chemistry, of biology, and perhaps of humanity and so on. Yes. I think
link |
understanding physics in terms of information theory might be the best way to really understand
link |
what's going on here. From our understanding of a universal Turing machine, from our understanding
link |
of a computer, do you think there's something outside of the capabilities of a computer that
link |
is present in our universe? You have a disagreement with Roger Penrose about the nature of consciousness.
link |
He thinks that consciousness is more than just a computation. Do you think all of it,
link |
the whole shebang, can be a computation? Yeah, I've had many fascinating debates
link |
with Roger Penrose. And obviously, he's famous for this; I read The Emperor's New Mind and his other books,
link |
his classic books, and they were pretty influential in the 90s. And he believes that
link |
there's something more, something quantum that is needed to explain consciousness in the brain.
link |
I think about what we're doing actually at DeepMind and what my career has been,
link |
we're almost like Turing's champion. So we are pushing Turing machines or classical computation
link |
to the limits. What are the limits of what classical computing can do?
link |
And at the same time, I've also studied neuroscience, and that's why I did my PhD in it, to
link |
look at whether there is anything quantum in the brain from a neuroscience or biological
link |
perspective? And so far, I think most mainstream biologists and neuroscientists
link |
would say there's no evidence of any quantum systems or effects in the brain. As far as we
link |
can see, it can be mostly explained by classical theories. And then so there's sort of the search
link |
from the biology side. And then at the same time, there's the raising of the bar on
link |
what classical Turing machines can do, including our new AI systems. And as you alluded to earlier,
link |
I think AI, especially in the last decade plus, has been a continual story now of surprising
link |
events and surprising successes, knocking over one theory after another of what was
link |
thought to be impossible, from Go to protein folding and so on. And so I think I would
link |
be very hesitant to bet against how far the universal Turing machine and classical computation
link |
paradigm can go. And my bet would be that certainly all of what's going on in our brain
link |
can probably be mimicked or approximated on a classical machine, not requiring
link |
something metaphysical or quantum. And we'll get to that with some of the work with AlphaFold,
link |
which I think begins the journey of modeling this beautiful and complex world of biology.
link |
So you think all the magic of the human mind comes from this, just a few pounds of mush,
link |
a biological computational mush that's akin to some of the neural networks,
link |
not directly but in spirit, that DeepMind has been working with.
link |
Well, look, I think the biggest miracle of the universe is
link |
that it's just a few pounds of mush in our skulls, and yet
link |
our brains are also the most complex objects that we know of in the universe.
link |
So there's something profoundly beautiful and amazing about our brains. And I think it's
link |
an incredibly efficient machine. It's, you know, a phenomenon, basically.
link |
And I think that building AI, one of the reasons I want to build AI, and I've always wanted to, is
link |
I think by building an intelligent artifact like AI, and then comparing it to the human
link |
mind, that will help us unlock the uniqueness and the true secrets of the mind that we've always
link |
wondered about since the dawn of history, like consciousness, dreaming, creativity,
link |
emotions, what are all these things, right? We've wondered about them since the dawn of humanity.
link |
And I think one of the reasons, and you know, I love philosophy and philosophy of mind,
link |
that we've found it difficult is that there haven't been the tools for us, other than introspection
link |
from very clever philosophers in history, to really investigate this
link |
scientifically. But now, suddenly, we have a plethora of tools. Firstly, we have all the
link |
neuroscience tools, fMRI machines, single cell recording, all of this stuff. But we also have
link |
the ability, with computers and AI, to build intelligent systems. So I think that, you know, I think it
link |
is amazing what the human mind does. And I'm kind of in awe of it really. And I think it's amazing
link |
that with our human minds, we're able to build things like computers and actually even, you know,
link |
think and investigate about these questions. I think that's also a testament to the human mind.
link |
Yeah, the universe built the human mind that now is building computers that help us understand
link |
both the universe and our own human mind. That's right, that's exactly it. I mean, I think
link |
one could say maybe we're the mechanism by which the universe is going to try
link |
to understand itself. Yeah, it's beautiful. So let's go to the basic building blocks of
link |
biology that I think is another angle at which you can start to understand the human mind,
link |
the human body, which is quite fascinating, which is from the basic building blocks,
link |
start to simulate, start to model how from those building blocks, you can construct bigger and
link |
bigger, more complex systems, maybe one day, the entirety of human biology. So here's another
link |
problem that was thought to be impossible to solve, which is protein folding, and AlphaFold, or
link |
specifically AlphaFold 2, did just that. It solved protein folding. I think it's one of the biggest
link |
breakthroughs, certainly in the history of structural biology, but in general, in science,
link |
maybe from a high level, what is it and how does it work? And then we can ask some fascinating
link |
questions after. Sure. So maybe to explain it to people not familiar with protein folding:
link |
first of all, proteins are essential to all life.
link |
Every function in your body depends on proteins. Sometimes they're called the workhorses of biology.
link |
And if you look into them, and, you know, obviously, as part of AlphaFold, I've been
link |
researching proteins and structural biology for the last few years, proteins are amazing little
link |
bio-nanomachines. They're incredible if you actually watch little animations
link |
of how they work. And proteins are specified by their genetic sequence, called the
link |
amino acid sequence. So you can think of it as their genetic makeup. And then in the body,
link |
in nature, they fold up into a 3D structure. So you can think of it as a
link |
string of beads that folds up into a ball. Now the key thing is you want to know what that
link |
3D structure is, because the 3D structure of a protein is what helps to determine
link |
what it does, the function it performs in your body. And also, if you're interested in drugs or
link |
disease, you need to understand that 3D structure, because if you want to target something with a
link |
drug compound, to block something the protein is doing, you need to understand where it's
link |
going to bind on the surface of the protein. So obviously, in order to do that, you need to
link |
understand the 3D structure. So the structure is mapped to the function? The structure is mapped to
link |
the function. And the structure is obviously somehow specified by the amino acid sequence.
link |
And that's, in essence, the protein folding problem: just from the amino acid
link |
sequence, the one-dimensional string of letters, can you directly, computationally, predict
link |
the 3D structure? Right. And this has been a grand challenge in biology for over 50 years.
link |
So I think it was first articulated by Christian Anfinsen, a Nobel Prize winner in 1972,
link |
as part of his Nobel Prize winning lecture. And he just speculated that it should be possible
link |
to go from the amino acid sequence to the 3D structure. But he didn't say how. So,
link |
you know, it's been described to me as equivalent to Fermat's last theorem, but for biology.
link |
Right. You should, as somebody that very well might win the Nobel Prize in the future,
link |
but outside of that, you should do more of that kind of thing. In the margin,
link |
just put random things that will take like 200 years to solve.
link |
Set people off for 200 years.
link |
It should be possible.
link |
And just don't give any details.
link |
Exactly, exactly. It should be possible. I'll have to remember that for the future.
link |
So yeah. So with this one throwaway remark, just like Fermat, you know,
link |
he set off this whole 50-year field, really, of computational biology.
link |
And, you know, they got stuck. They hadn't really got very far with doing this.
link |
And until now, until AlphaFold came along, this was done experimentally, right,
link |
very painstakingly. You have to, like, crystallize the protein,
link |
which is really difficult. Some proteins can't be crystallized, like membrane proteins.
link |
And then you have to use very expensive electron microscopes or X-ray crystallography machines,
link |
really painstaking work to get the 3D structure and visualize the 3D structure.
link |
So the rule of thumb in experimental biology is that it takes one PhD student
link |
their entire PhD to do one protein. And with AlphaFold 2, we were able to predict the 3D
link |
structure in a matter of seconds. And so, you know, over Christmas, we did the whole human
link |
proteome, or every protein in the human body, all 20,000 proteins. So the human proteome is like the
link |
equivalent of the human genome, but in protein space. And that sort of revolutionizes really what
link |
a structural biologist can do. Because now they don't have to worry about these painstaking
link |
experiments, you know, whether they should put all of that effort in or not; they can almost just look
link |
up the structure of their proteins like a Google search. And so there's a data set on which it's
link |
trained that maps the amino acid sequence to structure. First of all, it's incredible that a protein,
link |
this little chemical computer, is able to do that computation itself in some kind of distributed way
link |
and do it very quickly. That's a weird thing. And they evolved that way. Because, you know, in the
link |
beginning, I mean, that's a great invention, just the protein itself. Yes. I mean, and then there's,
link |
I think, probably a history of like, they evolved to have many of these proteins. And those proteins
link |
figured out how to be computers themselves, in such a way that you can create structures that
link |
can interact in complexes with each other in order to form high-level functions. I mean,
link |
it's a weird system, that they've figured it out. Well, for sure. I mean, you know, maybe we
link |
should talk about the origins of life too. But proteins themselves, I think, are magical and
link |
incredible, as I said, little bio-nanomachines. And actually, Levinthal,
link |
who was another scientist, a contemporary of Anfinsen, coined what became
link |
known as Levinthal's paradox, which is exactly what you're saying. He calculated roughly that an
link |
average protein, which is maybe 2,000 amino acids long, can fold in maybe 10 to
link |
the power 300 different conformations. So there's 10 to the power 300 different ways that protein
link |
could fold up. And yet somehow, in nature, physics solves this in a matter of milliseconds.
link |
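The scale of that search problem is easy to sanity-check. A minimal sketch, using the 10 to the power 300 figure quoted above and a deliberately generous assumed sampling rate (one conformation per femtosecond, which is faster than real molecular motion):

```python
# Levinthal's paradox with the numbers quoted above: ~10**300
# possible conformations for a long protein chain.
conformations = 10 ** 300

# Assumed (generous) rate: one conformation tested per femtosecond,
# i.e. 10**15 trials per second.
trials_per_second = 10 ** 15
seconds_per_year = 60 * 60 * 24 * 365

# Years a brute-force enumeration of every conformation would take.
years_to_enumerate = conformations // (trials_per_second * seconds_per_year)
print(f"brute-force search would take ~10^{len(str(years_to_enumerate)) - 1} years")
```

Even under those absurd assumptions the search would take vastly longer than the age of the universe, which is why the millisecond folding observed in nature is paradoxical.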
So proteins fold up in your body in, you know, sometimes in fractions of a second. So physics
link |
is somehow solving that search problem. And just to be clear, in many of these cases, maybe you
link |
correct me if I'm wrong, there's often a unique way for that sequence to form itself. So among
link |
that huge number of possibilities, it figures out a way to fold stably. In some cases, there
link |
might be a malfunction, and so on, which leads to a lot of the disorders and stuff like that. But
link |
most of the time it's a unique mapping. And that unique mapping is not obvious.
link |
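Stated as code, the mapping being discussed is a single, very hard function. The sketch below is only the input/output contract: the sequence is made up, and the placeholder body just lays residues on a line rather than predicting anything, so it illustrates shapes, not a real predictor like AlphaFold:

```python
from typing import List, Tuple

Coord = Tuple[float, float, float]  # (x, y, z), e.g. in angstroms

def predict_structure(sequence: str) -> List[Coord]:
    """The protein folding problem as an input/output contract:
    map a 1-D amino-acid string to one 3-D coordinate per residue.
    Placeholder body: residues spaced ~3.8 angstroms apart on a
    line (roughly the typical C-alpha to C-alpha spacing)."""
    return [(3.8 * i, 0.0, 0.0) for i in range(len(sequence))]

# A made-up 10-residue sequence, one letter per amino acid.
structure = predict_structure("MKTAYIAKQR")
assert len(structure) == 10  # one coordinate per residue
```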
No, exactly. Which is what the problem is. Exactly. So there's a unique mapping, usually,
link |
in a healthy system, if it's healthy. And as you say, in disease, so for example, Alzheimer's, one
link |
conjecture is that it's because of a misfolded protein, a protein that folds in the wrong way,
link |
the amyloid beta protein. And because it folds in the wrong way, it gets tangled up,
link |
right, in your neurons. So it's super important to understand both healthy functioning
link |
and also disease, to understand what these things are doing and how they're structured.
link |
Of course, the next step is sometimes proteins change shape when they interact with something.
link |
So they're not just static necessarily in biology.
link |
Maybe you can speak to some interesting, sort of beautiful aspects of these early days
link |
of AlphaFold, of solving this problem, because unlike games, these are real physical systems that are
link |
less amenable to self-play type mechanisms. The size of the data set is smaller than you
link |
might otherwise like. So you have to be very clever about certain things. Is there something you
link |
could speak to what was very hard to solve and what are some beautiful aspects about the solution?
link |
Yeah, I would say AlphaFold is the most complex and also probably the most meaningful system we've
link |
built so far. So it's been an amazing time, actually, in the last, you know, two, three years to see
link |
that come through, because as we talked about earlier, you know, games are what we started on,
link |
building things like AlphaGo and AlphaZero. But really the ultimate goal was not just to crack
link |
games; it was to use them to bootstrap general learning systems we could then apply to
link |
real world challenges. Specifically, my passion is scientific challenges like protein folding.
link |
And AlphaFold, of course, is our first big proof point of that. And so, you know, in terms of
link |
the data and the amount of innovations that had to go into it, it was like
link |
more than 30 different component algorithms that needed to be put together to crack protein folding.
link |
I think some of the big innovations were building in some hard-coded constraints
link |
around physics and evolutionary biology, to constrain things like the bond angles
link |
in the protein, but not so much as to impair the learning system. So still
link |
allowing the system to be able to learn the physics itself from the examples that we had.
link |
And the examples, as you say: even after 40 years of
link |
experimental biology, there are only around 150,000 proteins whose structures have been found.
link |
So that was our training set, which is much less than we would normally like to use. But using
link |
various tricks, things like self-distillation, so actually using AlphaFold predictions,
link |
some of the best predictions that it was highly confident in, we put them back into the
link |
training set, right, to make the training set bigger. That was critical to AlphaFold working.
link |
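The self-distillation loop just described can be sketched in a few lines. Everything here is a hypothetical stand-in, not DeepMind's actual training code: predict on unlabeled sequences, keep only high-confidence predictions, and fold them back into the training set as pseudo-labels.

```python
# Sketch of self-distillation: `model`, `train`, and the confidence
# scores are hypothetical stand-ins. `model(seq)` is assumed to
# return a (prediction, confidence) pair.

def self_distill(model, labeled, unlabeled, train, threshold=0.9, rounds=3):
    training_set = list(labeled)
    for _ in range(rounds):
        model = train(model, training_set)       # retrain on current set
        pseudo_labeled = []
        for seq in unlabeled:
            prediction, confidence = model(seq)
            if confidence >= threshold:          # keep only confident predictions
                pseudo_labeled.append((seq, prediction))
        # Enlarged training set: real labels plus confident pseudo-labels.
        training_set = list(labeled) + pseudo_labeled
    return model, training_set
```

With toy stand-ins for `model` and `train`, the training set grows each round by exactly the predictions that clear the confidence threshold.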
So there was actually a huge number of different innovations like that that were required to
link |
ultimately crack the problem. AlphaFold 1, what it produced was a distogram, a kind of
link |
matrix of the pairwise distances between all of the residues in the protein. And then there had
link |
to be a separate optimization process to create the 3D structure. And what we did for AlphaFold
link |
2 was make it truly end to end. So we went straight from the amino acid sequence to the
link |
3D structure directly without going through this intermediate step. And in machine learning,
link |
what we've always found is that the more end to end you can make it, the better the system.
link |
And it's probably because in the end, the system is better at learning what the constraints are
link |
than we are, as the human designers, at specifying them. So anytime you can let it flow end to end
link |
and actually just generate what it is you're really looking for, in this case, the 3D structure,
link |
you're better off than having this intermediate step, which you then have to handcraft the next
link |
step for. So it's better to let the gradients and the learning flow all the way through the system
link |
from the endpoint, the end output you want to the inputs.
link |
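The two designs being contrasted can be written down as a schematic. The function arguments are hypothetical stand-ins, not real DeepMind code; the point is only where learning stops:

```python
# Schematic contrast between a two-stage pipeline (AlphaFold 1 style)
# and an end-to-end one (AlphaFold 2 style).

def two_stage(sequence, distance_net, optimize):
    # Stage 1: a network predicts pairwise residue distances
    # (the distogram); learning stops at this intermediate output.
    distogram = distance_net(sequence)
    # Stage 2: a hand-crafted optimizer fits 3-D coordinates to those
    # distances -- no gradients flow back through this step.
    return optimize(distogram)

def end_to_end(sequence, structure_net):
    # One differentiable model straight from sequence to coordinates:
    # gradients can flow from the final output all the way back to
    # the inputs, so the system learns its own constraints.
    return structure_net(sequence)
```

In the two-stage version, any mistake baked into the hand-crafted optimizer can never be corrected by training, which is the cost the end-to-end design removes.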
So that's a good way to start on a new problem. Handcraft a bunch of stuff, add a bunch of manual
link |
constraints with a small learning piece and grow that learning piece until it consumes the whole
link |
thing. That's right. And so you can also see, you know, this is a bit of a method we've developed
link |
over doing many sort of successful alphas, AlphaX projects as we call them, right? And the easiest
link |
way to see that is the evolution of AlphaGo to AlphaZero. So AlphaGo was a learning system,
link |
but it was specifically trained to only play Go, right? And what we wanted to do with
link |
the first version of AlphaGo was just get to world champion performance, no matter how we did it,
link |
right? And then, of course, with AlphaGo Zero, we removed the need to use human
link |
games as a starting point, right? So it could just play against itself from a random starting
link |
point from the beginning. So that removed the need for human knowledge about Go. And then finally,
link |
AlphaZero generalized it so that anything Go-specific we had in the system, including things like the
link |
symmetry of the Go board, was removed. So AlphaZero could play from scratch any two-player
link |
game. And then MuZero, which is our latest version in that line, extended it so that you
link |
didn't even have to give it the rules of the game. It would learn that for
link |
itself. So it could also deal with computer games as well as board games. So that line of AlphaGo,
link |
AlphaGo Zero, AlphaZero, MuZero, that's the full trajectory of what you can take from
link |
imitation learning to full self-supervised learning. Yeah, exactly. And learning
link |
the entire structure of the environment you're put in from scratch, right? And bootstrapping
link |
it through self-play. But the thing is, it would have been impossible, I think, or very
link |
hard for us to build AlphaZero or MuZero first out of the box. Even psychologically, because you
link |
have to believe in yourself for a very long time, you're constantly dealing with doubt because a
link |
lot of people say that it's impossible. Exactly. So it's hard enough just to do Go, as you were
link |
saying; everyone thought that was impossible, or at least a decade away, when we
link |
did it back in 2015, 2016. And so, yes, it would have been psychologically
link |
probably very difficult, as well as the fact that, of course, we learned a lot by building AlphaGo
link |
first. Right. So I think this is why I call AI an engineering science. It's one of the most
link |
fascinating science disciplines. But it's also an engineering science in the sense that,
link |
unlike natural sciences, the phenomenon you're studying doesn't exist out in nature. You
link |
have to build the artifact first, and then you can
link |
pull it apart and study how it works. This is tough to ask you this question, because you probably
link |
will say it's everything. But let's try to think through this, because you're
link |
in a very interesting position where deep mind is a place of some of the most brilliant ideas in
link |
the history of AI, but it's also a place of brilliant engineering. So how much of solving
link |
intelligence, this big goal for deep mind, how much of it is science? How much is engineering?
link |
So how much is the algorithms? How much is the data? How much is the hardware compute infrastructure?
link |
How much is the software compute infrastructure? Yeah. What else is there? How much is the human
link |
infrastructure? And like just the humans interacting certain kinds of ways in all the space of all
link |
those ideas? And how much is maybe like philosophy? How much, what's the key? If you were to sort of
link |
look back, like if we go forward 200 years and look back, what was the key thing that solved
link |
intelligence? Is it the ideas or the engineering? I think it's a combination. First of all,
link |
of course, it's a combination of all those things, but the ratios of them changed over time.
link |
Right. So even in the last 12 years: we started DeepMind in 2010, which is hard to imagine now
link |
because 2010 is only 12 short years ago, but nobody was talking about AI. I don't know if you remember
link |
back to your MIT days. No one was talking about it. I did a postdoc at MIT back around then,
link |
and it was sort of thought of as, well, look, we know AI doesn't work. We tried this hard in the
link |
90s at places like MIT, mostly using logic systems and what we would now call good old-
link |
fashioned AI. People like Minsky and Patrick Winston, you know, all
link |
these characters, right? And I used to debate a few of them. And they used to think I was mad,
link |
thinking that some new advance could be made with learning systems. I was actually pleased
link |
to hear that because at least you know, you're on a unique track at that point, right? Even if
link |
all of your professors are telling you you're mad. And of course, in industry, it was
link |
difficult to get two cents together, which is hard to imagine now as well, given that AI is the
link |
biggest sort of buzzword in VC, and fundraising is easy, and all these kinds of things today.
link |
So back in 2010, it was very difficult. And when we started then, Shane and I used to
link |
discuss what were the sort of founding tenets of DeepMind. And it was various things. One was
link |
algorithmic advances. So deep learning, you know, Geoff Hinton and co had just sort of invented
link |
that in academia, but no one in industry knew about it. We loved reinforcement learning. We
link |
thought that could be scaled up. But also understanding about the human brain had advanced
link |
quite a lot in the decade prior with fMRI machines and other things. So we could get some
link |
good hints about architectures and algorithms and sort of representations maybe that the brain uses.
link |
So at a systems level, not at an implementation level. And then the other big thing was compute
link |
and GPUs, right? So we could see that compute was going to be really useful, and it got to a place
link |
where it had become commoditized, mostly through the games industry. And that could be taken advantage
link |
of. And then the final thing was also mathematical and theoretical definitions of intelligence.
link |
So things like AIXI, which Shane worked on with his supervisor Marcus Hutter, which is
link |
this sort of theoretical proof, really, of universal intelligence, which is actually a
link |
reinforcement learning system in the limit. I mean, it assumes infinite compute and infinite
link |
memory, in the way, you know, that a Turing machine proof does. But I was also waiting to see
link |
something like that, too. You know, the Turing machines and computation theory that people
link |
like Turing and Shannon came up with underpin modern computer science. I was waiting
link |
for a theory like that to sort of underpin AGI research. So when I met Shane and saw he was
link |
working on something like that, you know, that to me was a sort of final piece of the jigsaw.
link |
So in the early days, I would say that ideas were the most important. You know, for us,
link |
it was deep reinforcement learning, scaling up deep learning. Of course, we've seen transformers.
link |
So huge leaps, I would say, like three or four, if you think from 2010 till now, huge
link |
evolutions, things like AlphaGo. And maybe there's a few more still needed. But as we get closer to
link |
AI, AGI, I think engineering becomes more and more important, and data. Because scale, and of
link |
course the recent results of GPT-3 and all the big language models and large models, including our
link |
own, have shown that scale and large models are clearly going to be a necessary, but perhaps
link |
not sufficient, part of an AGI solution. And throughout that, like you said, and I'd like
link |
to give you a big thank you. You're one of the pioneers in this, in sticking by ideas like
link |
reinforcement learning, that this can actually work, given the limited success in the past.
link |
And also, which we still don't know, but proudly having the best researchers in the world
link |
and talking about solving intelligence. So talking about whatever you call it, AGI or
link |
something like this, that speaking of MIT, that's just something you wouldn't bring up.
link |
No, well, maybe you would have, like, 40, 50 years ago. But back then, AI was a place where you did tinkering,
link |
very small scale, not very ambitious projects. And maybe the most ambitious projects were in
link |
the space of robotics and doing like the DARPA challenge. But the task of solving intelligence
link |
and believing you can, that's really, really powerful. So in order for engineering to do its work,
link |
to have great engineers build great systems, you have to have that belief that threads
link |
throughout the whole thing that you can actually solve some of these impossible challenges.
link |
Yeah, that's right. And back in 2010, our mission statement, and it still is today,
link |
was: step one, solve intelligence; step two, use it to solve everything
link |
else. So if you can imagine pitching that to a VC in 2010, the kind of looks we got, we managed to
link |
find a few kooky people to back us. But it was tricky. And it got to the point where we wouldn't
link |
mention it to any of our professors, because they would just eye-roll and think we'd committed
link |
career suicide. So there were a lot of things we had to navigate. But we always believed it.
link |
And one reason, by the way, one reason I've always believed in reinforcement learning is that, if
link |
you look at neuroscience, that is the way that the primate brain learns. One of the main mechanisms
link |
is that the dopamine system implements some form of TD learning. It's a very famous result from the late
link |
90s, where they saw this in monkeys as a propagating prediction error. So again, this is what
link |
I think you can use neuroscience for: when you're doing
link |
something as ambitious as trying to solve intelligence, and it's blue sky research, no one
link |
knows how to do it, you need to use any evidence or any source of information you can to help guide
link |
you in the right direction or give you confidence you're going in the right direction. So that was
link |
one reason we pushed so hard on that. And it's just going back to your earlier question about
link |
organization. The other big thing that I think we innovated with at DeepMind to encourage invention
link |
and innovation was the multidisciplinary organization we built, and we still have today.
link |
So DeepMind originally was a confluence of the most cutting edge knowledge in neuroscience
link |
with machine learning, engineering, and mathematics, and gaming. And then since then,
link |
we've built that out even further. So we have philosophers here and ethicists, but also
link |
other types of scientists, physicists, and so on. And that's what it brings together. I tried to build a
link |
sort of new type of Bell Labs, as in its golden era, right? A new expression of that, to try
link |
and foster this incredible sort of innovation machine. So talking about the humans in the machine,
link |
DeepMind itself is a learning machine with a lot of amazing human minds in it, coming together to
link |
try and build these learning systems. If we return to the big ambitious dream of AlphaFold,
link |
that may be the early steps on a very long journey in biology,
link |
do you think the same kind of approach can be used to predict the structure and function of more
link |
complex biological systems, like multi-protein interactions? And then, I mean, you can go out
link |
from there, just simulating bigger and bigger systems that eventually simulate something like
link |
the human brain or the human body, just the big mush, the beautiful, resilient mess
link |
of biology. Do you see that as a long term vision? I do. And I think, if you think about what are
link |
the top things I wanted to apply AI to once we had powerful enough systems, biology and curing
link |
diseases and understanding biology was right up there at the top of my list. That's one of the reasons
link |
I personally pushed for that myself with AlphaFold. But I think AlphaFold, amazing as it is,
link |
is just the beginning. And I hope it's evidence of what could be done with computational methods.
link |
So AlphaFold solved this huge problem of the structure of proteins, but biology is dynamic.
link |
So really, what I imagine from here, and we're working on all these things now, is protein-
link |
protein interaction, protein-ligand binding, so reacting with molecules, then you want to
link |
build up to pathways, and then eventually a virtual cell. That's my dream, maybe in the next
link |
10 years. And I've been talking actually to a lot of biologists, friends of mine, Paul Nurse,
link |
who runs the Crick Institute, an amazing biologist, a Nobel Prize-winning biologist. We've been discussing
link |
virtual cells for 20 years now. Could you build a virtual simulation of a cell? And if you
link |
could, that would be incredible for biology and disease discovery, because you could do loads of
link |
experiments on the virtual cell, and then only at the last stage validate it in the wet lab.
link |
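The workflow described here, doing most of the experimental search in silico and validating only the final candidates in the wet lab, can be caricatured in a few lines. Everything below is invented for illustration: the library size, the scoring function, and the cutoff are not from the conversation.

```python
import random

random.seed(0)

def in_silico_score(candidate):
    # Hypothetical cheap computational predictor (standing in for something
    # like a learned binding-affinity model). Here it is just random.
    return random.random()

# A made-up candidate library, far too large to test experimentally.
library = [f"compound_{i}" for i in range(100_000)]

# Score everything cheaply in silico, then send only the top slice to the
# "wet lab": a ~1000x reduction in experimental workload.
ranked = sorted(library, key=in_silico_score, reverse=True)
to_validate = ranked[:100]

print(f"{len(library)} candidates screened, {len(to_validate)} sent to the wet lab")
```

The point of the sketch is only the shape of the pipeline: a huge cheap filter in front of a small expensive validation step, which is where the claimed order-of-magnitude speedup would come from.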
So in terms of the search space of discovering new drugs, it takes 10 years roughly to go from
link |
identifying a target to having a drug candidate. Maybe that could be shortened by an order of
link |
magnitude, if you could do most of that work in silico. So in order to get to a virtual cell, we
link |
have to build up understanding of different parts of biology and the interactions. And so every
link |
few years we talk about this, I talked about this with Paul, and then finally, last year,
link |
after AlphaFold, I said, now's the time, we can finally go for it. And AlphaFold's the first
link |
proof point that this might be possible. And it's very exciting; we have some collaborations
link |
with his lab. They're just across the road from us, actually; it's wonderful being here in King's Cross
link |
with the Crick Institute across the road. And I think for the next steps, there's going
link |
to be some amazing advances in biology built on top of things like AlphaFold. We're already seeing
link |
that with the community doing that after we've open sourced it and released it. And I often say
link |
that, if you think of mathematics as the perfect description language for physics,
link |
I think AI might end up being the perfect description language for biology, because
link |
biology is so messy, it's so emergent, so dynamic and complex. I find it very hard to
link |
believe we'll ever get to something as elegant as Newton's laws of motion to describe a cell,
link |
right? It's just too complicated. So I think AI is the right tool for this.
link |
You have to start at the basic building blocks and use AI to run the simulation
link |
for all those building blocks. So you have a very strong way to predict, given these
link |
building blocks, what kind of biology emerges, the function and the evolution of that biological
link |
system. It's almost like a cellular automaton. You have to run it. You can't analyze it from a high
link |
level. You have to take the basic ingredients, figure out the rules and let it run. But in this
link |
case, the rules are very difficult to figure out. You have to learn them. That's exactly it. So the
link |
biology is too complicated to figure out the rules. It's too emergent, too dynamic, say, compared
link |
to a physics system, like the motion of a planet. And so you have to learn the rules. And that's
link |
exactly the type of systems that we're building. So you mentioned you've open sourced AlphaFold
link |
and even the data involved. To me personally, I'm also really happy, and a big thank you, for open
link |
sourcing MuJoCo, the physics simulation engine that's often used for robotics research and so on.
link |
So I think that's a pretty gangster move. So very few companies or people do that kind of thing.
link |
What's the philosophy behind that? It's a case-by-case basis. And in both those cases, we felt
link |
that was the maximum benefit to humanity to do that, and to the scientific community,
link |
in one case the robotics and physics community with MuJoCo. You purchased it in order to open source it?
link |
Yes, we purchased it for the express purpose of open sourcing it. So I hope people appreciate
link |
that. It's great to hear that you do. And then the second thing was, and mostly we did it because
link |
the person building it was not able to cope with supporting it anymore because it got too big for
link |
him. He's an amazing professor who built it in the first place. So we helped him out with that.
link |
And then with AlphaFold, it's even bigger, I would say. And I think in that case,
link |
we decided that there were so many downstream applications of AlphaFold that we couldn't
link |
possibly even imagine what they all were. So the best way to accelerate drug discovery and also
link |
fundamental research would be to give all that data away and the system itself.
link |
It's been so gratifying to see what people have done with it within just one year,
link |
which is a short amount of time in science. It's been used by over 500,000 researchers. We
link |
think that's almost every biologist in the world; I think there are roughly 500,000
link |
professional biologists in the world, and they've used it to look at their proteins of interest.
link |
We've seen amazing fundamental research done. So a couple of weeks ago, there was a whole
link |
special issue of Science, including the front cover, which had the nuclear pore complex on it,
link |
which is one of the biggest protein complexes in the body. The nuclear pore complex
link |
governs all the nutrients going in and out of your cell nucleus.
link |
So they're like little gateways that open and close to let things go in and out of your cell
link |
nucleus. So they're really important. But they're huge because they're massive doughnut-ring-shaped
link |
things. And people have been trying to figure out that structure for decades. And they have
link |
lots of experimental data, but it's too low resolution; there are bits missing. And they were
link |
able to, like a giant Lego jigsaw puzzle, use AlphaFold predictions plus experimental data
link |
and combine those two independent sources of information. Actually, four different groups
link |
around the world were able to put it together more or less simultaneously using AlphaFold
link |
predictions. So that's been amazing to see. And pretty much every pharma company, every drug
link |
company executive I've spoken to has said that their teams are using AlphaFold to accelerate
link |
whatever drugs they're trying to discover. So I think the knock on effect has been enormous
link |
in terms of the impact that AlphaFold has made. And it's probably
link |
creating biologists, bringing more people into the field, both through the excitement and
link |
through the technical skills involved. It's almost like a gateway drug to biology.
link |
Yes, it is. And more computational people involved too, hopefully. And I think for us,
link |
you know, for the next stage, as I said, in future we'll have other considerations too.
link |
We're building on top of AlphaFold and these other ideas I discussed with you about protein-protein
link |
interactions and genomics and other things. And not everything will be open source.
link |
Some of it we'll do commercially, because that will be the best way to actually get the most
link |
resources and impact behind it. In other cases, other projects we'll do non-profit style.
link |
And also we have to consider for future things as well, safety and ethics as well,
link |
like synthetic biology, where there is dual use, and we have to think about that as well. With
link |
AlphaFold, we consulted with 30 different bioethicists and other people expert in this field to make
link |
sure it was safe before we released it. So there'll be other considerations in future. But for
link |
right now, I think AlphaFold is a kind of gift from us to the scientific community.
link |
So I'm pretty sure that something like AlphaFold will be part of Nobel Prizes in the future.
link |
But us humans, of course, are horrible with credit assignment. So we'll of course give it to the
link |
humans. Do you think there will be a day when an AI system can't be denied that it earned that
link |
Nobel Prize? Do you think we will see that in the 21st century?
link |
It depends what type of AIs we end up building, right? Whether they're
link |
goal-seeking agents. Who specifies the goals, who comes up with the hypotheses, who determines
link |
which problems to tackle, right? So I think it's about the credit assignment.
link |
Yes, it's about results, exactly, that's part of it. So I think right now, of course, it's amazing human
link |
ingenuity that's behind these systems. And then the system, in my opinion, is just a tool. It would
link |
be a bit like saying, with Galileo and his telescope, that the credit for the ingenuity should go
link |
to the telescope. I mean, it's clearly Galileo who built the tool, which he then uses. So I still
link |
see that in the same way today, even though these tools learn for themselves. I think of
link |
things like alpha fold and the things we're building as the ultimate tools for science
link |
and for acquiring new knowledge to help us as scientists acquire new knowledge.
link |
I think one day there will come a point where an AI system may solve or come up with something like
link |
general relativity off its own bat, not just by averaging everything on the internet or averaging
link |
everything on PubMed. Although that would be interesting to see what that would come up with.
link |
So that to me is a bit like our earlier debate about creativity, you know, inventing Go
link |
rather than just coming up with a good Go move. And so I think, you know,
link |
if we wanted to give it the credit of like a Nobel type of thing, then it would need to invent Go,
link |
sort of invent that new conjecture out of the blue, rather than have it specified by the
link |
human scientists or the human creators. So I think right now it's definitely just a tool.
link |
Although it is interesting how far you get by averaging everything on the internet, like you
link |
said, because, you know, a lot of people do see science as you're always standing on the shoulders
link |
of giants. And the question is how much are you really reaching up above the shoulders of giants?
link |
Maybe it's just assimilating different kinds of results of the past, with ultimately a new
link |
perspective that gives you the breakthrough idea. But that idea may not be novel in the sense that
link |
it couldn't already be discovered on the internet. Maybe the Nobel Prizes of the next hundred years
link |
are already all there on the internet to be discovered. They could be. They could be. I mean,
link |
I think this is one of the big mysteries. First of all, I believe a lot of the
link |
big new breakthroughs in the next few decades, and even in the last decade,
link |
are going to come at the intersection between different subject areas, where there'll be some
link |
new connection that's found between what seemingly were disparate areas. And one can even think of
link |
DeepMind, as I said earlier, as a sort of interdisciplinary combination of neuroscience ideas and AI
link |
engineering ideas originally. And so I think there's that. And then one of the things we can't
link |
imagine today is, and one of the reasons I think people were so surprised by how well large
link |
models worked is that actually, it's very hard for our human minds, our limited human minds to
link |
understand what it would be like to read the whole internet, right? I think we can do a thought
link |
experiment, and I used to do this: well, what if I read the whole of Wikipedia? What would
link |
I know? And I think our minds can just about comprehend maybe what that would be like, but
link |
the whole internet is beyond comprehension. So I think we just don't understand what it would be
link |
like to be able to hold all of that in mind, potentially all active at once, right?
link |
And then maybe what are the connections that are available there? So I think no doubt there are huge
link |
things to be discovered just like that. But I do think there is this other type of creativity, a
link |
true spark of new knowledge, a new idea never thought of before, that can't be averaged from things that
link |
are known. Of course, nobody creates in a vacuum, so
link |
there must be clues somewhere, but it's a unique way of putting those things together. I think
link |
some of the greatest scientists in history have displayed that, I would say, although it's very
link |
hard to know, going back to their time, exactly what was known when they came up with those things.
link |
Although you're making me really think because just the thought experiment
link |
of deeply knowing 100 Wikipedia pages, I don't think I can do it. I've been really impressed by Wikipedia
link |
for technical topics. So if you know 100 pages, or 1,000 pages, I don't think we can
link |
truly comprehend what kind of intelligence that is. It's pretty powerful. If you know how to
link |
use that and integrate that information correctly, I think you can go really far. You can probably
link |
construct thought experiments based on that, like simulate different ideas. So if this is true,
link |
let me run this thought experiment that maybe this is true. It's not really invention. It's
link |
like just taking literally the knowledge and using it to construct a very basic simulation of the
link |
world. I mean, some argue it's romanticized in part, but Einstein would do the same kind of thing
link |
with thought experiments. Yeah, one could imagine doing that systematically across millions of
link |
Wikipedia pages, plus PubMed, all these things. I think there are many, many things to be discovered
link |
like that that are hugely useful. You could imagine, and I want us to do some of these things in
link |
materials science, like room-temperature superconductors. That's something on my list; one day I'd
link |
like to have an AI system help build better, optimized batteries. For all of these sorts of
link |
things, I think a systematic search, guided by a model, could be extremely
link |
powerful. So speaking of which, you have a paper on nuclear fusion, Magnetic Control of
link |
Tokamak Plasmas Through Deep Reinforcement Learning. So you're seeking to solve nuclear fusion
link |
with deep RL, so it's doing control of high-temperature plasmas. Can you explain this work
link |
and can AI eventually solve nuclear fusion? It's been very fun the last year or two, and very productive,
link |
because we've been ticking off a lot of my dream projects, if you like, things that I've collected
link |
over the years: areas of science that I think could be very transformative
link |
if we helped accelerate them, and that are really interesting scientific challenges in and of themselves.
link |
This is energy. So energy, yes, exactly. So energy and climate. So we talked about disease and
link |
biology as being one of the biggest places I think AI can help with. I think energy and climate
link |
is another one. So maybe they would be my top two. And fusion is one area I think AI can help with.
link |
Now, fusion has many challenges, mostly physics and material science and engineering challenges
link |
as well to build these massive fusion reactors and contain the plasma. And what we try to do,
link |
and whenever we go into a new field to apply our systems, we look for and talk to domain experts;
link |
we try and find the best people in the world to collaborate with. In this case, in fusion,
link |
we collaborated with EPFL in Switzerland, the Swiss Federal Institute of Technology in Lausanne, who are amazing.
link |
They have a test reactor they were willing to let us use, which I double-checked with the team
link |
we were going to use carefully and safely. I was impressed they managed to persuade them to let us
link |
use it. And it's an amazing test reactor they have there. And they try all sorts of pretty crazy
link |
experiments on it. And what we tend to look at is if we go into a new domain like fusion,
link |
what are all the bottleneck problems? Thinking from first principles, what are all the
link |
bottleneck problems that are still stopping fusion from working today? And then we get a fusion expert to
link |
tell us. And then we look at those bottlenecks and see which ones are amenable
link |
to our AI methods today, and would be interesting from a research perspective, from our point of
link |
view, from an AI point of view, and would address one of their bottlenecks. And in this
link |
case, plasma control was perfect. So the plasma, it's a million degrees Celsius, something like that,
link |
hotter than the sun. And there's obviously no material that can contain it. So it has to
link |
be contained with these very powerful superconducting magnetic fields. But the problem
link |
is plasma is pretty unstable, as you can imagine. You're kind of holding a mini sun, a mini star,
link |
in a reactor. So you kind of want to predict ahead of time what the plasma is going to do,
link |
so you can move the magnetic field within a few milliseconds to basically contain what it's going
link |
to do next. So it seems like a perfect problem if you think of it as a reinforcement learning
link |
prediction problem. So you've got a controller, you've got to move the magnetic field. And until
link |
we came along, they were doing it with traditional operational research type of controllers,
link |
which are kind of handcrafted. And the problem is, of course, they can't react in the moment to
link |
something the plasma is doing. They have to be hard-coded. And again, our
link |
normal go-to solution is that we would like to learn that instead. And they also had a simulator
link |
of the plasma. So there were lots of criteria that matched what we like to use.
link |
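The setup described above, a fast feedback controller trained inside a simulator rather than hand-coded, can be sketched as a toy. This is not the actual DeepMind/EPFL system, which is a deep RL agent trained against a real tokamak simulator; the 1-D environment, the gain search, and all the numbers below are made up purely to show the simulate-then-learn-a-controller idea.

```python
import random

random.seed(1)

def simulate(kp, kd, steps=1000):
    """One episode of a toy 1-D 'plasma' that drifts each millisecond.
    Returns total squared displacement from centre (lower is better)."""
    pos, vel, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        action = -kp * pos - kd * vel          # controller opposes the drift
        vel += action + random.gauss(0, 0.01)  # actuation plus random kicks
        pos += vel
        cost += pos * pos                      # penalize leaving the centre
        if abs(pos) > 1e6:                     # diverged: abandon this controller
            return float("inf")
    return cost

# Crude stand-in for RL training: evaluate random gain settings in the
# simulator and keep the best-performing pair.
candidates = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(200)]
best = min(candidates, key=lambda g: simulate(*g))
print("best gains (kp, kd):", best)
```

The real work replaces the random gain search with a deep RL agent and the toy dynamics with a physics-based plasma simulator, but the loop, act, observe, penalize instability, improve the policy, has the same shape.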
So can AI eventually solve nuclear fusion?
link |
Well, so with this problem, which we published in a Nature paper last year, we held
link |
the plasma in specific shapes. So actually, it's almost like carving the
link |
plasma into different shapes and holding it there for a record amount of time. So that's one of the
link |
problems of fusion sort of solved. So you have a controller that's able to, no matter the shape,
link |
contain it. Yeah, contain it and hold it in structure. And there are different shapes that are
link |
better for energy production, called droplets and so on. So that was huge. And now we're looking,
link |
we're talking to lots of fusion startups to see what's the next problem we can tackle in the fusion
link |
area. So another fascinating space: a paper titled Pushing the Frontiers of Density Functionals
link |
by Solving the Fractional Electron Problem. So you're taking on modeling and simulating the
link |
quantum mechanical behavior of electrons. Can you explain this work and can AI model and simulate
link |
arbitrary quantum mechanical systems in the future? Yeah, so this is another problem I've had my eye on
link |
for a decade or more, which is sort of simulating the properties of electrons. If you can do that,
link |
you can basically describe how elements and materials and substances work. So it's kind
link |
of like fundamental if you want to advance materials science. And we have Schrödinger's
link |
equation, and then we have approximations to that, density functional theory. These things are
link |
famous. And people try and write approximations to these functionals and kind of come
link |
up with descriptions of the electron clouds: where they're going to go, how they're going to
link |
interact when you put two elements together. And what we try to do is learn a simulation,
link |
learn a functional, that will describe more types of chemistry. So until now,
link |
you know, you can run expensive simulations, but then you can only simulate very small molecules,
link |
very simple molecules. We would like to simulate large materials. And so today there's no way of
link |
doing that. And we're building up towards building functionals that approximate Schrödinger's equation
link |
and then allow you to describe what the electrons are doing. And all of materials science and
link |
material properties are governed by the electrons and how they interact. So you have a good summarization
link |
of the simulation through the functional, but one that is still close to what the actual simulation
link |
would come out with. So how difficult is that task? What's involved in it? Is it
link |
running those complicated simulations and learning the task of mapping from the initial
link |
conditions and the parameters of the simulation, learning what the functional would be? Yeah.
link |
So it's pretty tricky. The nice thing is we can run a
link |
lot of the simulations, the molecular dynamics simulations, on our compute clusters. And so that
link |
generates a lot of data. So in this case, the data is generated, and we like those sorts of systems;
link |
that's why we use games: the simulator generates data. And we can kind of create as much of it as
link |
we want, really. And whenever any computers are free in the cloud,
link |
we just run some of these calculations, compute cluster calculations.
link |
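The pipeline described here, expensive simulations generating training data from which a much cheaper learned functional is then fit, is at heart supervised learning. A minimal sketch, where `expensive_simulation` is a made-up stand-in for the costly calculation and a polynomial fit stands in for the neural network DeepMind actually trains:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(x):
    # Pretend this is a costly quantum-chemistry calculation; in reality
    # each data point might take hours of compute. This function is invented.
    return np.sin(3 * x) + 0.5 * x

# Step 1: spend compute once to generate training data.
X = rng.uniform(-2, 2, size=500)
y = expensive_simulation(X)

# Step 2: fit a cheap surrogate to the data (here a degree-9 polynomial;
# the real work learns a neural-network functional instead).
coeffs = np.polyfit(X, y, deg=9)
surrogate = np.poly1d(coeffs)

# Step 3: the surrogate now answers queries far faster than the simulator,
# while staying close to what the simulation would produce.
x_new = 1.3
print(abs(surrogate(x_new) - expensive_simulation(x_new)))  # error well under 0.1
```

The asymmetry is the whole point: data generation is paid for once, and every later evaluation of the learned functional is nearly free compared to rerunning the simulation.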
The free compute time is used up on quantum mechanics. Yeah, quantum mechanics, exactly,
link |
simulations and protein simulations and other things. And so, you know, when you're
link |
not searching YouTube for cat videos, we're using those computers usefully, on quantum
link |
chemistry, and putting them to good use. And then all of that
link |
computational data that's generated, we can then try and learn the functionals from that,
link |
which of course, once learned, are way more efficient than running those
link |
simulations would be. Do you think one day AI may allow us to do something like basically crack open
link |
physics, so do something like travel faster than the speed of light? My ultimate aim,
link |
the reason I have personally been working on AI my whole life, was to build a tool
link |
to help us understand the universe. And that means physics, really, and the nature
link |
of reality. So I don't think we have systems that are capable of doing that yet. But when we get
link |
towards AGI, I think that's one of the first things I think we should apply AGI to. I would
link |
like to test the limits of physics and our knowledge of physics. There's so many things we
link |
don't know. This is one thing I find fascinating about science. And, you know, I'm a huge proponent
link |
of the scientific method, it being one of the greatest ideas humanity's ever had, one that's allowed
link |
us to progress with our knowledge. But if you're a true scientist, I think what you find is that
link |
the more you find out, the more you realize we don't know. And I always think it's surprising
link |
that more people aren't troubled. You know, every night I think about all these things we interact
link |
with all the time that we have no idea how they work: time, consciousness, gravity, life.
link |
These are all the fundamental things of nature. We don't really know what they are.
link |
To live life, we pin certain assumptions on them and kind of treat our assumptions as if
link |
they're facts, which allows us to sort of box them off somehow. Yeah, box them off somehow.
link |
But the reality is when you think of time, you should remind yourself, you should
link |
take it off the shelf and realize, no, we have a bunch of assumptions. There's even
link |
a lot of debate, a lot of uncertainty, about exactly what time is.
link |
Is there an arrow of time? You know, there are a lot of fundamental questions that you can't
link |
just make assumptions about. And maybe AI allows you to not put anything on the shelf.
link |
Not make any hard assumptions and really open it up and see what's going on.
link |
Exactly. I think we should be truly open minded about that. And exactly that,
link |
not be dogmatic to a particular theory. It'll also allow us to build better tools,
link |
experimental tools eventually that can then test certain theories that may not be testable today.
link |
Like the things we spoke about at the beginning, about the computational nature
link |
of the universe: if that were true, how might one go about testing that? And there are people,
link |
like Scott Aaronson and others, who've conjectured about how much information a
link |
specific Planck unit of space and time can contain. So one might be able to think about testing those
link |
ideas if you had AI helping you build some new exquisite experimental tools. This is what I
link |
imagine, many decades from now, we'll be able to do.
link |
And what kind of questions can be answered through running a simulation of them? There's
link |
a bunch of physics simulations you can imagine that could be run in some kind of efficient way,
link |
much like you're doing in the quantum simulation work.
link |
And perhaps even the origin of life. So figuring out, going even back before
link |
where the work of AlphaFold begins, how this whole thing emerges from a rock,
link |
from a static thing. Is that something you have your eye on,
link |
trying to understand the origin of life? First of all, yourself, what do you think?
link |
How the heck did life originate on Earth?
link |
Yeah. Well, maybe I'll come to that in a second. But I think the ultimate use of AI is to kind
link |
of use it to accelerate science to the maximum. So I think of it a little bit like the tree of
link |
all knowledge. If you imagine that's all the knowledge there is in the universe to attain.
link |
We've sort of barely scratched the surface of that so far, even though we've done pretty
link |
well as humanity since the Enlightenment. And I think AI will turbocharge all of that,
link |
like we've seen with AlphaFold. And I want to explore as much of that tree of knowledge as
link |
it's possible to do. And I think that involves AI helping us with understanding or finding patterns,
link |
but also potentially designing and building new tools, experimental tools.
link |
So I think it's all of that, and also running simulations and learning simulations.
link |
All of that we're sort of doing at a baby-steps level here. But I can imagine,
link |
in the decades to come, what the full flourishing of that line of thinking will be. It's
link |
going to be truly incredible, I would say. If I visualize this tree of knowledge,
link |
something tells me that that tree of knowledge for humans is much smaller. In a set of all
link |
possible trees of knowledge, it's actually quite small, given our cognitive limitations,
link |
limited cognitive capabilities, that even with the tools we build, we still won't be able to
link |
understand a lot of things. And that's perhaps what non human systems might be able to reach
link |
further, not just as tools, but in themselves, understanding something that they can bring
link |
back. Yeah, it could well be. So I mean, there's so many things that are sort of encapsulated
link |
in what you just said there. I think, first of all, there are a few different questions:
link |
what do we understand today? What could the human mind understand? And what is the totality of what
link |
is there to be understood? And so there are three concentric sets; you can think of them as three larger
link |
and larger trees, or as exploring more branches of that tree. And I think with AI, we're going to
link |
explore the whole lot. Now, the question is, if you think about what is the totality of what
link |
could be understood, there may be some fundamental physics reasons why certain things can't be
link |
understood, like what's outside a simulation or outside the universe. Maybe it's not understandable
link |
from within the universe. So there may be some hard constraints like that. It could be smaller
link |
constraints. We think of space-time as fundamental. Our human brains are really used to this idea of
link |
a three-dimensional world with time. Right. Maybe. But our tools could go beyond that.
link |
They wouldn't necessarily have that limitation. They could think in 11 dimensions, 12 dimensions,
link |
whatever is needed. But we could still maybe understand that in several different ways.
link |
The example I always give is when I play Garry Kasparov at speed chess, or we've talked about
link |
chess and these kinds of things, if you're reasonably good at chess, you can't come up with
link |
the move Garry comes up with, but he can explain it to you. And you can understand.
link |
And you can understand post hoc the reasoning. So I think there's an even further level of like,
link |
well, maybe you couldn't have invented that thing, but going back to using language again,
link |
perhaps you can understand and appreciate that. The same way you can appreciate Vivaldi or Mozart:
link |
you can appreciate the beauty of it without being able to construct it
link |
yourself, right? Without inventing the music yourself. So I think we see this in all forms of life.
link |
So it'll be that times, you know, a million. But you can imagine also that one sign of
link |
intelligence is the ability to explain things clearly and simply, right? You know, people like
link |
Richard Feynman, another one of my all-time heroes, used to say that, right?
link |
If you can explain a complex
link |
topic simply, then that's one of the best signs of you understanding it. So I can see myself
link |
talking trash to the AI system in that way. It gets frustrated at how dumb I am as it tries to explain
link |
something to me, and I'm like, well, that means you're not intelligent, because if you were intelligent,
link |
you'd be able to explain it simply. Yeah, of course, as you know, there's also the other
link |
option: we could enhance ourselves. With our devices, we are already sort of
link |
symbiotic with our compute devices, right? With our phones and other things. And, you know,
link |
there's stuff like Neuralink, et cetera, that could advance that further.
link |
So I think there are lots of really amazing possibilities that I could foresee from here.
link |
Well, let me ask you some wild questions. So out there, looking for friends,
link |
do you think there are a lot of alien civilizations out there?
link |
So I guess this also goes back to your origin of life question too, because I think that's key.
link |
My personal opinion, looking at all this, and, you know, physics is one of my hobbies, I guess,
link |
so it's something I think about a lot and talk to a lot of experts on and read a
link |
lot of books on. And I think my feeling currently is that we are alone. I think that's the most
link |
likely scenario given what evidence we have. And the reasoning is, you know,
link |
we've tried, with things like the SETI program, and I guess since the dawning of the space age,
link |
we've had telescopes, opened radio telescopes and other things, to try
link |
to detect signals. Now, if you think about the evolution of humans on Earth,
link |
we could easily have been a million years ahead of where we are now, or a million years behind,
link |
right, easily, with just some slightly different quirk happening hundreds of thousands of years
link |
ago, things could have been slightly different. If the meteor had hit the dinosaurs
link |
a million years earlier, maybe things would have evolved such that we'd be a million years ahead of where
link |
we are now. So what that means is, if you imagine where humanity will be in a few hundred years,
link |
let alone a million years, especially if we hopefully, you know, solve things like climate
link |
change and other things, and we continue to flourish, and we build things like AI, and we
link |
do space traveling, and all of the stuff that humans have dreamed of forever, right, and sci fi
link |
has talked about forever. We will be spreading across the stars, right. And von Neumann famously
link |
calculated, you know, it would only take about a million years if you sent out von Neumann probes
link |
to the nearest other solar systems, and then all they did was build two more
link |
versions of themselves and send those two out to the next nearest systems.
link |
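The von Neumann doubling argument can be sanity checked with a rough back-of-the-envelope calculation. The star count, probe speed, and galaxy size below are illustrative assumptions, not figures from the conversation:

```python
import math

# Back-of-the-envelope check on self-replicating probes (all numbers are
# illustrative assumptions): each probe builds two copies at every stop,
# so coverage grows exponentially and travel time, not replication count,
# dominates the timescale.
star_systems = 1e11            # rough Milky Way star count (assumption)
galaxy_diameter_ly = 100_000   # Milky Way diameter in light years
probe_speed_c = 0.1            # assumed cruise speed as a fraction of c

# Doublings needed for the probe population to cover every system.
hops = math.ceil(math.log2(star_systems))

# Time to sweep the galaxy is set by crossing it at the probe's speed.
crossing_years = galaxy_diameter_ly / probe_speed_c

print(f"{hops} doublings cover the galaxy; ~{crossing_years:,.0f} years to cross it")
```

With these assumed numbers, only about 37 doublings are needed, and the overall timescale is set by the roughly million-year crossing time, which matches the order of magnitude cited here.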
Then, you know, within a million years, I think you would have one of these probes in every system
link |
in the galaxy. So in cosmological time, that's actually a very short amount of
link |
time. And, you know, people like Dyson have thought about constructing Dyson spheres around
link |
stars to collect all the energy coming out of the star; constructions
link |
like that would be visible across space, probably even across a galaxy. And then, you know, if
link |
you think about all of our radio and television emissions that have gone out since, you know,
link |
the 30s and 40s, imagine a million years of that, and now hundreds of civilizations doing
link |
that. When we opened our ears, at the point we got technologically sophisticated enough in the
link |
space age, we should have heard a cacophony of voices. We should have joined that cacophony of
link |
voices. And instead, we opened our ears and we heard nothing. And many people who argue
link |
that there are aliens would say, well, we haven't really done an exhaustive search yet. And maybe
link |
we're looking in the wrong bands and we've got the wrong devices, and we wouldn't notice what an
link |
alien life form was like because it would be so different from what we're used to. But, you know,
link |
I don't really buy that; it shouldn't be as difficult as that. I think we've searched enough.
link |
They should be everywhere.
link |
If they were out there, they should be everywhere. We should see Dyson spheres being put up,
link |
suns blinking in and out. You know, there should be a lot of evidence for those things.
link |
And then there are other people who argue the sort of safari view of, like, well, we're a
link |
primitive species still because we're not space faring yet, and, you know, there's
link |
some kind of global, universal rule not to interfere, your Star Trek rule. But like, look,
link |
we can't even coordinate humans to deal with climate change, and we're one species. What
link |
is the chance that all of these different
link |
alien civilizations would have the same priorities and agree across,
link |
you know, these kinds of matters? And even if that was true, and we were in some sort of safari
link |
for our own good, to me, that's not much different from the simulation hypothesis. Because what does
link |
it mean, the simulation hypothesis? I think at its most fundamental level, it means what we're
link |
seeing is not quite reality, right? There's something deeper underlying it,
link |
maybe computational. Now, if we were in a sort of safari park, and everything we
link |
were seeing was a hologram, and it was projected by the aliens or whatever, that to me is not much
link |
different than thinking we're inside of another universe, because we still can't see true reality,
link |
right? I mean, there's other explanations. It could be
link |
that the way they're communicating is just fundamentally different, that we're too dumb to
link |
understand the much better methods of communication they have. It could be, I mean, it's
link |
silly to say, but our own thoughts could be the methods by which they're communicating. Like,
link |
the place our ideas come from; writers talk about this, like the muse. Yeah.
link |
I mean, it sounds very kind of wild, but it could be that thoughts, some interactions
link |
with our mind that we think are originating from us, are actually something coming from other
link |
life forms elsewhere. Consciousness itself might be that. It could be, but I don't see any sensible
link |
argument as to why all of the alien species would behave in this way. Some of them
link |
will be more primitive, they will be close to our level. You know, there should be a
link |
whole sort of normal distribution of these things, right? Some would be aggressive, some would be
link |
curious, others would be very stoical and philosophical, because, you know,
link |
maybe they're a million years older than us. I mean, one
link |
alien civilization might be like that, communicating thoughts, but I don't
link |
see why, you know, the potentially hundreds there should be would be uniform in this way, right?
link |
It could be a violent dictatorship, that the alien civilizations
link |
that become successful gain the ability to be orders of magnitude more
link |
destructive. But of course, the sad thought is, well, either humans are very special, we took a lot of
link |
leaps that arrived at what it means to be human. There's a question there, which was the hardest,
link |
which was the most special. But also, if others have reached this level, and maybe many others
link |
have reached this level, what was the great filter that prevented them from going farther, to becoming
link |
a multiplanetary species or reaching out into the stars? And those are really important questions
link |
for us. Whether there's other alien civilizations out there or not, this is very
link |
useful for us to think about. If we destroy ourselves, how will we do it? And how easy is it to do?
link |
Yeah. Well, you know, these are big questions. And I've thought about these a lot. But the
link |
interesting thing is that if we're alone, that's somewhat comforting from the great filter
link |
perspective, because it probably means the great filters are behind us. And I'm pretty sure they
link |
are. So going back to your origin of life question, there are some incredible things that no one
link |
knows how they happened. Like obviously, the first life form from chemical soup, that seems pretty hard.
link |
But I would guess the multicellular step is hard. I wouldn't be that surprised if we saw single cell
link |
life forms elsewhere, bacteria type things. But multicellular life seems incredibly hard,
link |
that step of, you know, capturing mitochondria and then sort of using them as part of yourself,
link |
you know, when you've just eaten them. Would you say that's the biggest, the most,
link |
like, if you had to choose one, a sort of Hitchhiker's Guide to the Galaxy one sentence summary of, like,
link |
oh, those clever creatures did this, it would be the multicellular step.
link |
I think that was probably the one that's the biggest. I mean, there's a great book called
link |
Life Ascending: The Ten Great Inventions of Evolution by Nick Lane, and he speculates on ten of these,
link |
you know, what could be great filters. I think that's one. I think the advent of
link |
intelligence, and conscious intelligence, in order, you know, for us to be able to do science
link |
and things like that, is huge as well. I mean, that's only evolved once, as far as we know,
link |
in Earth history. So that would be a later candidate. But certainly for the
link |
early candidates, I think multicellular life forms is huge.
link |
By the way, it's interesting to ask you if you can hypothesize about what is the origin of
link |
intelligence. Is it that we started cooking meat over fire? Is it that we somehow figured out that
link |
we could be very powerful when we started collaborating? So cooperation between our ancestors
link |
so that we could overthrow the alpha male? I talked to Richard
link |
Wrangham, who thinks we're all just beta males who figured out how to collaborate to defeat
link |
the one, the dictator, the authoritarian alpha male that controlled the tribe.
link |
Is there another explanation? Was there a 2001: A Space Odyssey type of monolith that came down to
link |
Earth? Well, I think all of those things you suggested are good candidates: fire
link |
and cooking, right? So that's clearly important for energy, you know, energy efficiency,
link |
cooking our meat and then being able to be more efficient about eating it and
link |
consuming the energy. I think that's huge. And then utilizing fire and tools. I think
link |
you're right about the tribal cooperation aspects and probably language as part of that.
link |
Because probably that's what allowed us to outcompete Neanderthals and perhaps less cooperative
link |
species. So that may be the case. Toolmaking, spears, axes, I mean,
link |
I think it's pretty clear now that humans were responsible for a lot of the extinctions of
link |
megafauna, especially in the Americas when humans arrived. So you can imagine, once you
link |
discover tool usage, how powerful that would have been and how scary for animals. So I think all of
link |
those could have been explanations for it. You know, the interesting thing is that it's a bit
link |
like general intelligence too: it's very costly to begin with to have a brain, and especially
link |
a general purpose brain rather than a special purpose one, because of the energy our brains
link |
use. I think it's like 20% of the body's energy, and it's massive. And when you're playing
link |
chess, one of the funny things that we used to say is it's as much as a racing driver uses
link |
for a whole, you know, Formula One race, just playing a game of, you know, serious high level
link |
chess, which you wouldn't think, just sitting there, because the brain's using so much
link |
energy. So in order for an animal, an organism, to justify that, there has to be a huge payoff.
link |
And the problem with half a brain, or half, you know, intelligence, say the IQ of
link |
a monkey brain, is that it's not clear you can justify that evolutionarily until you get to
link |
the human level brain. So how do you do that jump? It's very difficult,
link |
which is why I think it's only been done once, from the sort of specialized brains that you see
link |
in animals to this sort of general purpose, Turing powerful brains that humans have.
link |
And that allows us to invent the modern world. And, you know, it takes a lot to cross
link |
that barrier. And I think we've seen the same with AI systems, which is that maybe until very
link |
recently, it's always been easier to craft a specific solution to a problem like chess,
link |
than it has been to build a general learning system that could potentially do many things.
link |
Because initially, that system will be way worse and less efficient than the specialized system.
link |
So one of the interesting quirks of the human mind of this evolved system is that it appears to be
link |
conscious. This thing that we don't quite understand, but it seems very, very special,
link |
its ability to have a subjective experience: that it feels like something to eat a cookie,
link |
the deliciousness of it, or to see a color, that kind of stuff. Do you think in order to solve
link |
intelligence, we also need to solve consciousness along the way? Do you think AI systems need to
link |
have consciousness in order to be truly intelligent?
link |
Yeah, we thought about this a lot actually. And I think that my guess is that consciousness and
link |
intelligence are double dissociable. So you can have one without the other both ways. And I think
link |
you can see that with consciousness in that, I think, some animals, pets, if you have a pet dog
link |
or something like that, and some of the higher animals like dolphins, things like that,
link |
have self awareness and are very sociable, seem to dream, a lot of the traits one would regard
link |
as being kind of conscious and self aware. But yet they're not that smart, right? So they're
link |
not that intelligent by, say, IQ standards or something like that.
link |
Yeah, it's also possible that our understanding of intelligence is flawed,
link |
like putting an IQ to it. Maybe the thing that a dog can do is actually go on a very far along
link |
the path of intelligence and we humans are just able to play chess and maybe write poems.
link |
Right. But if we go back to the idea of AGI and general intelligence, dogs are very specialized,
link |
right? Most animals are pretty specialized. They can be amazing at what they do,
link |
but they're like kind of elite sports people or something, right? So they do one thing
link |
extremely well because their entire brain is optimized.
link |
They have somehow convinced the entirety of the human population to feed them and service them.
link |
So in some way, they're controlling us. Yes, exactly. Well, we co evolved to some crazy
link |
degree, right? Including the way dogs, you know, even wag their tails and twitch their
link |
noses, right? We find it irresistibly cute. But I think you can also see intelligence on the other
link |
side, with artificial systems that are amazingly smart at certain things, like maybe
link |
playing Go and chess and other things, but they don't feel at all, in any shape or form, conscious
link |
in the way that you do to me or I do to you. And I think actually building AIs, these intelligent
link |
constructs, is one of the best ways to explore the mystery of consciousness, to break it down.
link |
Because we're going to have devices that are pretty smart at certain things or capable at
link |
certain things, but potentially won't have any semblance of self awareness or other things.
link |
And in fact, I would advocate, if there's a choice, building, in the first place,
link |
AI systems that are not conscious to begin with, that are just tools, until we understand them
link |
and their capabilities better. So on that topic, not as the CEO of DeepMind,
link |
just as a human being, let me ask you about this one particular piece of anecdotal evidence of the Google
link |
engineer who made a comment, or believed, that there's some aspect of a language model,
link |
the LaMDA language model, that exhibited sentience. So you said you believe there might be a
link |
responsibility to build systems that are not sentient. And this experience of a particular
link |
engineer, I think I'd love to get your general opinion on this kind of thing, but I think it
link |
will happen more and more and more, with not just engineers, but people out there who
link |
don't have an engineering background, starting to interact with increasingly intelligent systems. We
link |
anthropomorphize them, they start to have deep, impactful interactions with us in a way that
link |
we miss them when they're gone. And we sure as heck feel like they're living entities,
link |
self aware entities, and maybe even we project sentience onto them. So what's your thought about
link |
this particular system? Have you ever met a language model that's sentient?
link |
No. And what do you make of the case when someone feels that there's some element of sentience to
link |
this system? Yeah, so this is an interesting question and obviously a very fundamental one.
link |
So the first thing to say is I think that none of the systems we have today, I would say even have
link |
one iota of semblance of consciousness or sentience. That's my personal feeling, interacting with them
link |
every day. So I think it's way premature to be discussing what that engineer talked about.
link |
I think at the moment, it's more of a projection of the way our own minds work, which is to see
link |
sort of purpose and direction in almost anything. Our brains are trained to interpret
link |
agency, basically, in things, even inanimate things sometimes. And of course, with a language system,
link |
because language is so fundamental to intelligence that's going to be easy for us to anthropomorphize
link |
that. I mean, back in the day, even the first, you know, the dumbest sort of template chatbots ever,
link |
ELIZA and the ilk of the original chatbots back in the 60s, fooled some people under certain
link |
circumstances, right? It pretended to be a psychologist, so it would just basically rebound the
link |
same question you asked it back to you. And some people believed that. So this
link |
is why I think the Turing test is a little bit flawed as a formal test, because it depends on
link |
the sophistication of the judge, whether or not they are qualified to make that distinction.
link |
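The template trick described here, bouncing the user's own words back as a question, can be sketched in a few lines. This is a hypothetical toy illustration, not the original ELIZA program:

```python
import re

# Pronoun swaps applied word by word in a single pass, so "i" -> "you"
# and "you" -> "I" don't undo each other.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

def reflect(text: str) -> str:
    # Lowercase, split into words, and swap each pronoun for its mirror.
    words = re.findall(r"[a-z']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    # The whole "psychologist": rebound the user's statement as a question.
    return f"Why do you say {reflect(statement)}?"

print(respond("I am worried about my future"))
# → Why do you say you are worried about your future?
```

A few templates like this are enough to produce surprisingly convincing exchanges, which is exactly why a naive judge can be fooled.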
So I think we should talk to, you know, the top philosophers about this people like Daniel Dennett
link |
and David Chalmers and others who've obviously thought deeply about consciousness. Of course,
link |
consciousness itself hasn't been, well, there's no agreed definition. If I was to, you know,
link |
speculate about that, the working definition I like is,
link |
it's the way information feels when, you know, it gets processed. I think maybe Max Tegmark
link |
came up with that. I like that idea. I don't know if it helps us get towards any more operational
link |
thing, but I think it's a nice way of viewing it. I think we can obviously see from
link |
neuroscience certain prerequisites that are required: self awareness, I think, is a necessary
link |
but not sufficient component; this idea of a self and other; a set of preferences
link |
that are coherent over time; you know, maybe memory. These things are probably
link |
needed for a sentient or conscious being. But the difficult thing, I think, for
link |
us, and I think this is a really interesting philosophical debate, is when we get
link |
closer to AGI and, you know, much more powerful systems than we have today, how are we going to
link |
make this judgment? One way, the Turing test, is a sort of behavioral judgment:
link |
is the system exhibiting all the behaviors that a sentient human or sentient being would exhibit?
link |
Is it answering the right questions? Is it saying the right things? Is it indistinguishable from a
link |
human? And so on. But I think there's a second thing that makes us as humans regard each other
link |
as sentient, right? Why do we think this? And I debated this with Daniel Dennett.
link |
And I think there's a second reason that's often overlooked, which is that we're running on the
link |
same substrate, right? So if we're exhibiting the same behavior, more or less as humans,
link |
and we're running on the same, you know, carbon based biological substrate, the squishy, you know,
link |
a few pounds of flesh in our skulls, then the most parsimonious, I think, explanation is that
link |
you're feeling the same thing as I'm feeling, right? But we will never have that second part,
link |
the substrate equivalence with a machine, right? So we will have to only judge based on the behavior.
link |
And I think the substrate equivalence is a critical part of why we make assumptions that
link |
we're conscious. And in fact, even with animals, high level animals, why we think they might be,
link |
because they're exhibiting some of the behaviors we would expect from a sentient animal. And we
link |
know they're made of the same things, biological neurons. So we're going to have to come up with
link |
explanations or models of the gap between substrate differences between machines and humans
link |
to get anywhere beyond the behavioral. But to me, sort of the practical question is
link |
very interesting and very important. When you have millions, perhaps billions of people believing
link |
that you have a sentient AI, believing what that Google engineer believed, which I just see as an
link |
obvious, very near term future thing, certainly on the path to AGI, how does that change the world?
link |
What's the responsibility of the AI system to help those millions of people?
link |
And also, what's the ethical thing? Because you can make a lot of people happy by creating a meaningful,
link |
deep experience with a system that's faking it before it makes it. Who is to say what's the
link |
right thing to do? Should AI always be tools? Why are we constraining AI to always be tools as opposed
link |
to friends? I think, well, these are fantastic questions and also critical ones. And we've
link |
been thinking about this since the start of DeepMind and before that, because we planned for success,
link |
however remote that looked back in 2010. And we've always had sort of these ethical
link |
considerations as fundamental at DeepMind. And my current thinking on the language models and
link |
large models is they're not ready; we don't understand them well enough yet, in terms
link |
of analysis tools and guardrails, what they can and can't do and so on, to deploy them at scale.
link |
Because I think there are still big ethical questions, like, should an AI system always
link |
announce that it is an AI system to begin with? Probably yes. What do you do about answering
link |
those philosophical questions about the feelings people may have about AI systems,
link |
perhaps incorrectly attributed? So I think there's a whole bunch of research that needs to be done
link |
first before you can responsibly deploy these systems at scale. That would at least be my
link |
current position. Over time, I'm very confident we'll have those tools, like interpretability
link |
questions and analysis questions. And then with the ethical quandary, I think there,
link |
it's important to look beyond just science. That's why I think philosophy, social sciences,
link |
even theology, and other things like arts and humanities come into it: what does it mean
link |
to be human, and the spirit of being human, and to enhance that and the human condition, and allow
link |
us to experience things we could never experience before, and improve the overall human condition
link |
and humanity overall; get radical abundance, solve many scientific problems, solve disease.
link |
This is the amazing era I think we're heading into, if we do it right.
link |
But we've got to be careful. We've already seen with things like social media how dual use
link |
technologies can be misused, firstly, by bad actors or naive actors or crazy actors. So that's
link |
the set of just common or garden misuse of existing dual use technology. And then of course,
link |
there's an additional thing that has to be overcome with AI that eventually it may have its own
link |
agency. So it could be good or bad in itself. So I think these questions have to be approached
link |
very carefully using the scientific method, I would say, in terms of hypothesis generation,
link |
careful controlled testing, not live A/B testing out in the world. Because with powerful dual use
link |
technologies like AI, if something goes wrong, it may cause a lot of harm before you can fix it.
link |
It's not like an imaging app or game app where if something goes wrong, it's relatively easy to
link |
fix, and the harm is relatively small. So I think it comes with the usual cliche of, like, with a lot
link |
of power comes a lot of responsibility. And I think that's the case here with things like AI, given
link |
the enormous opportunity in front of us. And I think we need a lot of voices and as many inputs
link |
into things like the design of the systems and the values they should have and what goals should
link |
they be put to. I think as wide a group of voices as possible, beyond just the technologists, is needed
link |
to input into that and to have a say in that, especially when it comes to deployment of these
link |
systems, which is when the rubber really hits the road, it really affects the general person
link |
in the street rather than fundamental research. And that's why I say, I think as a first step,
link |
it would be better, if we have the choice, to build these systems as tools. And I'm not saying
link |
that they should never go beyond tools, because of course the potential is there for it to go
link |
way beyond just tools. But I think that would be a good first step in order to allow us to
link |
carefully experiment and understand what these things can do. So the leap between tool
link |
and sentient entity is one we should take very carefully. Yes. Let me ask a dark personal
link |
question. So you're one of the most brilliant people in the AI community. You're also one of the
link |
most kind and if I may say sort of loved people in the community, that said, creation of a super
link |
intelligent AI system would be one of the most powerful things in the world, tools or otherwise.
link |
And again, as the old saying goes, power corrupts and absolute power corrupts absolutely.
link |
You are likely to be one of the people, I would say probably the most likely person to be in the
link |
control of such a system. Do you think about the corrupting nature of power when you talk about
link |
these kinds of systems? As all dictators and people who have caused atrocities in the past
link |
always think they're doing good, but they don't do good, because the power has polluted their mind
link |
about what is good and what is evil. Do you think about this stuff, or do you just focus on language
link |
models? No, I think about it all the time. And I think, what are the defenses against that? I think
link |
one thing is to remain very grounded and sort of humble no matter what you do or achieve.
link |
And I try to do that. My best friends are still my set of friends from my undergraduate Cambridge
link |
days. My family and friends are very important. I've always tried to be a multidisciplinary
link |
person, and it helps to keep you humble, because no matter how good you are at one topic,
link |
someone will be better than you at that. And always relearning a new topic or new field again from scratch
link |
is very humbling. So for me, that's been biology over the last five years.
link |
Huge topic area, and I just love doing that, but it helps to keep you grounded and keeps you open
link |
minded. And then the other important thing is to have a really amazing set of
link |
people around you at your company or your organization who are also very ethical and
link |
grounded themselves and help to keep you that way. And then ultimately, just to answer your
link |
question, I hope we're going to be a big part of birthing AI and that being the greatest benefit
link |
to humanity of any tool or technology ever and getting us into a world of radical abundance and
link |
curing diseases and solving many of the big challenges we have in front of us and then
link |
ultimately help the ultimate flourishing of humanity to travel the stars and find those
link |
aliens if they are there. And if they're not there, find out why they're not there, what is
link |
going on here in the universe. This is all to come and that's what I've always dreamed about.
link |
But I think AI is too big an idea. There'll be a certain set of pioneers who get there first.
link |
I hope we're in the vanguard so we can influence how that goes. And I think it matters which cultures
link |
they come from and what values they have, the builders of AI systems. Because I think even
link |
though the AI system is going to learn for itself, most of its knowledge, there'll be a residue in
link |
the system of the culture and the values of the creators of that system. And there's interesting
link |
questions to discuss about that geopolitically, different cultures as we're in a more fragmented
link |
world than ever. Unfortunately, I think in terms of global cooperation, we see that in things
link |
like climate where we can't seem to get our act together globally to cooperate on these pressing
link |
matters. I hope that will change over time. Perhaps if we get to an era of radical abundance,
link |
we don't have to be so competitive anymore. Maybe we can be more cooperative
link |
if resources aren't so scarce. It's true that in terms of power corrupting and leading to
link |
destructive things, it seems that some of the atrocities of the past happen when there's a
link |
significant constraint on resources. I think that's the first thing. I don't think that's enough.
link |
I think scarcity is one thing that's led to competition, zero sum game thinking. I would
link |
like us to all be in a positive sum world. And I think for that, you have to remove scarcity.
link |
I don't think that's enough, unfortunately, to get world peace, because there's also other
link |
corrupting things like wanting power over people and this kind of stuff, which is not
link |
necessarily satisfied by just abundance. But I think it will help. But I think ultimately,
link |
AI is not going to be run by any one person, one organization. I think it should belong
link |
to the world, belong to humanity. And I think there'll be many ways this will happen. And
link |
ultimately, everybody should have a say in that.
link |
Do you have advice for young people in high school and college? Maybe if they're interested in
link |
AI or interested in having a big impact on the world, what they should do to have a career
link |
they can be proud of or to have a life they can be proud of?
link |
So I love giving talks to the next generation. What I say to them is actually two things. I think
link |
the most important things to learn about and to find out about when you're young are,
link |
first of all, two things. One is find your true passions. And I think
link |
the way to do that is to explore as many things as possible while you're young and
link |
you have the time and you can take those risks. I would also encourage people to look at
link |
finding the connections between things in a unique way. I think that's a really great way
link |
to find a passion. The second piece of advice is know yourself. So spend a lot of time
link |
understanding how you work best, like what are the optimal times to work? What are the optimal
link |
ways that you study? How do you deal with pressure? Sort of test yourself in various scenarios and
link |
try and improve your weaknesses, but also find out what your unique skills and strengths are
link |
and then hone those, because that's what will be your super value in the world later on. And if you
link |
can then combine those two things and find passions that you're genuinely excited about that intersect
link |
with what your unique strong skills are, then you're onto something incredible and I think
link |
you can make a huge difference in the world. So let me ask about know yourself. This is fun.
link |
Quick questions about a day in the life, the perfect day, the perfect productive day in
link |
the life of Demis Hassabis. Maybe these days there's a lot involved, so maybe a slightly younger
link |
Demis Hassabis, where you could focus on a single project maybe. How early do you wake up? Are you
link |
a night owl? Do you wake up early in the morning? What are some interesting habits? How many
link |
dozens of cups of coffee do you drink a day? What's the computer that you use? What's the setup?
link |
How many screens? What kind of keyboard are we talking, Emacs, Vim, or are we talking something
link |
more modern? There's a bunch of those questions. So maybe, day in the life, what's the perfect day
link |
involved? Well, these days, it's quite different from say 10, 20 years ago. Back 10, 20 years ago,
link |
it would have been a whole day of research, individual research or programming, doing some
link |
experiment, neuroscience, computer science experiment, reading lots of research papers,
link |
and then perhaps at night time, reading science fiction books or playing some games.
link |
But lots of focus, deep focused work on whether it's programming or reading research papers.
link |
Yes. So that would be lots of deep, focused work. These days, for the last sort of, I guess,
link |
five to 10 years, I've actually got quite a structure that works very well for me now,
link |
which is that I'm a complete night owl, always have been. So I optimize for that. So I basically
link |
do a normal day's work, get into work about 11 o'clock, and work until about seven
link |
in the office. And I will arrange back-to-back meetings for the entire time of that.
link |
And with as many people as possible. So that's the collaboration, management part of the
link |
day. Then I go home, spend time with the family and friends, have dinner, relax a little bit.
link |
And then I start what I call my second day of work around 10pm, 11pm.
link |
And that runs until the small hours of the morning, four or five in the morning,
link |
where I will do my thinking and reading research, writing research papers. Sadly,
link |
I don't have time to code anymore, but it's not efficient to do that these days,
link |
given the amount of time I have. But that's when I do the long stretches of
link |
thinking and planning. And then probably using email or other things, I would fire off a lot
link |
of things to my team to deal with the next morning. Having thought about it overnight:
link |
we should go for this project, or arrange this meeting the next day.
link |
When you think through a problem, are we talking about a sheet of paper? Is there some
link |
structured process?
link |
I still like pencil and paper best for working out things. But these days, it's just so
link |
efficient to read research papers just on the screen. I still often print them out,
link |
actually. I still prefer to mark things up. And I find it goes into the brain quicker
link |
and sticks in the brain better when you're using a physical pen and paper.
link |
So you take notes?
link |
I have lots of notes, electronic ones and also whole stacks of notebooks that I use at home.
link |
On some of these most challenging next steps, for example, stuff none of us know about that
link |
you're working on, you're thinking, there's some deep thinking required there. What is the
link |
right problem? What is the right approach? Because you're going to have to invest a huge
link |
amount of time for the whole team. They're going to have to pursue this thing. What's
link |
the right way to do it? Is RL going to work here or not?
link |
What's the right thing to try? What's the right benchmark to use? Do we need to construct a
link |
benchmark from scratch? All those kinds of things.
link |
Yes. So I think about all those kinds of things in the nighttime phase, but also much more.
link |
I've always found the quiet hours of the morning, when everyone's asleep and it's super quiet
link |
outside, I love that time. It's the golden hours, like between one and three in the morning.
link |
Put some music on, some inspiring music on and then think these deep thoughts. So that's when
link |
I would read my philosophy books. Spinoza is my recent favorite, Kant, all these things. I read
link |
about the great scientists of history, how they did things, how they thought about things. So that's
link |
when I do all my creative thinking. It's good. I think people recommend you do your sort of
link |
creative thinking in one block. The way I organize the day, I don't get interrupted, because
link |
obviously no one else is up at those times. So I can sort of get super deep and super into flow.
link |
The other nice thing about doing it at nighttime is if I'm really onto something, or I've got
link |
really deep into something, I can choose to extend it and I'll go on to six in the morning,
link |
whatever, and then I'll just pay for it the next day because I'll be a bit tired and I won't be
link |
my best. But that's fine. I can decide looking at my schedule the next day and given where I'm at
link |
with this particular thought or creative idea that I'm going to pay that cost the next day.
link |
So I think that's more flexible than morning people who do that. They get up at four in the
link |
morning. They can also do those golden hours then. But then their scheduled
link |
day starts at breakfast, 8AM, whenever they have their first meeting. And then it's hard,
link |
you have to reschedule your day if you're in flow. So I don't have to do that.
link |
Yeah, that could be a truly special thread of thoughts that you're passionate about.
link |
This is where some of the greatest ideas could potentially come is when you just
link |
lose yourself late into the night. And for the meetings, I mean, you're loading in really hard
link |
problems in a very short amount of time. So you have to do some kind of first principles thinking
link |
here. It's like, what's the problem? What's the state of things? What's the right next step?
link |
Yes, you have to get really good at context switching, which is one of the hardest things
link |
because especially as we do so many things, if you include all the scientific things we do,
link |
scientific fields we're working in, these are entire complex fields in themselves. And you
link |
have to sort of keep abreast of that. But I enjoy it. I've always been a sort of generalist
link |
in a way. And that's actually what happened with my games career after chess. One of the reasons
link |
I stopped playing chess was because I got into computers, but also I started realizing there
link |
were many other great games out there to play too. So I've always been that way inclined,
link |
multidisciplinary. And there's too many interesting things in the world to spend all your time just
link |
on one thing. So you mentioned Spinoza, so I've got to ask the big, ridiculously big question about life.
link |
What do you think is the meaning of this whole thing? Why are we humans here? You've already
link |
mentioned that perhaps the universe created us. Is that why you think we're here to understand
link |
how the universe works? Yeah, I think my answer to that would be, at least for the life I'm living,
link |
is to gain knowledge and understand the universe.
link |
That's what I think. I can't see any higher purpose than that. If you think back to the
link |
classical Greeks, the virtue of gaining knowledge, I think one of the few true virtues, is to
link |
understand the world around us and the context and humanity better. And I think if you do that,
link |
you become more compassionate and more understanding yourself and more tolerant. And all these,
link |
I think all these other things may flow from that. And to me, understanding the nature of reality,
link |
that is the biggest question. The colloquial way I sometimes put it is:
link |
What is really going on here? It's so mysterious. I feel like we're in some huge puzzle. But the
link |
world, the universe, seems to be structured in a way. Why is it structured
link |
in a way that science is even possible? The scientific method works, things are repeatable.
link |
It feels like it's almost structured in a way to be conducive to gaining knowledge.
link |
Why should computers even be possible? Isn't it amazing that computational, electronic
link |
devices are possible? And they're made of sand, one of the most common elements we have,
link |
silicon in the Earth's crust. They could have required diamond or something, such
link |
that we would have only had one computer. So a lot of things are slightly suspicious to me.
link |
It sure as heck sounds, this puzzle sure as heck sounds like something we talked about earlier,
link |
what it takes to design a game that's really fun to play for prolonged periods of time.
link |
And it does seem like this puzzle, like you mentioned, the more you learn about it,
link |
the more you realize how little you know. So it humbles you, but excites you by the possibility
link |
of learning more. It's one heck of a puzzle we've got going on here. So like I mentioned,
link |
of all the people in the world, you're very likely to be the one who creates the AGI system
link |
that achieves human-level intelligence and goes beyond it. So if you got a chance, and very well
link |
you could be the person that goes into the room with the system and have a conversation.
link |
Maybe you only get to ask one question. If you do, what question would you ask her?
link |
I would probably ask, what is the true nature of reality? I think that's the question. I don't
link |
know if I'd understand the answer because maybe it would be 42 or something like that. But that's
link |
the question I would ask. And then there'll be a deep sigh from the system, like, all right,
link |
how do I explain to this human? Exactly. All right, let me, I don't have time to explain. Maybe
link |
I'll draw you a picture. It is, I mean, how do you even begin to answer that question?
link |
Well, I think it would... What do you think the answer could possibly look like?
link |
I think it could start looking like more fundamental explanations of physics would be the
link |
beginning, a more careful specification of that, walking us through by the hand as to
link |
what one would do to maybe prove those things out. Maybe giving you glimpses of what things you
link |
totally missed in the physics of today. Exactly. Here's glimpses of, no, like there's a much
link |
more elaborate world or a much simpler world or something.
link |
A much deeper, maybe simpler explanation of things, right, than the standard model of physics,
link |
which we know doesn't work, but we still keep adding to. And that's how I think the beginning
link |
of an explanation would look. And it would start encompassing many of the mysteries that we have
link |
wondered about for thousands of years, like consciousness, dreaming, life, and gravity,
link |
all of these things. Yeah, giving us glimpses of explanations for those things. Yeah. Well,
link |
Demis, you're one of the special human beings in this giant puzzle of ours. And it's a huge
link |
honor that you would take a pause from the bigger puzzle to solve this small puzzle of a
link |
conversation with me today. It's truly an honor and a pleasure. Thank you so much.
link |
Thank you for having me. I really enjoyed it. Thanks, Lex. Thanks for listening to this conversation
link |
with Demis Hassabis. To support this podcast, please check out our sponsors in the description.
link |
And now, let me leave you with some words from Edsger Dijkstra. Computer science is no more about
link |
computers than astronomy is about telescopes. Thank you for listening and hope to see you next time.