
Risto Miikkulainen: Neuroevolution and Evolutionary Computation | Lex Fridman Podcast #177



link |
00:00:00.000
The following is a conversation with Risto Miikkulainen,
link |
00:00:02.860
a computer scientist at University of Texas at Austin
link |
00:00:05.980
and Associate Vice President
link |
00:00:07.860
of Evolutionary Artificial Intelligence at Cognizant.
link |
00:00:11.460
He specializes in evolutionary computation,
link |
00:00:14.420
but also many other topics in artificial intelligence,
link |
00:00:17.620
cognitive science, and neuroscience.
link |
00:00:19.900
Quick mention of our sponsors,
link |
00:00:21.900
the Jordan Harbinger Show, Grammarly, Belcampo, and Indeed.
link |
00:00:26.600
Check them out in the description to support this podcast.
link |
00:00:30.580
As a side note, let me say that nature inspired algorithms
link |
00:00:34.140
from ant colony optimization to genetic algorithms
link |
00:00:36.820
to cellular automata to neural networks
link |
00:00:39.580
have always captivated my imagination,
link |
00:00:41.900
not only for their surprising power
link |
00:00:43.940
in the face of long odds,
link |
00:00:45.580
but because they always opened up doors
link |
00:00:47.780
to new ways of thinking about computation.
link |
00:00:50.700
It does seem that in the long arc of computing history,
link |
00:00:54.180
running toward biology, not running away from it
link |
00:00:57.560
is what leads to long term progress.
link |
00:01:00.420
This is the Lex Fridman podcast,
link |
00:01:03.220
and here is my conversation with Risto Miikkulainen.
link |
00:01:07.720
If we ran the Earth experiment,
link |
00:01:10.200
this fun little experiment we're on,
link |
00:01:12.500
over and over and over and over a million times
link |
00:01:15.220
and watch the evolution of life as it pans out,
link |
00:01:19.180
how much variation in the outcomes of that evolution
link |
00:01:21.940
do you think we would see?
link |
00:01:23.180
Now, we should say that you are a computer scientist.
link |
00:01:27.380
That's actually not such a bad question
link |
00:01:29.380
for a computer scientist,
link |
00:01:30.380
because we are building simulations of these things,
link |
00:01:34.020
and we are simulating evolution,
link |
00:01:36.220
and that's a difficult question to answer in biology,
link |
00:01:38.460
but we can build a computational model
link |
00:01:40.700
and run it a million times and actually answer that question.
link |
00:01:43.540
How much variation do we see when we simulate it?
link |
00:01:47.000
And that's a little bit beyond what we can do today,
link |
00:01:50.620
but I think that we will see some regularities,
link |
00:01:54.140
and it took evolution also a really long time
link |
00:01:56.540
to get started,
link |
00:01:57.720
and then things accelerated really fast towards the end.
link |
00:02:02.180
But there are things that need to be discovered,
link |
00:02:04.220
and they probably will be over and over again,
link |
00:02:06.460
like manipulation of objects,
link |
00:02:10.060
opposable thumbs,
link |
00:02:11.140
and also some way to communicate,
link |
00:02:16.020
maybe orally, like when you have speech,
link |
00:02:18.220
it might be some other kind of sounds,
link |
00:02:20.820
and decision making, but also vision.
link |
00:02:24.060
The eye has evolved many times.
link |
00:02:26.220
Various vision systems have evolved.
link |
00:02:28.180
So we would see those kinds of solutions,
link |
00:02:30.740
I believe, emerge over and over again.
link |
00:02:32.900
They may look a little different,
link |
00:02:34.260
but they get the job done.
link |
00:02:36.300
The really interesting question is,
link |
00:02:37.500
would we have primates?
link |
00:02:38.980
Would we have humans or something that resembles humans?
link |
00:02:43.620
And would that be an apex of evolution after a while?
link |
00:02:47.020
We don't know where we're going from here,
link |
00:02:48.460
but we certainly see a lot of tool use
link |
00:02:51.300
and building, constructing our environment.
link |
00:02:54.060
So I think that we will get that.
link |
00:02:56.380
We get some evolution producing,
link |
00:02:58.740
some agents that can do that,
link |
00:03:00.860
manipulate the environment and build.
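The "run the Earth experiment a million times" idea can be sketched computationally. This is a toy, hypothetical example, not any real artificial-life system: a simple (1+1) evolutionary process is replayed with different random seeds, and the variation in outcomes across runs is measured.

```python
import random

def evolve(seed, generations=200):
    """One toy evolutionary run: maximize the number of 1s in a bit string.

    Returns the best fitness reached, so replaying with different seeds
    lets us measure how much outcomes vary across runs of 'evolution'.
    """
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(50)]
    fitness = sum(genome)
    for _ in range(generations):
        child = list(genome)
        i = rng.randrange(len(child))
        child[i] ^= 1                     # point mutation: flip one bit
        if sum(child) >= fitness:         # keep the child if it's no worse
            genome, fitness = child, sum(child)
    return fitness

# "Replay the experiment" many times with different seeds
outcomes = [evolve(seed) for seed in range(30)]
mean = sum(outcomes) / len(outcomes)
spread = max(outcomes) - min(outcomes)
```

Even in this caricature, most runs converge to similar high fitness (the regularities), while the exact endpoint and the path there vary from seed to seed.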
link |
00:03:02.540
What do you think is special about humans?
link |
00:03:04.140
Like if you were running the simulation
link |
00:03:06.100
and you observe humans emerge,
link |
00:03:08.700
like these tool makers,
link |
00:03:09.780
they start a fire and all this stuff,
link |
00:03:11.060
start running around, building buildings,
link |
00:03:12.620
and then running for president and all those kinds of things.
link |
00:03:15.600
What would be, how would you detect that?
link |
00:03:19.180
Cause you're like really busy
link |
00:03:20.380
as the creator of this evolutionary system.
link |
00:03:23.180
So you don't have much time to observe,
link |
00:03:25.700
like detect if any cool stuff came up, right?
link |
00:03:28.940
How would you detect humans?
link |
00:03:31.260
Well, you are running the simulation.
link |
00:03:33.300
So you also put in visualization
link |
00:03:37.480
and measurement techniques there.
link |
00:03:39.660
So if you are looking for certain things like communication,
link |
00:03:44.660
you'll have detectors to find out whether that's happening,
link |
00:03:48.020
even if it's a large simulation.
link |
00:03:50.140
And I think that that's what we would do.
link |
00:03:53.520
We know roughly what we want,
link |
00:03:56.380
intelligent agents that communicate, cooperate, manipulate,
link |
00:04:01.200
and we would build detectors
link |
00:04:03.180
and visualizations of those processes.
link |
00:04:05.580
Yeah, and there's a lot of,
link |
00:04:08.060
we'd have to run it many times
link |
00:04:09.540
and we have plenty of time to figure out
link |
00:04:11.940
how we detect the interesting things.
link |
00:04:13.540
But also, I think we do have to run it many times
link |
00:04:16.680
because we don't quite know what shape those will take
link |
00:04:21.140
and our detectors may not be perfect for them
link |
00:04:23.860
at the beginning.
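As a rough illustration of the kind of detector being described, here is a hypothetical sketch: it scans a simulation's event log for one agent's signal being followed shortly by a different agent's response. The event format, event kinds, and time window are invented for the example, not taken from any real simulator.

```python
# Events are (time, agent_id, kind) tuples. We flag possible
# communication when one agent's SIGNAL is followed, within a short
# window, by another agent's RESPONSE.

def detect_communication(events, window=5):
    """Return (signaller, responder) pairs that look like communication."""
    detected = []
    for t_sig, signaller, kind in events:
        if kind != "SIGNAL":
            continue
        for t, agent, k in events:
            if (k == "RESPONSE" and agent != signaller
                    and t_sig < t <= t_sig + window):
                detected.append((signaller, agent))
    return detected

log = [
    (1, "A", "SIGNAL"),
    (3, "B", "RESPONSE"),   # B responds soon after A's signal: detected
    (20, "C", "RESPONSE"),  # too long after any signal: ignored
]
hits = detect_communication(log)
```

As the conversation notes, such detectors are imperfect: communication that takes an unanticipated shape (a different event kind, a longer delay) would slip past this window.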
link |
00:04:24.700
Well, that seems really difficult to build a detector
link |
00:04:27.420
of intelligence or intelligent communication.
link |
00:04:32.740
Sort of, if we take an alien perspective,
link |
00:04:35.720
observing earth, are you sure that they would be able
link |
00:04:39.280
to detect humans as the special thing?
link |
00:04:41.340
Wouldn't they be already curious about other things?
link |
00:04:43.780
There's way more insects by body mass, I think,
link |
00:04:47.060
than humans by far, and colonies.
link |
00:04:50.860
Obviously, dolphins are the most intelligent creatures
link |
00:04:53.900
on earth, we all know this.
link |
00:04:55.220
So it could be the dolphins that they detect.
link |
00:04:58.380
It could be the rockets that we seem to be launching.
link |
00:05:00.860
That could be the intelligent creature they detect.
link |
00:05:03.780
It could be something else: trees.
link |
00:05:06.660
Trees have been here a long time.
link |
00:05:07.960
I just learned that sharks have been here
link |
00:05:10.580
400 million years and that's longer
link |
00:05:13.260
than trees have been here.
link |
00:05:15.020
So maybe it's the sharks, they go by age.
link |
00:05:17.420
Like there's a persistent thing.
link |
00:05:19.020
Like if you survive long enough,
link |
00:05:20.820
especially through the mass extinctions,
link |
00:05:22.380
that could be the thing your detector is detecting.
link |
00:05:25.420
Humans have been here for a very short time
link |
00:05:27.900
and we're just creating a lot of pollution,
link |
00:05:30.660
but so are the other creatures.
link |
00:05:31.940
So I don't know, do you think you'd be able
link |
00:05:34.700
to detect humans?
link |
00:05:35.740
Like how would you go about detecting
link |
00:05:37.700
in the computational sense?
link |
00:05:39.160
Maybe we can leave humans behind.
link |
00:05:40.980
In the computational sense, detect interesting things.
link |
00:05:46.180
Do you basically have to have a strict objective function
link |
00:05:48.780
by which you measure the performance of a system
link |
00:05:51.860
or can you find curiosities and interesting things?
link |
00:05:55.420
Yeah, well, I think that the first measurement
link |
00:05:59.540
would be to detect how much of an effect
link |
00:06:02.300
you can have in your environment.
link |
00:06:03.620
So if you look around, we have cities
link |
00:06:06.940
and that is constructed environments.
link |
00:06:08.820
And that's where a lot of people live, most people live.
link |
00:06:11.980
So that would be a good sign of intelligence
link |
00:06:15.140
that you don't just live in an environment,
link |
00:06:17.940
but you construct it to your liking.
link |
00:06:20.260
And that's something pretty unique.
link |
00:06:21.900
I mean, there are certainly birds that build nests
link |
00:06:24.260
but they don't quite build cities.
link |
00:06:25.520
Termites build mounds, and ants and things like that.
link |
00:06:29.100
But the complexity of human-constructed cities,
link |
00:06:32.120
I think would stand out even to an external observer.
link |
00:06:34.940
Of course, that's what a human would say.
link |
00:06:36.940
Yeah, and you know, you can certainly say
link |
00:06:39.780
that sharks are really smart
link |
00:06:41.820
because they've been around so long
link |
00:06:43.220
and they haven't destroyed their environment,
link |
00:06:45.000
which humans are about to do,
link |
00:06:46.540
which is not a very smart thing.
link |
00:06:48.860
But we'll get over it, I believe.
link |
00:06:52.000
And we can get over it by doing some construction
link |
00:06:55.220
that actually is benign
link |
00:06:56.780
and maybe even enhances the resilience of nature.
link |
00:07:02.440
So you mentioned the simulation that we run over and over
link |
00:07:05.460
might start, it's a slow start.
link |
00:07:08.900
So do you think how unlikely, first of all,
link |
00:07:12.560
I don't know if you think about this kind of stuff,
link |
00:07:14.140
but how unlikely is step number zero,
link |
00:07:18.140
which is the springing up,
link |
00:07:20.880
like the origin of life on earth?
link |
00:07:22.940
And second, how unlikely is the,
link |
00:07:27.940
anything interesting happening beyond that?
link |
00:07:30.460
So like the start that creates
link |
00:07:34.320
all the rich complexity that we see on earth today.
link |
00:07:36.700
Yeah, there are people who are working
link |
00:07:38.580
on exactly that problem from primordial soup.
link |
00:07:42.260
How do you actually get self replicating molecules?
link |
00:07:45.820
And they are very close.
link |
00:07:48.740
With a little bit of help, you can make that happen.
link |
00:07:51.900
So of course we know what we want,
link |
00:07:55.660
so they can set up the conditions
link |
00:07:57.120
and try out conditions that are conducive to that.
link |
00:08:00.780
For evolution to discover that, that took a long time.
link |
00:08:04.080
For us to recreate it probably won't take that long.
link |
00:08:07.660
And the next steps from there,
link |
00:08:10.860
I think also with some handholding,
link |
00:08:12.860
I think we can make that happen.
link |
00:08:15.920
But with evolution, what was really fascinating
link |
00:08:18.500
was eventually the runaway evolution of the brain
link |
00:08:22.620
that created humans and created,
link |
00:08:24.420
well, also other higher animals,
link |
00:08:27.220
that that was something that happened really fast.
link |
00:08:29.700
And that's a big question.
link |
00:08:32.380
Is that something replicable?
link |
00:08:33.700
Is that something that can happen?
link |
00:08:35.780
And if it happens, does it go in the same direction?
link |
00:08:39.180
That is a big question to ask.
link |
00:08:40.780
Even in computational terms,
link |
00:08:42.980
I think that it's relatively possible to
link |
00:08:47.340
create an experiment where we look at the primordial soup
link |
00:08:49.820
and the first couple of steps
link |
00:08:51.260
of multicellular organisms even.
link |
00:08:53.460
But to get something as complex as the brain,
link |
00:08:57.380
we don't quite know the conditions for that.
link |
00:08:59.660
And how do you even get started
link |
00:09:01.420
and whether we can get this kind of runaway evolution
link |
00:09:03.420
happening?
link |
00:09:05.820
From a detector perspective,
link |
00:09:09.100
if we're observing this evolution,
link |
00:09:10.780
what do you think is the brain?
link |
00:09:12.360
What do you think is the, let's say, what is intelligence?
link |
00:09:15.940
So in terms of the thing that makes humans special,
link |
00:09:18.340
we seem to be able to reason,
link |
00:09:21.060
we seem to be able to communicate.
link |
00:09:23.500
But the core of that is this something
link |
00:09:26.020
in the broad category we might call intelligence.
link |
00:09:29.620
So if you put your computer scientist hat on,
link |
00:09:33.500
are there favorite ways you like to think about
link |
00:09:37.540
that question of what is intelligence?
link |
00:09:41.300
Well, my goal is to create agents that are intelligent.
link |
00:09:48.300
Not to define what it is.
link |
00:09:49.580
And that is a way of defining it.
link |
00:09:52.700
And that means that it's some kind of an object
link |
00:09:57.700
or a program that has limited sensory
link |
00:10:02.980
and effector capabilities interacting with the world.
link |
00:10:08.220
And then also a mechanism for making decisions.
link |
00:10:11.700
So with limited abilities like that, can it survive?
link |
00:10:17.220
Survival is the simplest goal,
link |
00:10:18.780
but you could also give it other goals.
link |
00:10:20.500
Can it multiply?
link |
00:10:21.380
Can it solve problems that you give it?
link |
00:10:24.420
And that is quite a bit less than human intelligence.
link |
00:10:27.220
There are, animals would be intelligent, of course,
link |
00:10:29.740
with that definition.
link |
00:10:31.100
And you might even have some other forms of life.
link |
00:10:35.000
So intelligence in that sense is a survival skill
link |
00:10:41.220
given resources that you have and using your resources
link |
00:10:44.580
so that you will stay around.
link |
00:10:47.860
Do you think death, mortality is fundamental to an agent?
link |
00:10:53.020
So like there's, I don't know if you're familiar,
link |
00:10:55.060
there's a philosopher named Ernest Becker
link |
00:10:56.860
who wrote The Denial of Death and his whole idea.
link |
00:11:01.220
And there's folks, psychologists, cognitive scientists
link |
00:11:04.020
that work on terror management theory.
link |
00:11:06.600
And they think that one of the special things about humans
link |
00:11:10.020
is that we're able to sort of foresee our death, right?
link |
00:11:13.940
We can realize not just as animals do,
link |
00:11:16.620
sort of constantly fear in an instinctual sense,
link |
00:11:19.420
respond to all the dangers that are out there,
link |
00:11:21.600
but like understand that this ride ends eventually.
link |
00:11:25.180
And that in itself is the force behind
link |
00:11:29.780
all of the creative efforts of human nature.
link |
00:11:32.220
That's the philosophy.
link |
00:11:33.620
I think that makes sense, a lot of sense.
link |
00:11:35.260
I mean, animals probably don't think of death the same way,
link |
00:11:38.660
but humans know that your time is limited
link |
00:11:40.660
and you wanna make it count.
link |
00:11:43.180
And you can make it count in many different ways,
link |
00:11:44.980
but I think that has a lot to do with creativity
link |
00:11:47.740
and the need for humans to do something
link |
00:11:50.060
beyond just surviving.
link |
00:11:51.720
And now going from that simple definition
link |
00:11:54.520
to something that's the next level,
link |
00:11:56.360
I think that that could be the second level of definition,
link |
00:12:00.560
that intelligence means something,
link |
00:12:03.280
that you do something that stays behind you,
link |
00:12:05.200
that's more than your existence.
link |
00:12:09.160
You create something that is useful for others,
link |
00:12:12.280
is useful in the future, not just for yourself.
link |
00:12:15.200
And I think that's the nicest definition of intelligence
link |
00:12:17.800
within a next level.
link |
00:12:19.880
And it's also nice because it doesn't require
link |
00:12:23.400
that they are humans or biological.
link |
00:12:25.160
They could be artificial agents that are intelligent.
link |
00:12:28.160
They could achieve those kinds of goals.
link |
00:12:30.280
So for a particular agent, the ripple effects of their existence
link |
00:12:35.600
on the entirety of the system is significant.
link |
00:12:38.480
So like they leave a trace where there's like a,
link |
00:12:41.720
yeah, like ripple effects.
link |
00:12:43.840
But see, then you go back to the butterfly
link |
00:12:46.000
with the flap of a wing and then you can trace
link |
00:12:48.440
a lot of like nuclear wars
link |
00:12:50.800
and all the conflicts of human history,
link |
00:12:52.680
somehow connected to that one butterfly
link |
00:12:54.540
that created all of the chaos.
link |
00:12:56.240
So maybe that's not, maybe that's a very poetic way
link |
00:13:00.680
to think that that's something we humans
link |
00:13:03.400
in a human centric way wanna hope we have this impact.
link |
00:13:09.040
Like that is the secondary effect of our intelligence.
link |
00:13:12.160
We've had a long-lasting impact on the world,
link |
00:13:14.540
but maybe the entirety of physics in the universe
link |
00:13:20.380
has very long-lasting effects.
link |
00:13:22.700
Sure, but you can also think of it.
link |
00:13:25.600
What if, like in It's a Wonderful Life, you're not here?
link |
00:13:29.980
Will somebody else do this?
link |
00:13:31.600
Is it something that you actually contributed
link |
00:13:34.560
because you had something unique to contribute?
link |
00:13:36.480
That contribution, that's a pretty high bar though.
link |
00:13:39.440
Uniqueness, yeah.
link |
00:13:40.680
So, you have to be Mozart or something to actually
link |
00:13:45.080
reach that level that nobody would have developed that,
link |
00:13:47.800
but other people might have solved this equation
link |
00:13:51.800
if you didn't do it, but also within limited scope.
link |
00:13:55.920
I mean, during your lifetime or next year,
link |
00:14:00.140
you could contribute something unique
link |
00:14:02.500
that other people did not see.
link |
00:14:04.240
And then that could change the way things move forward
link |
00:14:09.240
for a while.
link |
00:14:11.320
So, I don't think we have to be Mozart
link |
00:14:14.000
to be called intelligent,
link |
00:14:15.320
but we have this local effect that is changing.
link |
00:14:18.240
If you weren't there, that would not have happened.
link |
00:14:20.120
And it's a positive effect, of course,
link |
00:14:21.480
you want it to be a positive effect.
link |
00:14:23.200
Do you think it's possible to engineer
link |
00:14:25.080
into computational agents, a fear of mortality?
link |
00:14:30.560
Like, does that make any sense?
link |
00:14:35.440
So, there's a very trivial thing where it's like,
link |
00:14:38.200
you could just code in a parameter,
link |
00:14:39.680
which is how long the life ends,
link |
00:14:41.320
but more of a fear of mortality,
link |
00:14:45.440
like awareness of the way that things end
link |
00:14:48.920
and somehow encoding a complex representation of that fear,
link |
00:14:54.800
which is like, maybe as it gets closer,
link |
00:14:56.960
you become more terrified.
link |
00:14:58.840
I mean, there seems to be something really profound
link |
00:15:01.600
about this fear that's not currently encodable
link |
00:15:04.820
in a trivial way into our programs.
link |
00:15:08.200
Well, I think you're referring to the emotion of fear,
link |
00:15:11.840
something, because we have cognitively,
link |
00:15:13.520
we know that we have limited lifespan
link |
00:15:16.300
and most of us cope with it by just,
link |
00:15:18.020
hey, that's what the world is like
link |
00:15:19.640
and I make the most of it.
link |
00:15:20.560
But sometimes you can have like a fear that's not healthy,
link |
00:15:26.200
that paralyzes you, that you can't do anything.
link |
00:15:29.300
And somewhere in between there,
link |
00:15:31.960
not caring at all and getting paralyzed because of fear
link |
00:15:36.160
is a normal response,
link |
00:15:37.280
which is a little bit more than just logic
link |
00:15:39.440
and it's emotion.
link |
00:15:41.440
So now the question is, what good are emotions?
link |
00:15:43.680
I mean, they are quite complex
link |
00:15:46.160
and there are multiple dimensions of emotions
link |
00:15:48.480
and they probably do serve a survival function,
link |
00:15:53.520
heightened focus, for instance.
link |
00:15:55.840
And fear of death might be a really good emotion
link |
00:15:59.680
when you are in danger, that you recognize it,
link |
00:16:02.640
even if it's not logically necessarily easy to derive
link |
00:16:06.360
and you don't have time for that logical deduction,
link |
00:16:10.400
you may be able to recognize the situation is dangerous
link |
00:16:12.720
and this fear kicks in and you all of a sudden perceive
link |
00:16:16.260
the facts that are important for that.
link |
00:16:18.480
And I think that generally is the role of emotions.
link |
00:16:21.040
It allows you to focus on what's relevant for your situation.
link |
00:16:24.540
And maybe fear of death plays the same kind of role,
link |
00:16:27.800
but if it consumes you and it's something you think about
link |
00:16:30.600
in normal life when you don't have to,
link |
00:16:32.080
then it's not healthy and then it's not productive.
link |
00:16:34.460
Yeah, but it's fascinating to think
link |
00:16:36.640
how to incorporate emotion into a computational agent.
link |
00:16:41.760
It almost seems like a silly statement to make,
link |
00:16:45.120
but it perhaps seems silly because we have
link |
00:16:48.280
such a poor understanding of the mechanism of emotion,
link |
00:16:51.720
of fear, of, I think at the core of it
link |
00:16:56.720
is another word that we know nothing about,
link |
00:17:00.280
but say a lot, which is consciousness.
link |
00:17:03.800
Do you ever in your work, or like maybe on a coffee break,
link |
00:17:08.560
think about what the heck is this thing consciousness
link |
00:17:11.600
and is it at all useful in our thinking about AI systems?
link |
00:17:14.960
Yes, it is an important question.
link |
00:17:18.280
You can build representations and functions,
link |
00:17:23.120
I think into these agents that act like emotions
link |
00:17:26.720
and consciousness perhaps.
link |
00:17:28.620
So I mentioned emotions being something
link |
00:17:31.920
that allow you to focus and pay attention,
link |
00:17:34.200
filter out what's important.
link |
00:17:35.360
Yeah, you can have that kind of a filter mechanism
link |
00:17:38.280
and it puts you in a different state.
link |
00:17:40.320
Your computation is in a different state.
link |
00:17:42.080
Certain things don't really get through
link |
00:17:43.560
and others are heightened.
link |
00:17:46.040
Now you label that box emotion.
link |
00:17:48.460
I don't know if that means it's an emotion,
link |
00:17:49.840
but it acts very much like we understand
link |
00:17:52.520
what emotions are.
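The "emotion as filter" mechanism just described can be sketched as a tiny gating function: the agent's current emotional state re-weights its percepts, so some don't really get through and others are heightened. The states, percept kinds, and weights below are illustrative assumptions, not a model from this research.

```python
# Each emotional state maps percept kinds to salience multipliers.
EMOTION_FILTERS = {
    "calm": {"food": 1.0, "ally": 1.0, "threat": 1.0},
    "fear": {"food": 0.2, "ally": 0.5, "threat": 3.0},  # threats heightened
}

def perceive(percepts, emotion):
    """Re-weight raw percept salience according to the current emotion."""
    weights = EMOTION_FILTERS[emotion]
    return {kind: salience * weights.get(kind, 1.0)
            for kind, salience in percepts.items()}

raw = {"food": 0.8, "ally": 0.4, "threat": 0.5}
afraid = perceive(raw, "fear")   # the threat percept now dominates
```

The point of the sketch is only that the same raw input yields a different computational state under a different "emotion" label, which is the filtering behavior described above.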
link |
00:17:54.240
And we actually did some work like that,
link |
00:17:56.900
modeling hyenas who were trying to steal a kill from lions,
link |
00:18:02.240
which happens in Africa.
link |
00:18:03.480
I mean, hyenas are quite intelligent,
link |
00:18:05.960
but not really intelligent.
link |
00:18:08.280
And they have this behavior
link |
00:18:11.560
that's more complex than anything else they do.
link |
00:18:14.040
They can band together, if there's about 30 of them or so,
link |
00:18:17.680
they can coordinate their effort
link |
00:18:20.040
so that they push the lions away from a kill.
link |
00:18:22.560
Even though the lions are so strong
link |
00:18:24.080
that they could kill a hyena by striking with a paw.
link |
00:18:28.440
But when they work together and precisely time this attack,
link |
00:18:31.640
the lions will leave and they get the kill.
link |
00:18:34.080
And probably there are some states
link |
00:18:38.880
like emotions that the hyenas go through.
link |
00:18:40.840
The first, they call for reinforcements.
link |
00:18:43.640
They really want that kill, but there's not enough of them.
link |
00:18:45.660
So they vocalize and there's more people,
link |
00:18:48.480
more hyenas that come around.
link |
00:18:50.920
And then they have two emotions.
link |
00:18:52.280
They're very afraid of the lion, so they want to stay away,
link |
00:18:55.600
but they also have a strong affiliation between each other.
link |
00:18:59.800
And then this is the balance of the two emotions.
link |
00:19:02.140
And also, yes, they also want the kill.
link |
00:19:04.840
So they're both repelled and attracted.
link |
00:19:07.320
But then this affiliation eventually is so strong
link |
00:19:10.600
that when they move, they move together,
link |
00:19:12.240
they act as a unit and they can perform that function.
link |
00:19:15.360
So there's an interesting behavior
link |
00:19:18.400
that seems to depend on these emotions strongly
link |
00:19:21.360
and makes it possible to coordinate the actions.
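The fear/affiliation balance in the hyena story can be caricatured numerically: fear of the lions is a fixed repelling drive, affiliation grows with the number of allies, and the group only moves on the kill once the balance tips. The weights and the roughly-30-hyena tipping point are illustrative assumptions, not parameters from the actual hyena model.

```python
def hyena_drive(n_hyenas, fear=1.0, affiliation_per_ally=0.035):
    """Net drive toward the kill: affiliation scales with group size,
    fear of the lions is constant."""
    return affiliation_per_ally * n_hyenas - fear

def group_attacks(n_hyenas):
    """The mob approaches only when affiliation outweighs fear."""
    return hyena_drive(n_hyenas) > 0

# Alone or in small groups, fear dominates; around 30 the balance tips,
# matching the observed mobbing size.
sizes = [1, 10, 30]
decisions = [group_attacks(n) for n in sizes]
```

The interesting property is that a qualitative behavioral switch (mobbing) falls out of two continuously varying drives, with no explicit "attack at 30" rule coded anywhere.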
link |
00:19:24.280
And I think a critical aspect of that,
link |
00:19:28.880
the way you're describing is emotion there
link |
00:19:30.560
is a mechanism of social communication,
link |
00:19:34.320
of a social interaction.
link |
00:19:35.960
Maybe humans wouldn't even be that intelligent
link |
00:19:40.520
or most things we think of as intelligent
link |
00:19:42.440
wouldn't be that intelligent without the social component
link |
00:19:45.760
of interaction.
link |
00:19:47.040
Maybe much of our intelligence
link |
00:19:48.960
is essentially an outgrowth of social interaction.
link |
00:19:52.840
And maybe for the creation of intelligent agents,
link |
00:19:55.680
we have to be creating fundamentally social systems.
link |
00:19:58.920
Yes, I strongly believe that's true.
link |
00:20:01.140
And yes, the communication is multifaceted.
link |
00:20:05.480
I mean, they vocalize and call for friends,
link |
00:20:08.080
but they also rub against each other and they push
link |
00:20:11.160
and they do all kinds of gestures and so on.
link |
00:20:14.280
So they don't act alone.
link |
00:20:15.720
And I don't think people act alone very much either,
link |
00:20:18.360
at least normally, most of the time.
link |
00:20:21.120
And social systems are so strong for humans
link |
00:20:25.040
that I think we build everything
link |
00:20:26.800
on top of these kinds of structures.
link |
00:20:28.320
And one interesting theory around that,
link |
00:20:30.880
for instance, about language origins:
link |
00:20:32.520
where did language come from?
link |
00:20:36.200
And it's a plausible theory that first came social systems,
link |
00:20:41.320
that you have different roles in a society.
link |
00:20:45.180
And then those roles are exchangeable,
link |
00:20:47.400
that I scratch your back, you scratch my back,
link |
00:20:49.960
we can exchange roles.
link |
00:20:51.480
And once you have the brain structures
link |
00:20:53.480
that allow you to understand actions
link |
00:20:54.960
in terms of roles that can be changed,
link |
00:20:57.280
that's the basis for language, for grammar.
link |
00:20:59.920
And now you can start using symbols
link |
00:21:02.040
to refer to objects in the world.
link |
00:21:04.800
And you have this flexible structure.
link |
00:21:06.760
So there's a social structure
link |
00:21:09.360
that's fundamental for language to develop.
link |
00:21:12.460
Now, again, then you have language,
link |
00:21:13.960
you can refer to things that are not here right now.
link |
00:21:17.400
And that allows you to then build all the good stuff
link |
00:21:20.920
about planning, for instance, and building things and so on.
link |
00:21:24.640
So yeah, I think that very strongly humans are social
link |
00:21:28.280
and that gives us the ability to structure the world.
link |
00:21:33.000
But also as a society, we can do so much more
link |
00:21:35.520
because one person does not have to do everything.
link |
00:21:38.000
You can have different roles
link |
00:21:39.800
and together achieve a lot more.
link |
00:21:41.720
And that's also something
link |
00:21:42.880
we see in computational simulations today.
link |
00:21:44.840
I mean, we have multi-agent systems that can perform tasks.
link |
00:21:47.800
This fascinating demonstration, Marco Dorigo,
link |
00:21:50.640
I think it was, these little robots
link |
00:21:53.160
that had to navigate through an environment
link |
00:21:54.760
and there were things that are dangerous,
link |
00:21:57.700
like maybe a big chasm or some kind of groove, a hole,
link |
00:22:02.160
and they could not get across it.
link |
00:22:03.560
But if they grab each other with their gripper,
link |
00:22:06.440
they formed a robot that was much longer under the team
link |
00:22:09.880
and this way they could get across that.
link |
00:22:12.320
So this is a great example of how together
link |
00:22:15.780
we can achieve things we couldn't otherwise.
link |
00:22:17.400
Like the hyenas, you know, alone they couldn't,
link |
00:22:19.720
but as a team they could.
link |
00:22:21.400
And I think humans do that all the time.
link |
00:22:23.160
We're really good at that.
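The gripper-robot demonstration reduces to a simple condition: no single robot can span the gap, but a chain of gripped robots can once its combined length exceeds the gap width. The lengths below are made-up numbers for illustration, not measurements from Dorigo's robots.

```python
ROBOT_LENGTH = 0.3   # metres one robot can span on its own (assumed)
GAP = 1.0            # width of the chasm (assumed)

def can_cross(n_robots, robot_length=ROBOT_LENGTH, gap=GAP):
    """A chain of n gripped robots crosses if its total length
    exceeds the gap."""
    return n_robots * robot_length > gap

alone = can_cross(1)   # one robot falls in
team = can_cross(4)    # 4 * 0.3 m = 1.2 m spans the 1.0 m gap
```

Like the hyenas, the capability exists only at the team level: the threshold is crossed by aggregation, not by any improvement in the individual.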
link |
00:22:24.800
Yeah, and the way you described the system of hyenas,
link |
00:22:27.960
it almost sounds algorithmic.
link |
00:22:29.720
Like the problem with humans is they're so complex,
link |
00:22:32.800
it's hard to think of them as algorithms.
link |
00:22:35.000
But with hyenas, it's simple enough
link |
00:22:39.040
to where it feels like, at least hopeful
link |
00:22:42.620
that it's possible to create computational systems
link |
00:22:46.560
that mimic that.
link |
00:22:48.580
Yeah, that's exactly why we looked at that.
link |
00:22:51.960
As opposed to humans.
link |
00:22:54.080
Like I said, they are intelligent,
link |
00:22:55.240
but they are not quite as intelligent as say, baboons,
link |
00:22:59.520
which would learn a lot and would be much more flexible.
link |
00:23:02.120
The hyenas are relatively rigid in what they can do.
link |
00:23:05.640
And therefore you could look at this behavior,
link |
00:23:08.080
like this is a breakthrough in evolution about to happen.
link |
00:23:11.520
That they've discovered something about social structures,
link |
00:23:14.680
communication, about cooperation,
link |
00:23:17.520
and it might then spill over to other things too
link |
00:23:20.560
in thousands of years in the future.
link |
00:23:22.640
Yeah, I think the problem with baboons and humans
link |
00:23:24.920
is probably that too much is going on inside the head.
link |
00:23:27.840
We won't be able to measure it if we're observing the system.
link |
00:23:30.320
With hyenas, it's probably easier to observe
link |
00:23:34.240
the actual decision making and the various motivations
link |
00:23:37.640
that are involved.
link |
00:23:38.640
Yeah, they are visible.
link |
00:23:40.000
And we can even quantify possibly their emotional state
link |
00:23:45.080
because they leave droppings behind.
link |
00:23:48.160
And there are chemicals there that can be associated
link |
00:23:50.760
with neurotransmitters.
link |
00:23:52.920
And we can separate what emotions they might have
link |
00:23:55.680
experienced in the last 24 hours.
link |
00:23:58.360
Yeah.
link |
00:23:59.360
What to you is the most beautiful, speaking of hyenas,
link |
00:24:04.000
what to you is the most beautiful nature inspired algorithm
link |
00:24:08.000
in your work that you've come across?
link |
00:24:09.720
Something maybe early on in your work or maybe today?
link |
00:24:14.000
I think evolutionary computation is the most amazing method.
link |
00:24:19.120
So what fascinates me most about computers
link |
00:24:23.640
is that you can get more out than you put in.
link |
00:24:26.920
I mean, you can write a piece of code
link |
00:24:29.200
and your machine does what you told it.
link |
00:24:31.880
I mean, this happened to me in my freshman year.
link |
00:24:34.720
It did something very simple and I was just amazed.
link |
00:24:37.080
I was blown away that it would get the number
link |
00:24:39.640
and it would compute the result.
link |
00:24:41.520
And I didn't have to do it myself.
link |
00:24:43.400
Very simple.
link |
00:24:44.480
But if you push that a little further,
link |
00:24:46.880
you can have machines that learn and they might learn patterns.
link |
00:24:50.880
And already say deep learning neural networks,
link |
00:24:53.960
they can learn to recognize objects, sounds,
link |
00:24:58.000
patterns that humans have trouble with.
link |
00:25:00.400
And sometimes they do it better than humans.
link |
00:25:02.480
And that's so fascinating.
link |
00:25:04.200
And now if you take that one more step,
link |
00:25:06.080
you get something like evolutionary algorithms
link |
00:25:08.120
that discover things, they create things,
link |
00:25:10.440
they come up with solutions that you did not think of.
link |
00:25:13.400
And that just blows me away.
link |
00:25:15.120
It's so great that we can build systems, algorithms
link |
00:25:18.600
that can be in some sense smarter than we are,
link |
00:25:21.480
that they can discover solutions that we might miss.
link |
00:25:24.840
A lot of times it is because we have as humans,
link |
00:25:26.600
we have certain biases,
link |
00:25:27.840
we expect the solutions to be certain way
link |
00:25:30.000
and you don't put those biases into the algorithm
link |
00:25:32.200
so they are more free to explore.
link |
00:25:34.040
And evolution is just an absolutely fantastic explorer.
link |
00:25:37.720
And that's what really is fascinating.
link |
00:25:40.320
Yeah, I think I get made fun of a bit
link |
00:25:43.760
because I currently don't have any kids,
link |
00:25:45.840
but you mentioned programs.
link |
00:25:47.640
I mean, do you have kids?
link |
00:25:50.680
Yeah.
link |
00:25:51.520
So maybe you could speak to this,
link |
00:25:52.640
but there's a magic to the creative process.
link |
00:25:55.600
Like with Spot, the Boston Dynamics Spot,
link |
00:25:59.760
but really any robot that I've ever worked on,
link |
00:26:02.400
it just feels like the similar kind of joy
link |
00:26:04.480
I imagine I would have as a father.
link |
00:26:06.560
Not the same perhaps level,
link |
00:26:08.360
but like the same kind of wonderment.
link |
00:26:10.160
Like there's exactly this,
link |
00:26:11.880
which is like you know what you had to do initially
link |
00:26:17.760
to get this thing going.
link |
00:26:19.520
Let's speak on the computer science side,
link |
00:26:21.680
like what the program looks like,
link |
00:26:23.840
but something about it doing more
link |
00:26:27.880
than what the program was written on paper
link |
00:26:30.880
is like that somehow connects to the magic
link |
00:26:34.680
of this entire universe.
link |
00:26:36.120
Like that's like, I feel like I found God.
link |
00:26:39.200
Every time I like, it's like,
link |
00:26:42.080
because you've really created something that's living.
link |
00:26:45.640
Yeah.
link |
00:26:46.480
Even if it's a simple program.
link |
00:26:47.320
It has a life of its own, it has an intelligence of its own.
link |
00:26:48.720
It's beyond what you actually thought.
link |
00:26:51.040
Yeah.
link |
00:26:51.880
And that is, I think it's exactly spot on.
link |
00:26:53.400
That's exactly what it's about.
link |
00:26:55.480
You created something and it has an ability
link |
00:26:57.800
to live its life and do good things
link |
00:27:00.920
and you just gave it a starting point.
link |
00:27:03.240
So in that sense, I think it's,
link |
00:27:04.400
that may be part of the joy actually.
link |
00:27:06.440
But you mentioned creativity in this context,
link |
00:27:11.000
especially in the context of evolutionary computation.
link |
00:27:14.120
So, we don't often think of algorithms as creative.
link |
00:27:18.360
So how do you think about creativity?
link |
00:27:21.280
Yeah, algorithms absolutely can be creative.
link |
00:27:24.960
They can come up with solutions that you don't think about.
link |
00:27:28.320
I mean, creativity can be defined.
link |
00:27:29.760
A couple of requirements: it has to be new.
link |
00:27:32.680
It has to be useful and it has to be surprising.
link |
00:27:35.320
And those certainly are true with, say,
link |
00:27:38.000
evolutionary computation discovering solutions.
link |
00:27:41.560
So maybe an example, for instance,
link |
00:27:44.320
we did this collaboration with MIT Media Lab,
link |
00:27:47.480
Caleb Harper's lab, where they had
link |
00:27:50.760
a hydroponic food computer, they called it,
link |
00:27:54.560
environment that was completely computer controlled,
link |
00:27:56.920
nutrients, water, light, temperature,
link |
00:27:59.520
everything is controlled.
link |
00:28:00.880
Now, what do you do if you can control everything?
link |
00:28:05.560
Farmers know a lot about how to make plants grow
link |
00:28:08.880
in their own patch of land.
link |
00:28:10.280
But if you can control everything, it's too much.
link |
00:28:13.120
And it turns out that we don't actually
link |
00:28:14.600
know very much about it.
link |
00:28:16.040
So we built a system, evolutionary optimization system,
link |
00:28:20.320
together with a surrogate model of how plants grow
link |
00:28:23.680
and let this system explore recipes on its own.
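The overall shape of such a system, an evolutionary loop whose fitness evaluations come from a learned surrogate of plant growth rather than from slow real-world trials, might be sketched like this. The surrogate function, recipe fields, and all parameters here are illustrative stand-ins, not the actual model or encoding from that project:

```python
import random

# Hypothetical surrogate: predicts growth from (light hours, intensity).
# A real surrogate would be a model trained on measured growth data.
def surrogate_growth(recipe):
    hours, intensity = recipe
    return hours * 0.5 + intensity * 0.2

def evolve_recipes(pop_size=20, generations=20):
    # Random initial recipes within the allowed bounds.
    pop = [(random.uniform(0, 24), random.uniform(0, 100))
           for _ in range(pop_size)]
    for _ in range(generations):
        # Cheap selection: rank by surrogate prediction, keep the top half.
        pop.sort(key=surrogate_growth, reverse=True)
        parents = pop[:pop_size // 2]
        # Variation: mutated copies of the parents, clamped to the bounds.
        children = [(min(24, max(0, h + random.gauss(0, 1))),
                     min(100, max(0, i + random.gauss(0, 5))))
                    for h, i in parents]
        pop = parents + children
    return max(pop, key=surrogate_growth)

best = evolve_recipes()
print(best)  # hours tend to get pushed toward the photoperiod boundary
```

In a full surrogate-assisted setup, only the most promising recipes would then be grown for real, and the measured outcomes fed back to retrain the surrogate.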
link |
00:28:28.680
And initially, we were focusing on light,
link |
00:28:32.040
how strong, what wavelengths, how long the light was on.
link |
00:28:36.800
And we put some boundaries which we thought were reasonable.
link |
00:28:40.120
For instance, that there was at least six hours of darkness,
link |
00:28:44.320
like night, because that's what we have in the world.
link |
00:28:47.120
And very quickly, the system, evolution,
link |
00:28:51.000
pushed all the recipes to that limit.
link |
00:28:54.120
We were trying to grow basil.
link |
00:28:55.880
And we initially had some 200, 300 recipes,
link |
00:29:00.000
from exploration as well as known recipes.
link |
00:29:02.160
But now we are going beyond that.
link |
00:29:04.040
And everything was pushed to that limit.
link |
00:29:06.440
So we look at it and say, well, we can easily just change it.
link |
00:29:09.280
Let's have it your way.
link |
00:29:10.720
And it turns out the system discovered
link |
00:29:13.440
that basil does not need to sleep.
link |
00:29:16.720
24 hours, lights on, and it will thrive.
link |
00:29:19.440
It will be bigger, it will be tastier.
link |
00:29:21.320
And this was a big surprise, not just to us,
link |
00:29:24.480
but also the biologists in the team
link |
00:29:26.840
that anticipated that there are some constraints
link |
00:29:30.520
that are in the world for a reason.
link |
00:29:32.800
It turns out that evolution did not have the same bias.
link |
00:29:36.000
And therefore, it discovered something that was creative.
link |
00:29:38.760
It was surprising, it was useful, and it was new.
link |
00:29:41.320
That's fascinating to think about the things we think
link |
00:29:44.360
that are fundamental to living systems on Earth today,
link |
00:29:48.200
whether they're actually fundamental
link |
00:29:49.720
or they somehow fit the constraints of the system.
link |
00:29:53.680
And all we have to do is just remove the constraints.
link |
00:29:56.480
Do you ever think about,
link |
00:29:59.320
I don't know how much you know
link |
00:30:00.320
about brain computer interfaces in your link.
link |
00:30:03.280
The idea there is our brains are very limited.
link |
00:30:08.480
And if we just allow, we plug in,
link |
00:30:11.840
we provide a mechanism for a computer
link |
00:30:13.720
to speak with the brain.
link |
00:30:15.080
So you're thereby expanding
link |
00:30:16.880
the computational power of the brain.
link |
00:30:19.240
The possibilities there,
link |
00:30:21.200
from a very high level philosophical perspective,
link |
00:30:25.560
is limitless.
link |
00:30:27.000
But I wonder how limitless it is.
link |
00:30:30.680
Are the constraints we have features
link |
00:30:33.440
that are fundamental to our intelligence?
link |
00:30:36.040
Or is this just this weird constraint
link |
00:30:38.440
in terms of our brain size and skull
link |
00:30:40.640
and lifespan and senses?
link |
00:30:44.480
It's just the weird little quirk of evolution.
link |
00:30:47.840
And if we just open that up,
link |
00:30:49.400
like add much more senses,
link |
00:30:51.480
add much more computational power,
link |
00:30:53.680
the intelligence will expand exponentially.
link |
00:30:57.840
Do you have a sense about constraints,
link |
00:31:03.320
the relationship of evolution and computation
link |
00:31:05.360
to the constraints of the environment?
link |
00:31:09.800
Well, at first I'd like to comment on that,
link |
00:31:12.400
like changing the inputs to human brain.
link |
00:31:16.000
And flexibility of the brain.
link |
00:31:18.320
I think there's a lot of that.
link |
00:31:20.720
There are experiments that are done in animals
link |
00:31:22.360
like Mriganka Sur's at MIT,
link |
00:31:25.000
switching the auditory and visual information
link |
00:31:29.200
so that it goes to the wrong part of the cortex.
link |
00:31:31.480
And the animal was still able to hear
link |
00:31:34.120
and perceive the visual environment.
link |
00:31:36.480
And there are kids that are born with severe disorders
link |
00:31:41.120
and sometimes they have to remove half of the brain,
link |
00:31:43.960
like one half, and they still grow up.
link |
00:31:46.120
They have the functions migrate to the other parts.
link |
00:31:48.320
There's a lot of flexibility like that.
link |
00:31:50.360
So I think it's quite possible to hook up the brain
link |
00:31:55.000
with different kinds of sensors, for instance,
link |
00:31:57.600
and something that we don't even quite understand
link |
00:32:00.280
or have today on different kinds of wavelengths
link |
00:32:02.520
or whatever they are.
link |
00:32:04.640
And then the brain can learn to make sense of it.
link |
00:32:07.000
And that I think is this good hope
link |
00:32:09.960
that these prosthetic devices, for instance, work,
link |
00:32:12.720
not because we make them so good and so easy to use,
link |
00:32:15.720
but the brain adapts to them
link |
00:32:17.080
and can learn to take advantage of them.
link |
00:32:20.400
And so in that sense, if there's a trouble, a problem,
link |
00:32:23.440
I think the brain can be used to correct it.
link |
00:32:26.200
Now going beyond what we have today, can you get smarter?
link |
00:32:29.200
That's really much harder to do.
link |
00:32:31.560
Giving the brain more input might well overwhelm it.
link |
00:32:35.520
It would have to learn to filter it and focus
link |
00:32:39.720
in order to use the information effectively
link |
00:32:43.320
and augmenting intelligence
link |
00:32:46.600
with some kind of external devices like that
link |
00:32:49.080
might be difficult, I think.
link |
00:32:51.560
But replacing what's lost, I think is quite possible.
link |
00:32:55.680
Right, so our intuition allows us to sort of imagine
link |
00:32:59.360
that we can replace what's been lost,
link |
00:33:01.400
but expansion beyond what we have,
link |
00:33:03.480
I mean, we're already one of the most,
link |
00:33:05.360
if not the most intelligent things on this earth, right?
link |
00:33:07.800
So it's hard to imagine.
link |
00:33:09.600
But if the brain can hold up with an order of magnitude
link |
00:33:14.840
greater set of information thrown at it,
link |
00:33:18.080
if it can reason through that.
link |
00:33:20.720
Part of me, this is the Russian thing, I think,
link |
00:33:22.560
is I tend to think that the limitations
link |
00:33:25.400
is where the superpower is,
link |
00:33:27.680
that immortality and a huge increase in bandwidth
link |
00:33:32.680
of information by connecting computers with the brain
link |
00:33:37.120
is not going to produce greater intelligence.
link |
00:33:39.680
It might produce lesser intelligence.
link |
00:33:41.320
So I don't know, there's something about the scarcity
link |
00:33:45.080
being essential to fitness or performance,
link |
00:33:52.200
but that could be just because we're so limited.
link |
00:33:56.040
No, exactly, you make do with what you have,
link |
00:33:57.760
but you don't have to be a genius
link |
00:34:00.720
and you don't have to pipe it directly to the brain.
link |
00:34:04.360
I mean, we already have devices like phones
link |
00:34:07.640
where we can look up information at any point.
link |
00:34:10.240
And that can make us more productive.
link |
00:34:12.400
You don't have to argue about, I don't know,
link |
00:34:14.120
what happened in that baseball game or whatever it is,
link |
00:34:16.480
because you can look it up right away.
link |
00:34:17.800
And I think in that sense, we can learn to utilize tools.
link |
00:34:22.160
And that's what we have been doing for a long, long time.
link |
00:34:27.000
And we are already, the brain is already drinking
link |
00:34:29.120
from the firehose, like vision.
link |
00:34:32.360
There's way more information in vision
link |
00:34:34.480
than we actually process.
link |
00:34:35.640
So the brain is already good at identifying what matters.
link |
00:34:39.840
And we can switch that from vision
link |
00:34:42.840
to some other wavelength or some other kind of modality.
link |
00:34:44.960
But I think that the same processing principles
link |
00:34:47.040
probably still apply.
link |
00:34:49.000
But also indeed this ability to have information
link |
00:34:53.680
more accessible and more relevant,
link |
00:34:55.320
I think can enhance what we do.
link |
00:34:57.680
I mean, kids today at school, they learn about DNA.
link |
00:35:00.880
I mean, things that were discovered
link |
00:35:02.560
just a couple of years ago.
link |
00:35:04.560
And it's already common knowledge
link |
00:35:06.400
and we are building on it.
link |
00:35:07.520
And we don't see a problem where
link |
00:35:12.400
there's too much information that we can absorb and learn.
link |
00:35:15.080
Maybe people become a little bit more narrow
link |
00:35:17.480
in what they know, they are in one field.
link |
00:35:20.840
But this information that we have accumulated,
link |
00:35:23.680
it is passed on and people are picking up on it
link |
00:35:26.080
and they are building on it.
link |
00:35:27.480
So it's not like we have reached the point of saturation.
link |
00:35:30.960
We have still this process that allows us to be selective
link |
00:35:34.440
and decide what's interesting, I think still works
link |
00:35:37.520
even with the more information we have today.
link |
00:35:40.040
Yeah, it's fascinating to think about
link |
00:35:43.080
like Wikipedia becoming a sensor.
link |
00:35:45.240
Like, so the fire hose of information from Wikipedia.
link |
00:35:49.000
So it's like you integrated directly into the brain
link |
00:35:51.720
to where you're thinking, like you're observing the world
link |
00:35:54.160
with all of Wikipedia directly piping into your brain.
link |
00:35:57.760
So like when I see a light,
link |
00:35:59.840
I immediately have like the history of who invented
link |
00:36:03.560
electricity, like integrated very quickly into.
link |
00:36:07.480
So just the way you think about the world
link |
00:36:09.800
might be very interesting
link |
00:36:11.160
if you can integrate that kind of information.
link |
00:36:13.200
What are your thoughts, if I could ask on early steps
link |
00:36:18.960
on the Neuralink side?
link |
00:36:20.280
I don't know if you got a chance to see,
link |
00:36:21.440
but there was a monkey playing pong
link |
00:36:25.880
through the brain computer interface.
link |
00:36:27.760
And the dream there is sort of,
link |
00:36:30.600
you're already replacing the thumbs essentially
link |
00:36:33.680
that you would use to play video game.
link |
00:36:35.840
The dream is to be able to increase further
link |
00:36:40.760
the interface by which you interact with the computer.
link |
00:36:43.400
Are you impressed by this?
link |
00:36:44.600
Are you worried about this?
link |
00:36:46.400
What are your thoughts as a human?
link |
00:36:47.920
I think it's wonderful.
link |
00:36:48.840
I think it's great that we could do something
link |
00:36:51.280
like that.
link |
00:36:52.120
I mean, there are devices that read your EEG for instance,
link |
00:36:56.160
and humans can learn to control things
link |
00:37:00.120
using just their thoughts in that sense.
link |
00:37:02.760
And I don't think it's that different.
link |
00:37:04.920
I mean, those signals would go to limbs,
link |
00:37:06.720
they would go to thumbs.
link |
00:37:08.320
Now the same signals go through a sensor
link |
00:37:11.200
to some computing system.
link |
00:37:13.760
It still probably has to be built on human terms,
link |
00:37:17.520
not to overwhelm them, but utilize what's there
link |
00:37:20.000
and sense the right kind of patterns
link |
00:37:23.720
that are easy to generate.
link |
00:37:24.840
But, oh, that I think is really quite possible
link |
00:37:27.760
and wonderful and could be very much more efficient.
link |
00:37:32.160
Is there, so you mentioned surprising
link |
00:37:34.160
being a characteristic of creativity.
link |
00:37:37.080
Is there something, you already mentioned a few examples,
link |
00:37:39.800
but is there something that jumps out at you
link |
00:37:41.920
as particularly surprising
link |
00:37:44.560
from the various evolutionary computation systems
link |
00:37:48.680
you've worked on, the solutions that were
link |
00:37:52.840
come up along the way?
link |
00:37:53.920
Not necessarily the final solutions,
link |
00:37:55.280
but maybe things that were even discarded.
link |
00:37:58.680
Is there something that just jumps to mind?
link |
00:38:00.360
It happens all the time.
link |
00:38:02.200
I mean, evolution is so creative,
link |
00:38:05.640
so good at discovering solutions you don't anticipate.
link |
00:38:09.280
A lot of times they are taking advantage of something
link |
00:38:12.680
that you didn't think was there,
link |
00:38:13.800
like a bug in the software, for instance.
link |
00:38:15.960
A lot of, there's a great paper,
link |
00:38:17.600
that the community put together
link |
00:38:19.120
about surprising anecdotes about evolutionary computation.
link |
00:38:22.920
A lot of them are indeed, in some software environment,
link |
00:38:25.640
there was a loophole or a bug
link |
00:38:28.120
and the system utilizes that.
link |
00:38:30.560
By the way, for people who want to read it,
link |
00:38:31.960
it's kind of fun to read.
link |
00:38:33.080
It's called The Surprising Creativity of Digital Evolution,
link |
00:38:36.080
a collection of anecdotes from the evolutionary computation
link |
00:38:39.320
and artificial life research communities.
link |
00:38:41.560
And there's just a bunch of stories
link |
00:38:43.160
from all the seminal figures in this community.
link |
00:38:45.840
You have a story in there that relates to you,
link |
00:38:48.520
at least on the Tic Tac Toe memory bomb.
link |
00:38:51.000
So can you, I guess, describe that situation
link |
00:38:54.760
if you think that's still?
link |
00:38:55.720
Yeah, that's quite a bit smaller scale
link |
00:38:59.640
than our basil doesn't need to sleep surprise,
link |
00:39:03.040
but it was actually done by students in my class,
link |
00:39:06.640
in a neural nets evolution computation class.
link |
00:39:09.440
There was an assignment.
link |
00:39:11.840
It was perhaps a final project
link |
00:39:13.880
where people built game playing AI, it was an AI class.
link |
00:39:19.400
And this one was for Tic Tac Toe
link |
00:39:21.920
or five in a row on a large board.
link |
00:39:24.560
And this one team evolved a neural network
link |
00:39:28.160
to make these moves.
link |
00:39:29.920
And they set it up, the evolution.
link |
00:39:32.720
They didn't really know what would come out,
link |
00:39:35.240
but it turned out that they did really well.
link |
00:39:37.000
Evolution actually won the tournament.
link |
00:39:38.840
And most of the time when it won,
link |
00:39:40.520
it won because the other teams crashed.
link |
00:39:43.480
And then when we look at it, like what was going on
link |
00:39:45.760
was that evolution discovered that if it makes a move
link |
00:39:48.240
that's really, really far away,
link |
00:39:49.960
like millions of squares away,
link |
00:39:53.440
the other teams, the other programs had to expand memory
link |
00:39:57.800
in order to take that into account
link |
00:39:59.160
until they ran out of memory and crashed.
link |
00:40:01.200
And then you win a tournament
link |
00:40:03.200
by crashing all your opponents.
link |
00:40:05.720
I think that's quite a profound example,
link |
00:40:08.920
which probably applies to most games,
link |
00:40:14.560
from even a game theoretic perspective,
link |
00:40:16.920
that sometimes to win, you don't have to be better
link |
00:40:20.480
within the rules of the game.
link |
00:40:22.680
You have to come up with ways to break your opponent's brain,
link |
00:40:28.480
if it's a human, like not through violence,
link |
00:40:31.360
but through some hack where the brain just is not,
link |
00:40:34.640
you're basically, how would you put it?
link |
00:40:39.280
You're going outside the constraints
link |
00:40:43.120
of where the brain is able to function.
link |
00:40:45.160
Expectations of your opponent.
link |
00:40:46.560
I mean, even Kasparov pointed that out
link |
00:40:49.600
that when Deep Blue was playing against Kasparov,
link |
00:40:51.800
that it was not playing the same way as Kasparov expected.
link |
00:40:55.440
And this has to do with not having the same biases.
link |
00:40:59.760
And that's really one of the strengths of the AI approach.
link |
00:41:06.280
Can you at a high level say,
link |
00:41:08.080
what are the basic mechanisms
link |
00:41:10.360
of evolutionary computation algorithms
link |
00:41:12.760
that use something that could be called
link |
00:41:15.760
an evolutionary approach?
link |
00:41:17.680
Like how does it work?
link |
00:41:19.600
What are the connections to the,
link |
00:41:21.680
what are the echoes of the connection to its biological counterpart?
link |
00:41:24.800
A lot of these algorithms really do take motivation
link |
00:41:27.080
from biology, but they are caricatures.
link |
00:41:29.560
You try to essentialize it
link |
00:41:31.280
and take the elements that you believe matter.
link |
00:41:33.600
So in evolutionary computation,
link |
00:41:35.880
it is the creation of variation
link |
00:41:38.040
and then the selection upon that.
link |
00:41:40.680
So the creation of variation,
link |
00:41:41.840
you have to have some mechanism
link |
00:41:43.080
that allows you to create new individuals
link |
00:41:44.720
that are very different from what you already have.
link |
00:41:47.080
That's the creativity part.
link |
00:41:48.800
And then you have to have some way of measuring
link |
00:41:50.720
how well they are doing and using that measure to select
link |
00:41:55.520
who goes to the next generation and you continue.
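The mechanism described here, create variation, measure how well individuals do, select who goes to the next generation, and repeat, can be sketched as a minimal loop. Everything in this sketch (the bit-string genotype, the count-the-ones fitness, the parameters) is an illustrative assumption, not the setup of any particular system discussed here:

```python
import random

# Toy fitness: count the 1s in a bit string, a stand-in for whatever
# task the individual would actually be evaluated on.
def fitness(individual):
    return sum(individual)

def evolve(pop_size=50, genome_len=20, generations=30):
    # Start from a population of random bit-string genotypes.
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the better half survives to the next generation.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # Variation: refill the population with mutated copies.
        children = []
        for parent in survivors:
            child = parent[:]
            i = random.randrange(genome_len)
            child[i] = 1 - child[i]  # flip one random bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # approaches genome_len as generations pass
```

Real systems differ mainly in the representation and in how selection pressure is applied, but this variation-plus-selection core stays the same.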
link |
00:41:58.160
So first, you also have to have
link |
00:42:00.240
some kind of digital representation of an individual
link |
00:42:03.160
that can be then modified.
link |
00:42:04.520
So I guess humans in biological systems
link |
00:42:07.360
have DNA and all those kinds of things.
link |
00:42:09.720
And so you have to have similar kind of encodings
link |
00:42:12.160
in a computer program.
link |
00:42:13.400
Yes, and that is a big question.
link |
00:42:15.040
How do you encode these individuals?
link |
00:42:16.960
So there's a genotype, which is that encoding
link |
00:42:19.560
and then a decoding mechanism gives you the phenotype,
link |
00:42:23.040
which is the actual individual that then performs the task
link |
00:42:26.400
and in an environment can be evaluated how good it is.
link |
00:42:31.280
So even that mapping is a big question
link |
00:42:33.160
and how do you do it?
link |
00:42:34.960
But typically the representations are,
link |
00:42:37.080
either they are strings of numbers
link |
00:42:38.600
or they are some kind of trees.
link |
00:42:39.760
Those are something that we know very well
link |
00:42:41.760
in computer science and we try to do that.
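As a concrete illustration of the genotype-to-phenotype mapping just mentioned: a genotype can be a flat string of numbers, and a decoder turns it into the structured thing that actually gets evaluated. The "light recipe" fields below are hypothetical, loosely echoing the basil experiment, not the encoding actually used in that work:

```python
# Genotype: a flat list of numbers in [0, 1].
# Phenotype: a structured grow-light recipe that an environment
# (real or surrogate) could evaluate. Field names are illustrative.

def decode(genotype):
    g = [min(max(x, 0.0), 1.0) for x in genotype]  # clamp to [0, 1]
    return {
        "intensity": g[0] * 100.0,            # percent of max output
        "red_fraction": g[1],                 # share of red light
        "blue_fraction": 1.0 - g[1],          # remainder is blue
        "hours_on": g[2] * 24.0,              # photoperiod in hours
        "temperature_c": 15.0 + g[3] * 15.0,  # 15 to 30 degrees C
    }

recipe = decode([0.8, 0.6, 1.0, 0.5])
print(recipe["hours_on"])  # 24.0: lights never off
```

Variation operators work on the flat genotype, while selection only ever sees how the decoded phenotype performs.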
link |
00:42:43.560
But they, and DNA in some sense is also a sequence
link |
00:42:48.040
and it's a string.
link |
00:42:50.600
So it's not that far from it,
link |
00:42:52.040
but DNA also has many other aspects
link |
00:42:54.880
that we don't take into account necessarily
link |
00:42:56.720
like there's folding and interactions
link |
00:43:00.040
that are other than just the sequence itself.
link |
00:43:03.600
And lots of that is not yet captured
link |
00:43:06.000
and we don't know whether they are really crucial.
link |
00:43:10.120
Evolution, biological evolution has produced
link |
00:43:12.600
wonderful things, but if you look at them,
link |
00:43:16.000
it's not necessarily the case that every piece
link |
00:43:18.560
is irreplaceable and essential.
link |
00:43:20.880
There's a lot of baggage because you have to construct it
link |
00:43:23.680
and it has to go through various stages
link |
00:43:25.360
and we still have appendix and we have tail bones
link |
00:43:29.360
and things like that that are not really that useful.
link |
00:43:31.360
If you try to explain them now,
link |
00:43:33.400
it would make no sense, very hard.
link |
00:43:35.200
But if you think of us as productive evolution,
link |
00:43:38.200
you can see where they came from.
link |
00:43:39.240
They were useful at one point perhaps
link |
00:43:41.280
and no longer are, but they're still there.
link |
00:43:43.400
So that process is complex
link |
00:43:47.080
and your representation should support it.
link |
00:43:50.800
And that is quite difficult if we are limited
link |
00:43:56.320
to strings or trees,
link |
00:43:59.000
and then we are pretty much limited
link |
00:44:01.840
in what can be constructed.
link |
00:44:03.760
And one thing that we are still missing
link |
00:44:05.640
in evolutionary computation in particular
link |
00:44:07.560
is what we saw in biology, major transitions.
link |
00:44:11.440
So that you go from, for instance,
link |
00:44:13.840
single cell to multi cell organisms
link |
00:44:16.080
and eventually societies.
link |
00:44:17.200
There are transitions of level of selection
link |
00:44:19.640
and level of what a unit is.
link |
00:44:22.120
And that's something we haven't captured
link |
00:44:24.240
in evolutionary computation yet.
link |
00:44:26.080
Does that require a dramatic expansion
link |
00:44:28.680
of the representation?
link |
00:44:30.040
Is that what that is?
link |
00:44:31.680
Most likely it does, but it's quite,
link |
00:44:34.480
we don't even understand it in biology very well
link |
00:44:36.920
where it's coming from.
link |
00:44:37.760
So it would be really good to look at major transitions
link |
00:44:40.560
in biology, try to characterize them
link |
00:44:42.600
a little bit more in detail, what the processes are.
link |
00:44:45.400
How does a, so like a unit, a cell is no longer
link |
00:44:49.800
evaluated alone.
link |
00:44:50.760
It's evaluated as part of a community,
link |
00:44:52.800
a multi cell organism.
link |
00:44:54.760
Even though it could once reproduce alone, now it can't.
link |
00:44:57.320
It has to have that environment.
link |
00:44:59.360
So there's a push to another level, at least of selection.
link |
00:45:03.400
And how do you make that jump to the next level?
link |
00:45:04.760
Yes, how do you make the jump?
link |
00:45:06.080
As part of the algorithm.
link |
00:45:07.280
Yeah, yeah.
link |
00:45:08.200
So we haven't really seen that in computation yet.
link |
00:45:12.080
And there are certainly attempts to have open ended evolution.
link |
00:45:15.800
Things that could add more complexity
link |
00:45:18.400
and start selecting at a higher level.
link |
00:45:20.840
But it is still not quite the same
link |
00:45:24.680
as going from single to multi to society,
link |
00:45:27.080
for instance, in biology.
link |
00:45:29.000
So there essentially would be,
link |
00:45:31.720
as opposed to having one agent,
link |
00:45:33.400
those agents all of a sudden spontaneously decide
link |
00:45:36.240
to then be together.
link |
00:45:38.360
And then your entire system would then be treating them
link |
00:45:42.360
as one agent.
link |
00:45:43.560
Something like that.
link |
00:45:44.680
Some kind of weird merger building.
link |
00:45:46.320
But also, so you mentioned,
link |
00:45:47.960
I think you mentioned selection.
link |
00:45:49.160
So basically there's an agent and they don't get to live on
link |
00:45:53.240
if they don't do well.
link |
00:45:54.200
So there's some kind of measure of what doing well is
link |
00:45:56.320
and isn't.
link |
00:45:57.280
And does mutation come into play at all in the process
link |
00:46:02.880
and what in the world does it serve?
link |
00:46:04.160
Yeah, so, and again, back to what the computational
link |
00:46:07.080
mechanisms of evolutionary computation are.
link |
00:46:08.640
So the way to create variation,
link |
00:46:12.720
you can take multiple individuals, two usually,
link |
00:46:15.120
but you could do more.
link |
00:46:17.200
And you exchange the parts of the representation.
link |
00:46:20.840
You do some kind of recombination.
link |
00:46:22.680
Could be crossover, for instance.
link |
00:46:25.800
In biology, you do have DNA strings that are cut
link |
00:46:30.040
and put together again.
link |
00:46:32.080
We could do something like that.
link |
00:46:34.280
And it seems to be that in biology, the crossover
link |
00:46:37.400
is really the workhorse in biological evolution.
link |
00:46:42.080
In computation, we tend to rely more on mutation.
link |
00:46:47.000
And that is making random changes
link |
00:46:50.080
into parts of the chromosome.
link |
00:46:51.280
You can try to be intelligent and target certain areas
link |
00:46:55.000
of it and make the mutations also follow some principle.
link |
00:47:00.000
Like you collect statistics of performance and correlations
link |
00:47:03.480
and try to make mutations you believe
link |
00:47:05.080
are going to be helpful.
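The two variation operators described, crossover that cuts and rejoins two parents and mutation that makes random changes to parts of the chromosome, can each be written in a couple of lines. This is a generic sketch over lists of numbers, not the operators of any specific system:

```python
import random

# One-point crossover: cut both parents at the same point and swap
# the tails, a caricature of DNA strands being cut and rejoined.
def crossover(parent_a, parent_b):
    point = random.randrange(1, len(parent_a))
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

# Gaussian mutation: with some probability per gene, nudge it by a
# small random amount.
def mutate(genotype, rate=0.1, sigma=0.1):
    return [g + random.gauss(0.0, sigma) if random.random() < rate else g
            for g in genotype]

a, b = [0.0] * 6, [1.0] * 6
c, d = crossover(a, b)
print(c, d)  # complementary mixes of the two parents
```

The statistics-guided methods mentioned above go further by adapting quantities like the mutation rate and sigma from the performance of past mutations, rather than keeping them fixed.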
link |
00:47:06.800
That's where evolutionary computation has moved
link |
00:47:09.360
in the last 20 years.
link |
00:47:11.080
I mean, evolutionary computation has been around for 50 years,
link |
00:47:12.920
but a lot of the recent...
link |
00:47:15.160
Success comes from mutation.
link |
00:47:16.560
Yes, comes from using statistics.
link |
00:47:19.240
It's like the rest of machine learning based on statistics.
link |
00:47:22.040
We use similar tools to guide evolutionary computation.
link |
00:47:25.000
And in that sense, it has diverged a bit
link |
00:47:27.680
from biological evolution.
link |
00:47:30.040
And that's one of the things I think we could look at again,
link |
00:47:33.640
having a weaker selection, more crossover,
link |
00:47:37.840
large populations, more time,
link |
00:47:40.160
and maybe a different kind of creativity
link |
00:47:42.200
would come out of it.
link |
00:47:43.320
We are very impatient in evolutionary computation today.
link |
00:47:46.360
We want answers right now, right, quickly.
link |
00:47:48.920
And if somebody doesn't perform, kill it.
link |
00:47:51.600
And biological evolution doesn't work quite that way.
link |
00:47:55.840
And it's more patient.
link |
00:47:57.800
Yes, much more patient.
link |
00:48:00.000
So I guess we need to add some kind of mating,
link |
00:48:03.640
some kind of like dating mechanisms,
link |
00:48:05.920
like marriage maybe in there.
link |
00:48:07.360
So into our algorithms, to improve the recombination,
link |
00:46:13.200
as opposed to mutation doing all of the work.
link |
00:48:15.960
Yeah, and many ways of being successful.
link |
00:48:18.880
Usually in evolutionary computation, we have one goal,
link |
00:48:21.560
play this game really well compared to others.
link |
00:48:25.880
But in biology, there are many ways of being successful.
link |
00:48:28.640
You can build niches.
link |
00:48:29.720
You can be stronger, faster, larger, or smarter,
link |
00:48:34.040
or eat this or eat that.
link |
00:48:36.760
So there are many ways to solve the same problem of survival.
link |
00:48:40.560
And that then breeds creativity.
link |
00:48:43.800
And it allows more exploration.
link |
00:48:46.720
And eventually you get solutions
link |
00:48:48.680
that are perhaps more creative
link |
00:48:51.120
rather than trying to go from initial population directly
link |
00:48:54.120
or more or less directly to your maximum fitness,
link |
00:48:57.400
which you measure as just one metric.
link |
00:49:00.840
So in a broad sense, before we talk about neuroevolution,
link |
00:49:07.920
do you see evolutionary computation
link |
00:49:11.200
as more effective than deep learning in a certain context?
link |
00:49:14.160
Machine learning, broadly speaking.
link |
00:49:16.640
Maybe even supervised machine learning.
link |
00:49:18.680
I don't know if you want to draw any kind of lines
link |
00:49:21.040
and distinctions and borders
link |
00:49:23.080
where they rub up against each other kind of thing,
link |
00:49:25.400
where one is more effective than the other
link |
00:49:27.000
in the current state of things.
link |
00:49:28.440
Yes, of course, they are very different
link |
00:49:30.240
and they address different kinds of problems.
link |
00:49:32.280
And the deep learning has been really successful
link |
00:49:36.720
in domains where we have a lot of data.
link |
00:49:39.800
And that means not just data about situations,
link |
00:49:42.440
but also what the right answers were.
link |
00:49:45.120
So labeled examples, or they might be predictions,
link |
00:49:47.840
maybe weather prediction where the data itself becomes labels.
link |
00:49:51.720
What happened, what the weather was today
link |
00:49:53.160
and what it will be tomorrow.
link |
00:49:57.000
So deep learning methods are very effective
link |
00:49:59.240
on those kinds of tasks.
link |
00:50:01.400
But there are other kinds of tasks
link |
00:50:03.400
where we don't really know what the right answer is.
link |
00:50:06.360
Game playing, for instance,
link |
00:50:07.520
but many robotics tasks and actions in the world,
link |
00:50:12.840
decision making and actual practical applications,
link |
00:50:17.720
like treatments and healthcare
link |
00:50:19.480
or investment in stock market.
link |
00:50:21.400
Many tasks are like that.
link |
00:50:22.720
We don't know and we'll never know
link |
00:50:24.880
what the optimal answers were.
link |
00:50:26.680
And there you need different kinds of approach.
link |
00:50:28.640
Reinforcement learning is one of those.
link |
00:50:30.880
Reinforcement learning comes from biology as well.
link |
00:50:33.800
Agents learn during their lifetime.
link |
00:50:35.440
They eat berries and sometimes they get sick
link |
00:50:37.600
and sometimes they don't and get stronger.
link |
00:50:40.320
And then that's how you learn.
link |
00:50:42.320
And evolution is also a mechanism like that
link |
00:50:46.080
at a different timescale because you have a population,
link |
00:50:48.920
not an individual during its lifetime,
link |
00:50:50.840
but an entire population as a whole
link |
00:50:52.560
can discover what works.
link |
00:50:55.200
And there you can afford individuals that don't work out.
link |
00:50:58.960
They will, you know, everybody dies
link |
00:51:00.600
and you have a next generation
link |
00:51:02.080
and they will be better than the previous one.
link |
00:51:04.120
So that's the big difference between these methods.
link |
00:51:07.640
They apply to different kinds of problems.
link |
00:51:10.920
And in particular, there's often a comparison
link |
00:51:15.120
that's kind of interesting and important
link |
00:51:16.640
between reinforcement learning and evolutionary computation.
link |
00:51:20.120
And initially, reinforcement learning
link |
00:51:23.400
was about individual learning during their lifetime.
link |
00:51:25.960
And evolution is more engineering.
link |
00:51:28.160
You don't care about the lifetime.
link |
00:51:29.720
You don't care about all the individuals that are tested.
link |
00:51:32.600
You only care about the final result.
link |
00:51:34.520
The last one, the best candidate that evolution produced.
link |
00:51:39.280
In that sense, they also apply to different kinds of problems.
link |
00:51:42.520
And now that boundary is starting to blur a bit.
link |
00:51:46.160
You can use evolution as an online method
link |
00:51:48.680
and reinforcement learning to create engineering solutions,
link |
00:51:51.520
but that's still roughly the distinction.
link |
00:51:55.320
And from the point of view of what algorithm you wanna use,
link |
00:52:00.320
if you have something where there is a cost for every trial,
link |
00:52:03.360
reinforcement learning might be your choice.
link |
00:52:06.120
Now, if you have a domain
link |
00:52:07.800
where you can use a surrogate perhaps,
link |
00:52:10.280
so you don't have much of a cost for trial,
link |
00:52:13.600
and you want to have surprises,
link |
00:52:16.520
you want to explore more broadly,
link |
00:52:18.680
then this population based method is perhaps a better choice
link |
00:52:23.400
because you can try things out that you wouldn't afford
link |
00:52:27.000
when you're doing reinforcement learning.
link |
00:52:28.600
There's very few things as entertaining
link |
00:52:31.720
as watching either evolutionary computation
link |
00:52:33.840
or reinforcement learning teaching a simulated robot to walk.
link |
00:52:37.360
Maybe there's a higher level question
link |
00:52:42.360
that could be asked here,
link |
00:52:43.600
but do you find this whole space of applications
link |
00:52:47.520
in robotics interesting for evolutionary computation?
link |
00:52:51.720
Yeah, yeah, very much.
link |
00:52:53.480
And indeed, there are fascinating videos of that.
link |
00:52:56.440
And that's actually one of the examples
link |
00:52:58.320
where you can contrast the difference.
link |
00:53:00.520
Between reinforcement learning and evolution.
link |
00:53:03.160
Yes, so if you have a reinforcement learning agent,
link |
00:53:06.280
it tries to be conservative
link |
00:53:07.960
because it wants to walk as long as possible and be stable.
link |
00:53:11.800
But if you have evolutionary computation,
link |
00:53:13.680
it can afford these agents that go haywire.
link |
00:53:17.240
They fall flat on their face and they could take a step
link |
00:53:20.920
and then they jump and then again fall flat.
link |
00:53:23.160
And eventually what comes out of that
link |
00:53:25.200
is something like a falling that's controlled.
link |
00:53:29.120
You take another step and another step
link |
00:53:30.400
and you no longer fall.
link |
00:53:32.280
Instead you run, you go fast.
link |
00:53:34.160
So that's a way of discovering something
link |
00:53:36.520
that's hard to discover step by step incrementally.
link |
00:53:39.440
Because you can afford these evolutionary dead ends,
link |
00:53:43.640
although they are not entirely dead ends
link |
00:53:45.480
in the sense that they can serve as stepping stones.
link |
00:53:47.720
When you take two of those, put them together,
link |
00:53:49.840
you get something that works even better.
link |
00:53:52.400
And that is a great example of this kind of discovery.
link |
00:53:55.880
Yeah, learning to walk is fascinating.
link |
00:53:58.120
I talked quite a bit to Russ Tedrake, who's at MIT.
link |
00:54:01.360
There's a community of folks
link |
00:54:03.400
of roboticists who love the elegance
link |
00:54:06.600
and beauty of movement.
link |
00:54:09.720
And walking bipedal robotics is beautiful,
link |
00:54:17.480
but also exceptionally dangerous
link |
00:54:19.440
in the sense that like you're constantly falling essentially
link |
00:54:22.800
if you want to do elegant movement.
link |
00:54:25.320
And the discovery of that is,
link |
00:54:28.400
I mean, it's such a good example
link |
00:54:33.760
of how the discovery of a good solution
link |
00:54:37.440
sometimes requires a leap of faith and patience
link |
00:54:39.720
and all those kinds of things.
link |
00:54:41.440
I wonder what other spaces
link |
00:54:43.080
you have to discover those kinds of things in.
link |
00:54:46.280
Yeah, another interesting direction
link |
00:54:48.840
is learning for virtual creatures, learning to walk.
link |
00:54:53.840
We did a study in simulation, obviously,
link |
00:54:57.640
that you create those creatures,
link |
00:55:00.280
not just their controller, but also their body.
link |
00:55:02.920
So you have cylinders, you have muscles,
link |
00:55:05.600
you have joints and sensors,
link |
00:55:08.840
and you're creating creatures that look quite different.
link |
00:55:11.680
Some of them have multiple legs.
link |
00:55:13.080
Some of them have no legs at all.
link |
00:55:15.280
And then the goal was to get them to move, to walk, to run.
link |
00:55:19.560
And what was interesting is that
link |
00:55:22.040
when you evolve the controller together with the body,
link |
00:55:26.200
you get movements that look natural
link |
00:55:28.360
because they're optimized for that physical setup.
link |
00:55:31.440
And these creatures, you start believing them
link |
00:55:33.960
that they're alive because they walk in a way
link |
00:55:35.880
that you would expect somebody
link |
00:55:37.400
with that kind of a setup to walk.
link |
00:55:39.600
Yeah, there's something subjective also about that, right?
link |
00:55:43.520
I've been thinking a lot about that,
link |
00:55:45.000
especially in the human robot interaction context.
link |
00:55:50.000
You know, I mentioned Spot, the Boston Dynamics robot.
link |
00:55:55.320
There is something about human robot communication.
link |
00:55:58.480
Let's say, let's put it in another context,
link |
00:56:00.560
something about human and dog context,
link |
00:56:05.560
like a living dog,
link |
00:56:07.400
where there's a dance of communication.
link |
00:56:10.480
First of all, the eyes, you both look at the same thing
link |
00:56:12.760
and the dogs communicate with their eyes as well.
link |
00:56:15.240
Like if you're a human,
link |
00:56:18.480
if you and a dog want to deal with a particular object,
link |
00:56:24.600
you will look at the person,
link |
00:56:26.240
the dog will look at you and then look at the object
link |
00:56:28.120
and look back at you, all those kinds of things.
link |
00:56:30.360
But there's also just the elegance of movement.
link |
00:56:33.280
I mean, there's the, of course, the tail
link |
00:56:35.840
and all those kinds of mechanisms of communication
link |
00:56:38.080
and it all seems natural and often joyful.
link |
00:56:41.920
And for robots to communicate that,
link |
00:56:45.200
it's really difficult to figure that out
link |
00:56:47.240
because it almost seems impossible to hard code in.
link |
00:56:50.800
You can hard code it for demo purpose or something like that,
link |
00:56:54.960
but it's essentially choreographed.
link |
00:56:58.120
Like if you watch some of the Boston Dynamics videos
link |
00:57:00.280
where they're dancing,
link |
00:57:01.760
all of that is choreographed by human beings.
link |
00:57:05.640
But to learn how to, with your movement,
link |
00:57:09.360
demonstrate a naturalness and elegance, that's fascinating.
link |
00:57:14.400
Of course, in the physical space,
link |
00:57:15.720
that's very difficult to do, to learn at the kind of scale
link |
00:57:18.960
that you're referring to,
link |
00:57:20.080
but the hope is that you could do that in simulation
link |
00:57:23.080
and then transfer it into the physical space
link |
00:57:25.360
if you're able to model the robot sufficiently naturally.
link |
00:57:28.680
Yeah, and sometimes I think that that requires
link |
00:57:31.680
a theory of mind on the side of the robot
link |
00:57:35.000
that they understand what you're doing
link |
00:57:38.920
because they themselves are doing something similar.
link |
00:57:41.440
And that's a big question too.
link |
00:57:44.360
We talked about intelligence in general
link |
00:57:47.400
and the social aspect of intelligence.
link |
00:57:50.040
And I think that's what is required
link |
00:57:52.040
that we humans understand other humans
link |
00:57:53.840
because we assume that they are similar to us.
link |
00:57:57.040
We have one simulation we did a while ago.
link |
00:57:59.120
Ken Stanley did that.
link |
00:58:01.440
Two robots that were competing in simulation, like I said,
link |
00:58:06.600
they were foraging for food to gain energy.
link |
00:58:09.320
And then when they were really strong,
link |
00:58:10.680
they would bounce into the other robot
link |
00:58:12.680
and win if they were stronger.
link |
00:58:14.880
And we watched evolution discover
link |
00:58:17.320
more and more complex behaviors.
link |
00:58:18.920
They first went to the nearest food
link |
00:58:21.040
and then they started to plot a trajectory
link |
00:58:24.320
so they get more, but then they started to pay attention
link |
00:58:28.440
what the other robot was doing.
link |
00:58:30.280
And in the end, there was a behavior
link |
00:58:32.720
where one of the robots, the most sophisticated one,
link |
00:58:37.640
sensed where the food pieces were
link |
00:58:40.200
and identified that the other robot
link |
00:58:42.080
was close to two of them, a very far distance away,
link |
00:58:46.000
and there was one more piece of food nearby.
link |
00:58:48.720
So it faked, now I'm using anthropomorphizing terms,
link |
00:58:53.380
but it made a move towards those other pieces
link |
00:58:55.880
in order for the other robot to actually go and get them
link |
00:58:59.080
because it knew that the last remaining piece of food
link |
00:59:02.400
was close and the other robot would have to travel
link |
00:59:04.980
a long way, lose its energy
link |
00:59:06.960
and then lose the whole competition.
link |
00:59:10.440
So there was like emergence of something
link |
00:59:12.680
like a theory of mind,
link |
00:59:13.640
knowing what the other robot would do,
link |
00:59:16.640
to guide it towards bad behavior in order to win.
link |
00:59:19.440
So we can get things like that happen in simulation as well.
link |
00:59:22.960
But that's a complete natural emergence
link |
00:59:25.280
of a theory of mind.
link |
00:59:26.120
But I feel like if you add a little bit of a place
link |
00:59:30.120
for a theory of mind to emerge more easily,
link |
00:59:34.400
then you can go really far.
link |
00:59:37.160
I mean, some of these things with evolution, you know,
link |
00:59:41.240
you add a little bit of design in there, it'll really help.
link |
00:59:45.480
And I tend to think that a very simple theory of mind
link |
00:59:50.780
will go a really long way for cooperation between agents
link |
00:59:54.880
and certainly for human robot interaction.
link |
00:59:57.520
Like it doesn't have to be super complicated.
link |
01:00:01.120
I've gotten a chance in the autonomous vehicle space
link |
01:00:03.520
to watch vehicles interact with pedestrians
link |
01:00:07.040
or pedestrians interacting with vehicles in general.
link |
01:00:09.920
I mean, you would think that there's a very complicated
link |
01:00:13.000
theory of mind thing going on, but I have a sense,
link |
01:00:15.760
it's not well understood yet,
link |
01:00:17.000
but I have a sense it's pretty dumb.
link |
01:00:19.480
Like it's pretty simple.
link |
01:00:22.320
There's a social contract there between humans,
link |
01:00:25.560
a human driver and a human crossing the road
link |
01:00:28.180
where the human crossing the road trusts
link |
01:00:32.000
that the human in the car is not going to murder them.
link |
01:00:34.600
And there's something about, again,
link |
01:00:36.360
back to that mortality thing.
link |
01:00:38.240
There's some dance of ethics and morality that's built in,
link |
01:00:45.640
that you're mapping your own morality
link |
01:00:47.600
onto the person in the car.
link |
01:00:50.040
And even if they're driving at a speed where you think
link |
01:00:54.080
if they don't stop, they're going to kill you,
link |
01:00:56.200
you trust that if you step in front of them,
link |
01:00:58.160
they're going to hit the brakes.
link |
01:00:59.440
And there's that weird dance that we do
link |
01:01:02.200
that I think is a pretty simple model,
link |
01:01:04.680
but of course it's very difficult to introspect what it is.
link |
01:01:08.480
And autonomous robots in the human robot interaction
link |
01:01:11.560
context have to build that.
link |
01:01:13.800
Current robots are much less than what you're describing.
link |
01:01:17.320
They're currently just afraid of everything.
link |
01:01:19.360
They're more, they're not the kind that fall
link |
01:01:22.560
and discover how to run.
link |
01:01:24.080
They're more like, please don't touch anything.
link |
01:01:26.800
Don't hurt anything.
link |
01:01:28.120
Stay as far away from humans as possible.
link |
01:01:30.200
Treat humans as ballistic objects
link |
01:01:34.840
that you surround with a large spatial envelope
link |
01:01:38.760
and make sure you do not collide with.
link |
01:01:40.800
That's how, like you mentioned,
link |
01:01:42.000
Elon Musk thinks about autonomous vehicles.
link |
01:01:45.360
I tend to think autonomous vehicles need to have
link |
01:01:48.100
a beautiful dance between human and machine,
link |
01:01:50.680
where it's not just the collision avoidance problem,
link |
01:01:53.320
but a weird dance.
link |
01:01:55.920
Yeah, I think these systems need to be able to predict
link |
01:02:00.000
what will happen, what the other agent is going to do,
link |
01:02:02.320
and then have a structure of what the goals are
link |
01:02:06.440
and whether those predictions actually meet the goals.
link |
01:02:08.440
And you can go probably pretty far
link |
01:02:10.860
with that relatively simple setup already,
link |
01:02:13.600
but to call it a theory of mind, I don't think you need to.
link |
01:02:16.200
I mean, it doesn't matter whether the pedestrian
link |
01:02:18.360
has a mind, it's an object,
link |
01:02:20.080
and we can predict what it will do.
link |
01:02:21.840
And then we can predict what the states will be
link |
01:02:23.720
in the future and whether they are desirable states.
link |
01:02:26.180
Stay away from those that are undesirable
link |
01:02:27.960
and go towards those that are desirable.
link |
01:02:29.720
So it's a relatively simple functional approach to that.
link |
01:02:34.520
Where do we really need the theory of mind?
link |
01:02:37.920
Maybe when you start interacting
link |
01:02:40.940
and you're trying to get the other agent to do something
link |
01:02:44.160
and jointly, so that you can jointly,
link |
01:02:46.480
collaboratively achieve something,
link |
01:02:48.380
then it becomes more complex.
link |
01:02:50.560
Well, I mean, even with the pedestrians,
link |
01:02:51.880
you have to have a sense of where their attention,
link |
01:02:54.780
actual attention in terms of their gaze is,
link |
01:02:57.840
but also there's this vision science,
link |
01:03:00.480
people talk about this all the time.
link |
01:03:01.600
Just because I'm looking at it
link |
01:03:02.800
doesn't mean I'm paying attention to it.
link |
01:03:04.680
So figuring out what is the person looking at?
link |
01:03:07.400
What is the sensory information they've taken in?
link |
01:03:09.840
And where the theory of mind piece comes in is
link |
01:03:12.500
what are they actually attending to cognitively?
link |
01:03:16.480
And also what are they thinking about?
link |
01:03:19.000
Like what is the computation they're performing?
link |
01:03:21.200
And you have probably maybe a few options
link |
01:03:24.280
for the pedestrian crossing.
link |
01:03:28.280
It doesn't have to be,
link |
01:03:29.280
it's like a variable with a few discrete states,
link |
01:03:31.800
but you have to have a good estimation
link |
01:03:33.320
which of the states that brain is in
link |
01:03:35.520
for the pedestrian case.
link |
01:03:36.640
And the same is for attending with a robot.
link |
01:03:39.280
If you're collaborating to pick up an object,
link |
01:03:42.000
you have to figure out is the human,
link |
01:03:44.740
like there's a few discrete states
link |
01:03:47.640
that the human could be in.
link |
01:03:48.600
You have to predict that by observing the human.
link |
01:03:52.120
And that seems like a machine learning problem
link |
01:03:54.000
to figure out what's the human up to.
link |
01:03:59.280
It's not as simple as sort of planning
link |
01:04:02.160
just because they move their arm
link |
01:04:03.920
means the arm will continue moving in this direction.
link |
01:04:06.840
You have to really have a model
link |
01:04:08.560
of what they're thinking about
link |
01:04:09.880
and what's the motivation behind the movement of the arm.
link |
01:04:12.520
Here we are talking about relatively simple physical actions,
link |
01:04:16.560
but you can take that to higher levels also,
link |
01:04:19.280
like to predict what the people are going to do,
link |
01:04:21.760
you need to know what their goals are.
link |
01:04:26.080
What are they trying to, are they exercising?
link |
01:04:27.980
Are they just trying to get somewhere?
link |
01:04:29.440
But even higher level, I mean,
link |
01:04:30.880
you are predicting what people will do in their career,
link |
01:04:33.920
what their life themes are.
link |
01:04:35.120
Do they want to be famous, rich, or do good?
link |
01:04:37.800
And that takes a lot more information,
link |
01:04:40.600
but it allows you to then predict their actions,
link |
01:04:43.380
what choices they might make.
link |
01:04:45.720
So how does evolutionary computation apply
link |
01:04:49.200
to the world of neural networks?
link |
01:04:50.800
I've seen quite a bit of work from you and others
link |
01:04:53.440
in the world of neuroevolution.
link |
01:04:55.520
So maybe first, can you say, what is this field?
link |
01:04:58.600
Yeah, neuroevolution is a combination of neural networks
link |
01:05:02.880
and evolutionary computation in many different forms,
link |
01:05:05.460
but the early versions were simply using evolution
link |
01:05:11.840
as a way to construct a neural network
link |
01:05:13.920
instead of say, stochastic gradient descent
link |
01:05:17.200
or backpropagation.
link |
01:05:18.340
Because evolution can evolve these parameters,
link |
01:05:21.460
weight values in a neural network,
link |
01:05:22.980
just like any other string of numbers, you can do that.
link |
01:05:26.260
And that's useful because some cases you don't have
link |
01:05:29.700
those targets that you need to backpropagate from.
link |
01:05:33.780
And it might be an agent that's running a maze
link |
01:05:35.940
or a robot playing a game or something.
link |
01:05:38.780
You don't, again, you don't know what the right answers are,
link |
01:05:41.060
you don't have backprop,
link |
01:05:42.100
but this way you can still evolve a neural net.
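The point about evolving weights without backpropagation can be made concrete with a toy sketch. This is an illustrative assumption on my part, not code from the guest: a simple (1+1) evolution strategy, with a few random restarts, evolving the nine weights of a fixed 2-2-1 tanh network to fit XOR. Notice that the fitness function only scores behavior; no gradients or targets are ever backpropagated.

```python
import math
import random

def forward(w, x):
    # fixed 2-2-1 topology with tanh units; w is a flat list of 9 weights
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    # no gradients anywhere: we only score the network's behavior
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

best, best_fit = None, -float("inf")
for seed in range(8):                      # a few random restarts
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(9)]
    f = fitness(w)
    for _ in range(3000):
        # (1+1) evolution strategy: mutate, keep the child if it's no worse
        child = [wi + rng.gauss(0, 0.3) for wi in w]
        fc = fitness(child)
        if fc >= f:
            w, f = child, fc
    if f > best_fit:
        best, best_fit = w, f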
link |
01:05:44.820
And neural networks are really good at these tasks,
link |
01:05:47.460
because they recognize patterns
link |
01:05:49.900
and they generalize, interpolate between known situations.
link |
01:05:53.860
So you want to have a neural network in such a task,
link |
01:05:56.380
even if you don't have supervised targets.
link |
01:05:59.140
So that's a reason and that's a solution.
link |
01:06:01.180
And also more recently,
link |
01:06:02.580
now when we have all this deep learning literature,
link |
01:06:05.620
it turns out that we can use evolution
link |
01:06:07.500
to optimize many aspects of those designs.
link |
01:06:11.180
The deep learning architectures have become so complex
link |
01:06:14.980
that there's little hope for us little humans
link |
01:06:17.420
to understand their complexity
link |
01:06:18.780
and what actually makes a good design.
link |
01:06:21.380
And now we can use evolution to give that design for you.
link |
01:06:24.500
And it might mean optimizing hyperparameters,
link |
01:06:28.380
like the depth of layers and so on,
link |
01:06:30.660
or the topology of the network,
link |
01:06:33.340
how many layers, how they're connected,
link |
01:06:35.260
but also other aspects like what activation functions
link |
01:06:37.580
you use where in the network during the learning process,
link |
01:06:40.620
or what loss function you use,
link |
01:06:42.420
you could generalize that.
link |
01:06:43.740
You could generate that, even data augmentation,
link |
01:06:47.580
all the different aspects of the design
link |
01:06:49.940
of deep learning experiments could be optimized that way.
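A hedged sketch of what evolving those design aspects can look like in practice. Everything here is a stand-in assumption: the search space is made up, and `evaluate` is a synthetic surrogate where a real experiment would train a network and return its validation score. The loop itself, truncation selection plus single-gene mutation over hyperparameter choices, is the part being illustrated.

```python
import math
import random

# Hypothetical search space; every choice here is illustrative.
SPACE = {
    "layers": [2, 4, 8, 16],
    "width": [32, 64, 128, 256],
    "lr": [1e-1, 1e-2, 1e-3, 1e-4],
    "activation": ["relu", "tanh", "elu"],
}

def evaluate(cfg):
    # Stand-in for "train the network and return validation score".
    # This made-up surrogate peaks at 8 layers, width 128, lr 1e-3, relu.
    score = -abs(cfg["layers"] - 8) / 4 - abs(cfg["width"] - 128) / 64
    score -= abs(math.log10(cfg["lr"]) + 3)
    return score + (0.5 if cfg["activation"] == "relu" else 0.0)

def mutate(cfg, rng):
    # re-sample one randomly chosen hyperparameter
    key = rng.choice(list(SPACE))
    out = dict(cfg)
    out[key] = rng.choice(SPACE[key])
    return out

rng = random.Random(0)
pop = [{k: rng.choice(v) for k, v in SPACE.items()} for _ in range(16)]
for _ in range(30):
    pop.sort(key=evaluate, reverse=True)
    elites = pop[:8]                         # truncation selection
    pop = elites + [mutate(rng.choice(elites), rng) for _ in range(8)]
best = max(pop, key=evaluate)
```

Because real evaluations are expensive, as discussed later in the conversation, much of the research effort goes into making each `evaluate` call cheaper, for example with partial training or learned surrogates.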
link |
01:06:53.740
So that's an interaction between two mechanisms.
link |
01:06:56.940
But there's also, when we get more into cognitive science
link |
01:07:00.780
and the topics that we've been talking about,
link |
01:07:02.540
you could have learning mechanisms
link |
01:07:04.300
at two different timescales.
link |
01:07:06.140
So you do have an evolution
link |
01:07:07.900
that gives you baby neural networks
link |
01:07:10.580
that then learn during their lifetime.
link |
01:07:12.860
And you have this interaction of two timescales.
link |
01:07:15.900
And I think that can potentially be really powerful.
link |
01:07:19.340
Now, in biology, we are not born with all our faculties.
link |
01:07:23.420
We have to learn, we have a developmental period.
link |
01:07:25.380
In humans, it's really long, and most animals have something similar.
link |
01:07:29.300
And probably the reason is that evolution of DNA
link |
01:07:32.700
is not detailed enough or plentiful enough to describe them.
link |
01:07:36.660
We can describe how to set the brain up,
link |
01:07:38.780
but evolution can decide on a starting point
link |
01:07:44.300
and then have a learning algorithm
link |
01:07:46.140
that will construct the final product.
link |
01:07:48.900
And this interaction of intelligent, well,
link |
01:07:54.140
evolution that has produced a good starting point
link |
01:07:56.660
for the specific purpose of learning from it
link |
01:07:59.740
with the interaction with the environment,
link |
01:08:02.220
that can be a really powerful mechanism
link |
01:08:03.660
for constructing brains and constructing behaviors.
link |
01:08:06.980
I like how you walk back from intelligence.
link |
01:08:10.060
So optimize starting point, maybe.
link |
01:08:12.380
Yeah, okay, there's a lot of fascinating things to ask here.
link |
01:08:18.540
And this is basically this dance between neural networks
link |
01:08:22.100
and evolutionary computation
link |
01:08:23.420
could go into the category of automated machine learning
link |
01:08:26.260
to where you're optimizing,
link |
01:08:28.860
whether it's hyperparameters of the topology
link |
01:08:31.020
or hyperparameters taken broadly.
link |
01:08:34.420
But the topology thing is really interesting.
link |
01:08:36.380
I mean, that's not really done that effectively
link |
01:08:40.260
throughout the history of machine learning,
link |
01:08:41.900
it has not been done much.
link |
01:08:43.300
Usually there's a fixed architecture.
link |
01:08:45.020
Maybe there's a few components you're playing with,
link |
01:08:47.300
but to grow a neural network, essentially,
link |
01:08:50.140
the way you grow an organism, is a really fascinating space.
link |
01:08:52.940
How hard is it, do you think, to grow a neural network?
link |
01:08:58.060
And maybe what kind of neural networks
link |
01:09:00.860
are more amenable to this kind of idea than others?
link |
01:09:04.700
I've seen quite a bit of work on recurrent neural networks.
link |
01:09:06.980
Is there some architectures that are friendlier than others?
link |
01:09:10.940
And is this just a fun, small scale set of experiments
link |
01:09:15.300
or do you have hope that we can be able to grow
link |
01:09:18.780
powerful neural networks?
link |
01:09:20.300
I think we can.
link |
01:09:21.780
And most of the work up to now
link |
01:09:24.820
is taking architectures that already exist
link |
01:09:27.060
that humans have designed and try to optimize them further.
link |
01:09:30.900
And you can totally do that.
link |
01:09:32.860
A few years ago, we did an experiment.
link |
01:09:34.260
We took a winner of the image captioning competition
link |
01:09:39.260
and the architecture and just broke it into pieces
link |
01:09:42.620
and took the pieces.
link |
01:09:43.740
And that was our search space.
link |
01:09:45.500
See if you can do better.
link |
01:09:46.700
And we indeed could, 15% better performance
link |
01:09:49.300
by just searching around the network design
link |
01:09:52.740
that humans had come up with,
link |
01:09:53.980
Oriol Vinyals and others.
link |
01:09:56.300
So, but that's starting from a point
link |
01:09:59.220
that humans have produced,
link |
01:10:00.820
but we could do something more general.
link |
01:10:03.500
It doesn't have to be that kind of network.
link |
01:10:05.820
The hard part is, there are a couple of challenges.
link |
01:10:08.820
One of them is to define the search space.
link |
01:10:10.740
What are your elements and how do you put them together?
link |
01:10:14.620
And the space is just really, really big.
link |
01:10:18.900
So you have to somehow constrain it
link |
01:10:21.020
and have some hunch what will work
link |
01:10:23.340
because otherwise everything is possible.
link |
01:10:25.380
And another challenge is that in order to evaluate
link |
01:10:28.540
how good your design is, you have to train it.
link |
01:10:32.260
I mean, you have to actually try it out.
link |
01:10:34.980
And that's currently very expensive, right?
link |
01:10:37.260
I mean, deep learning networks may take days to train
link |
01:10:40.380
while imagine having a population of a hundred
link |
01:10:42.260
and having to run it for a hundred generations.
link |
01:10:44.660
It's not yet quite feasible computationally.
link |
01:10:48.020
It will be, but also there's a large carbon footprint
link |
01:10:51.620
and all that.
link |
01:10:52.460
I mean, we are using a lot of computation for doing it.
link |
01:10:54.300
So intelligent methods and intelligent,
link |
01:10:57.540
I mean, we have to do some science
link |
01:11:00.580
in order to figure out what the right representations are
link |
01:11:03.580
and right operators are, and how do we evaluate them
link |
01:11:07.300
without having to fully train them.
link |
01:11:09.180
And that is where the current research is
link |
01:11:11.380
and we're making progress on all those fronts.
link |
01:11:14.460
So yes, there are certain architectures
link |
01:11:17.860
that are more amenable to that approach,
link |
01:11:20.940
but also I think we can create our own architecture
link |
01:11:23.580
and all representations that are even better at that.
link |
01:11:26.300
And do you think it's possible to do like a tiny baby network
link |
01:11:30.180
that grows into something that can do state of the art
link |
01:11:32.700
on like even the simple data set like MNIST,
link |
01:11:35.380
and just like it just grows into a gigantic monster
link |
01:11:39.900
that's the world's greatest handwriting recognition system?
link |
01:11:42.460
Yeah, there are approaches like that.
link |
01:11:44.340
Esteban Real and Quoc Le, for instance,
link |
01:11:45.980
worked on evolving a smaller network
link |
01:11:48.500
and then systematically expanding it to a larger one.
link |
01:11:51.940
Your elements are already there and scaling it up
link |
01:11:54.980
will just give you more power.
link |
01:11:56.500
So again, evolution gives you that starting point
link |
01:11:59.340
and then there's a mechanism that gives you the final result
link |
01:12:02.820
and a very powerful approach.
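The "evolve small, then expand" idea can be sketched concretely, in the spirit of approaches like Net2Net; the details here are illustrative assumptions, not the specific method discussed. A hidden layer is widened by duplicating neurons and splitting their outgoing weights, so the bigger network starts out computing exactly the same function.

```python
import numpy as np

rng = np.random.default_rng(0)

def widen(W1, W2, new_width):
    """Widen the hidden layer from W1.shape[1] to new_width neurons,
    keeping the network's input->output function unchanged."""
    old_width = W1.shape[1]
    # Keep all original neurons, then pick existing ones to copy.
    idx = np.concatenate([np.arange(old_width),
                          rng.integers(0, old_width, new_width - old_width)])
    W1_new = W1[:, idx]                          # duplicate incoming weights
    counts = np.bincount(idx, minlength=old_width)
    W2_new = W2[idx, :] / counts[idx][:, None]   # split outgoing weights
    return W1_new, W2_new

W1 = rng.standard_normal((4, 3))   # input -> hidden
W2 = rng.standard_normal((3, 2))   # hidden -> output
x = rng.standard_normal((5, 4))
W1_big, W2_big = widen(W1, W2, 6)

# The widened network computes the same outputs (linear hidden layer here;
# the same trick works with ReLU, since duplicates share activations).
assert np.allclose(x @ W1 @ W2, (x @ W1_big) @ W2_big)
```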
link |
01:12:05.980
But you could also simulate the actual growth process.
link |
01:12:12.660
And like I said before, evolving a starting point
link |
01:12:15.340
and then evolving or training the network,
link |
01:12:18.420
there's not that much work that's been done on that yet.
link |
01:12:21.980
We need some kind of a simulation environment
link |
01:12:24.660
so the agents can interact at will,
link |
01:12:27.420
the supervised environment doesn't really,
link |
01:12:29.540
it's not as easily usable here.
link |
01:12:33.060
Sorry, the interaction between neural networks?
link |
01:12:35.580
Yeah, the neural networks that you're creating,
link |
01:12:37.300
interacting with the world
link |
01:12:39.020
and learning from these sequences of interactions,
link |
01:12:43.060
perhaps communication with others.
link |
01:12:46.900
That's awesome.
link |
01:12:47.740
We would like to get there,
link |
01:12:48.900
but just the task of simulating something
link |
01:12:51.620
at that level is very hard.
link |
01:12:53.260
It's very difficult.
link |
01:12:54.100
I love the idea.
link |
01:12:55.420
I mean, one of the powerful things about evolution
link |
01:12:58.220
on Earth is that predators and prey emerged.
link |
01:13:01.300
And like there's just like,
link |
01:13:03.540
there's bigger fish and smaller fish
link |
01:13:05.340
and it's fascinating to think
link |
01:13:07.100
that you could have neural networks competing
link |
01:13:08.900
against each other, and one neural network
link |
01:13:10.340
being able to destroy another one.
link |
01:13:12.260
There's like wars of neural networks competing
link |
01:13:14.860
to solve the MNIST problem, I don't know.
link |
01:13:16.820
Yeah, yeah.
link |
01:13:17.900
Oh, totally, yeah, yeah, yeah.
link |
01:13:19.260
And we actually simulated that also, predator and prey,
link |
01:13:22.700
and it was interesting what happened there,
link |
01:13:25.220
Padmini Rajagopalan did this
link |
01:13:26.900
with Kay Holekamp, who is a zoologist.
link |
01:13:29.580
So we had, again,
link |
01:13:33.940
we had simulated hyenas, simulated zebras.
link |
01:13:37.420
Nice.
link |
01:13:38.260
And initially, the hyenas just tried to hunt them
link |
01:13:42.860
and when they actually stumbled upon the zebra,
link |
01:13:45.340
they ate it and were happy.
link |
01:13:47.700
And then the zebras learned to escape
link |
01:13:51.540
and the hyenas learned to team up.
link |
01:13:54.300
And actually two of them approached
link |
01:13:55.700
in different directions.
link |
01:13:56.900
And now the zebras, their next step,
link |
01:13:59.020
they generated a behavior where they split
link |
01:14:02.820
in different directions,
link |
01:14:03.900
just like actually gazelles do
link |
01:14:07.380
when they are being hunted.
link |
01:14:08.420
They confuse the predator
link |
01:14:09.620
by going in different directions.
link |
01:14:10.940
That emerged and then more hyenas joined
link |
01:14:14.380
and kind of circled them.
link |
01:14:16.540
And then when they circled them,
link |
01:14:18.820
they could actually herd the zebras together
link |
01:14:21.060
and eat multiple zebras.
link |
01:14:23.540
So there was like an arms race of predators and prey.
link |
01:14:28.340
And they gradually developed more complex behaviors,
link |
01:14:31.020
some of which we actually do see in nature.
link |
01:14:33.860
And this kind of coevolution,
link |
01:14:36.820
that's competitive coevolution,
link |
01:14:38.060
it's a fascinating topic
link |
01:14:39.580
because there's a promise or possibility
link |
01:14:42.900
that you will discover something new
link |
01:14:45.540
that you don't already know.
link |
01:14:46.460
You didn't build it in.
link |
01:14:48.100
It came from this arms race.
link |
01:14:50.700
It's hard to keep the arms race going.
link |
01:14:52.500
It's hard to have rich enough simulation
link |
01:14:55.300
that supports all of these complex behaviors.
link |
01:14:58.260
But at least for several steps,
link |
01:15:00.020
we've already seen it in this predator prey scenario, yeah.
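A toy sketch of the competitive coevolution being described, with details that are assumptions rather than the hyena-zebra simulation itself: predators try to match the prey's escape direction, prey try to avoid being matched, and each population's fitness is defined only relative to the other, which is what drives the arms race.

```python
import random

random.seed(1)

def predator_fitness(pred, prey_pop):
    # A predator scores well when it is close to many prey directions.
    return -sum(abs(pred - q) for q in prey_pop) / len(prey_pop)

def prey_fitness(prey, pred_pop):
    # A prey scores well when it is far from many predator directions.
    return sum(abs(prey - p) for p in pred_pop) / len(pred_pop)

def step(pop, fit, other, sigma=0.1):
    # Rank against the other population, keep elites, mutate children.
    scored = sorted(pop, key=lambda x: fit(x, other), reverse=True)
    elites = scored[: len(pop) // 2]
    children = [min(1.0, max(0.0, random.choice(elites) + random.gauss(0, sigma)))
                for _ in range(len(pop) - len(elites))]
    return elites + children

predators = [random.random() for _ in range(20)]
prey = [random.random() for _ in range(20)]
for _ in range(50):
    # Evaluate each population against a frozen copy of the other,
    # then update both: a minimal arms-race loop.
    new_predators = step(predators, predator_fitness, prey)
    new_prey = step(prey, prey_fitness, predators)
    predators, prey = new_predators, new_prey
```

Keeping such an arms race going, as Risto notes, is the hard part: in a space this simple the populations quickly hit the boundaries, whereas richer simulations leave room for genuinely new behaviors.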
link |
01:15:03.580
First of all, it's fascinating to think about this context
link |
01:15:06.260
in terms of evolving architectures.
link |
01:15:09.580
So I've studied Tesla autopilot for a long time.
link |
01:15:12.700
It's one particular implementation of an AI system
link |
01:15:17.540
that's operating in the real world.
link |
01:15:18.820
I find it fascinating because of the scale
link |
01:15:20.940
at which it's used out in the real world.
link |
01:15:23.340
And I'm not sure if you're familiar with that system much,
link |
01:15:26.220
but, you know, Andrej Karpathy leads that team
link |
01:15:28.540
on the machine learning side.
link |
01:15:30.060
And there's a multitask network, multiheaded network,
link |
01:15:34.900
where there's a core, but it's trained on particular tasks.
link |
01:15:38.900
And there's a bunch of different heads
link |
01:15:40.260
that are trained on that.
link |
01:15:41.740
Is there some lessons from evolutionary computation
link |
01:15:46.260
or neuroevolution that could be applied
link |
01:15:48.340
to this kind of multiheaded beast
link |
01:15:50.940
that's operating in the real world?
link |
01:15:52.460
Yes, it's a very good problem for neuroevolution.
link |
01:15:56.580
And the reason is that when you have multiple tasks,
link |
01:16:00.660
they support each other.
link |
01:16:02.860
So let's say you're learning to classify X-ray images
link |
01:16:08.020
to different pathologies.
link |
01:16:09.500
So you have one task is to classify this disease
link |
01:16:13.820
and another one, this disease, another one, this one.
link |
01:16:15.900
And when you're learning from one disease,
link |
01:16:19.300
that forces certain kinds of internal representations
link |
01:16:21.620
and embeddings, and they can serve
link |
01:16:24.820
as a helpful starting point for the other tasks.
link |
01:16:27.580
So you are combining the wisdom of multiple tasks
link |
01:16:30.940
into these representations.
link |
01:16:32.380
And it turns out that you can do better
link |
01:16:34.300
in each of these tasks
link |
01:16:35.860
when you are learning simultaneously other tasks
link |
01:16:38.060
than you would by one task alone.
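The shared-representation idea here can be sketched as a single trunk with per-task heads; the shapes and task names below are hypothetical, not anyone's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Shared trunk: raw input -> common embedding used by every task.
W_trunk = rng.standard_normal((32, 64)) * 0.1

# Per-task heads: each task specializes the shared embedding. Training
# any one head shapes the trunk in ways that can help the others.
heads = {
    "disease_a": rng.standard_normal((64, 2)) * 0.1,  # binary classification
    "disease_b": rng.standard_normal((64, 2)) * 0.1,
    "severity":  rng.standard_normal((64, 1)) * 0.1,  # regression head
}

def forward(x):
    embedding = relu(x @ W_trunk)   # computed once, shared by all heads
    return {name: embedding @ W for name, W in heads.items()}

x = rng.standard_normal((8, 32))    # a batch of 8 inputs
outputs = forward(x)
```

Where neuroevolution comes in, as described next, is deciding where the tasks share and where they split, rather than fixing that architecture by hand.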
link |
01:16:39.820
Which is a fascinating idea in itself, yeah.
link |
01:16:41.700
Yes, and people do that all the time.
link |
01:16:43.820
I mean, you use knowledge of domains that you know
link |
01:16:46.020
in new domains, and certainly neural network can do that.
link |
01:16:49.700
When neuroevolution comes in is that,
link |
01:16:52.300
what's the best way to combine these tasks?
link |
01:16:55.140
Now there are architectural designs that allow you to decide
link |
01:16:58.140
where and how the embeddings,
link |
01:17:01.420
the internal representations are combined
link |
01:17:03.300
and how much you combine them.
link |
01:17:05.980
And there's quite a bit of research on that.
link |
01:17:08.020
And my team, Elliot Meyerson has worked on that
link |
01:17:11.380
in particular, like what is a good internal representation
link |
01:17:14.860
that supports multiple tasks?
link |
01:17:17.140
And we're getting to understand how that's constructed
link |
01:17:20.620
and what's in it, so that it is in a space
link |
01:17:24.100
that supports multiple different heads, like you said.
link |
01:17:28.260
And that I think is fundamentally
link |
01:17:31.780
how biological intelligence works as well.
link |
01:17:34.380
You don't build a representation just for one task.
link |
01:17:38.020
You try to build something that's general,
link |
01:17:40.100
not only so that you can do better in one task
link |
01:17:42.740
or multiple tasks, but also future tasks
link |
01:17:45.060
and future challenges.
link |
01:17:46.380
So you learn the structure of the world
link |
01:17:50.180
and that helps you in all kinds of future challenges.
link |
01:17:54.020
And so you're trying to design a representation
link |
01:17:56.100
that will support an arbitrary set of tasks
link |
01:17:58.420
in a particular sort of class of problem.
link |
01:18:01.020
Yeah, and also it turns out,
link |
01:18:03.100
and that's again, a surprise that Elliot found
link |
01:18:05.980
was that those tasks don't have to be very related.
link |
01:18:10.460
You know, you can learn to do better vision
link |
01:18:12.420
by learning language or better language
link |
01:18:15.340
by learning about DNA structure.
link |
01:18:17.900
Now, somehow the world...
link |
01:18:20.020
Yeah, it rhymes.
link |
01:18:23.700
The world rhymes, even if it's very disparate fields.
link |
01:18:29.220
I mean, on that small topic, let me ask you,
link |
01:18:31.420
because you've also, on the computational neuroscience side,
link |
01:18:36.260
you worked on both language and vision.
link |
01:18:41.340
What's the connection between the two?
link |
01:18:44.460
What's more, maybe there's a bunch of ways to ask this,
link |
01:18:46.900
but what's more difficult to build
link |
01:18:48.620
from an engineering perspective
link |
01:18:50.620
and evolutionary perspective,
link |
01:18:52.380
the human language system or the human vision system
link |
01:18:56.100
or the equivalent in the AI space, language and vision,
link |
01:19:00.620
or is it, as the multitask idea
link |
01:19:03.660
that you're speaking to
link |
01:19:04.700
that they need to be deeply integrated?
link |
01:19:07.420
Yeah, absolutely the latter.
link |
01:19:09.980
Learning both at the same time,
link |
01:19:11.620
I think is a fascinating direction in the future.
link |
01:19:15.180
So we have data sets where there's visual component
link |
01:19:17.500
as well as verbal descriptions, for instance,
link |
01:19:20.020
and that way you can learn a deeper representation,
link |
01:19:22.740
a more useful representation for both.
link |
01:19:25.140
But it's still an interesting question
link |
01:19:26.620
of which one is easier.
link |
01:19:29.460
I mean, recognizing objects
link |
01:19:31.140
or even understanding sentences, that's relatively possible,
link |
01:19:35.780
but where it becomes, where the challenges are
link |
01:19:37.860
is to understand the world.
link |
01:19:39.820
Like the visual world, the 3D,
link |
01:19:42.300
what are the objects doing
link |
01:19:43.580
and predicting what will happen, the relationships.
link |
01:19:46.740
That's what makes vision difficult.
link |
01:19:48.180
And language, obviously it's what is being said,
link |
01:19:51.500
what the meaning is.
link |
01:19:52.700
And the meaning doesn't stop at who did what to whom.
link |
01:19:57.300
There are goals and plans and themes,
link |
01:19:59.740
and you eventually have to understand
link |
01:20:01.700
the entire human society and history
link |
01:20:04.700
in order to understand a sentence fully.
link |
01:20:07.580
There are plenty of examples of those kinds
link |
01:20:09.940
of short sentences where you have to bring in
link |
01:20:11.500
all the world knowledge to understand it.
link |
01:20:14.300
And that's the big challenge.
link |
01:20:15.900
Now we are far from that,
link |
01:20:17.300
but even just bringing in the visual world
link |
01:20:20.620
together with the sentence will give you already
link |
01:20:24.100
a lot deeper understanding of what's happening.
link |
01:20:26.860
And I think that that's where we're going very soon.
link |
01:20:29.700
I mean, we've had ImageNet for a long time,
link |
01:20:32.980
and now we have all these text collections,
link |
01:20:36.020
but having both together and then learning
link |
01:20:40.020
a semantic understanding of what is happening,
link |
01:20:42.740
I think that that will be the next step
link |
01:20:44.540
in the next few years.
link |
01:20:45.380
Yeah, you're starting to see that
link |
01:20:46.340
with all the work with Transformers,
link |
01:20:47.980
with the community, the AI community
link |
01:20:50.820
starting to dip their toe into this idea
link |
01:20:53.340
of having language models that are now doing stuff
link |
01:20:59.340
with images, with vision, and then connecting the two.
link |
01:21:03.940
I mean, right now it's like these little explorations
link |
01:21:05.900
we're literally dipping the toe in,
link |
01:21:07.780
but maybe at some point we'll just dive into the pool
link |
01:21:11.780
and it'll just be all seen as the same thing.
link |
01:21:13.860
I do still wonder what's more fundamental,
link |
01:21:16.860
whether vision is, whether we don't think
link |
01:21:21.380
about vision correctly.
link |
01:21:23.300
Maybe the fact, because we're humans
link |
01:21:24.700
and we see things as beautiful and so on,
link |
01:21:28.820
and because we have cameras that are taking pixels
link |
01:21:31.020
as a 2D image, that we don't sufficiently think
link |
01:21:35.820
about vision as language.
link |
01:21:38.820
Maybe Chomsky is right all along,
link |
01:21:41.700
that vision is fundamental to,
link |
01:21:43.820
sorry, that language is fundamental to everything,
link |
01:21:46.820
to even cognition, to even consciousness.
link |
01:21:49.340
The base layer is all language,
link |
01:21:51.420
not necessarily like English, but some weird
link |
01:21:54.940
abstract representation, linguistic representation.
link |
01:21:59.380
Yeah, well, earlier we talked about the social structures
link |
01:22:02.580
and that may be what's underlying the language,
link |
01:22:05.380
and that's the more fundamental part,
link |
01:22:06.700
and then language has been added on top of that.
link |
01:22:08.740
Language emerges from the social interaction.
link |
01:22:11.140
Yeah, that's a very good guess.
link |
01:22:13.900
We are visual animals, though.
link |
01:22:15.420
A lot of the brain is dedicated to vision,
link |
01:22:17.780
and also, when we think about various abstract concepts,
link |
01:22:22.740
we usually reduce that to vision and images,
link |
01:22:27.860
and that's, you know, we go to a whiteboard,
link |
01:22:29.740
you draw pictures of very abstract concepts.
link |
01:22:33.100
So we tend to resort to that quite a bit,
link |
01:22:35.860
and that's a fundamental representation.
link |
01:22:37.460
It's quite possible that it even predated language.
link |
01:22:41.740
I mean, animals, a lot of, they don't talk,
link |
01:22:43.900
but they certainly do have vision,
link |
01:22:45.820
and language is an interesting development
link |
01:22:49.820
from mastication, from eating.
link |
01:22:53.140
You develop an organ that actually can produce sound
link |
01:22:55.980
to manipulate them.
link |
01:22:58.140
Maybe that was an accident.
link |
01:22:59.220
Maybe that was something that was available
link |
01:23:00.900
and then allowed us to do the communication,
link |
01:23:05.020
or maybe it was gestures.
link |
01:23:06.820
Sign language could have been the original proto language.
link |
01:23:10.060
We don't quite know, but the language is more fundamental
link |
01:23:13.300
than the medium in which it's communicated,
link |
01:23:16.820
and I think that it comes from those representations.
link |
01:23:20.980
Now, in current world, they are so strongly integrated,
link |
01:23:26.100
it's really hard to say which one is fundamental.
link |
01:23:28.260
You look at the brain structures and even visual cortex,
link |
01:23:32.220
which is supposed to be very much just vision.
link |
01:23:34.580
Well, if you are thinking of semantic concepts,
link |
01:23:37.460
you're thinking of language, visual cortex lights up.
link |
01:23:40.940
It's still useful, even for language computations.
link |
01:23:44.500
So there are common structures underlying them.
link |
01:23:47.140
So utilize what you need.
link |
01:23:49.220
And when you are understanding a scene,
link |
01:23:51.460
you're understanding relationships.
link |
01:23:53.100
Well, that's not so far from understanding relationships
link |
01:23:55.340
between words and concepts.
link |
01:23:56.820
So I think that that's how they are integrated.
link |
01:23:59.100
Yeah, and there's dreams, and once we close our eyes,
link |
01:24:02.340
there's still a world in there somehow operating
link |
01:24:04.380
and somehow possibly the visual system somehow integrated
link |
01:24:08.460
into all of it.
link |
01:24:09.860
I tend to enjoy thinking about aliens
link |
01:24:12.940
and thinking about the sad thing to me
link |
01:24:17.340
about extraterrestrial intelligent life,
link |
01:24:21.020
that if it visited us here on Earth,
link |
01:24:24.780
or if we came upon it on Mars or maybe in another solar system,
link |
01:24:29.060
another galaxy one day,
link |
01:24:30.900
that us humans would not be able to detect it
link |
01:24:34.860
or communicate with it or appreciate,
link |
01:24:37.060
like it'd be right in front of our nose
link |
01:24:38.740
and we were too self obsessed to see it.
link |
01:24:43.340
Not self obsessed, but our tools,
link |
01:24:48.580
our frameworks of thinking would not detect it.
link |
01:24:52.500
There's a good movie, Arrival, and so on,
link |
01:24:55.060
where Stephen Wolfram and his son,
link |
01:24:56.700
I think were part of developing this alien language
link |
01:24:59.300
of how aliens would communicate with humans.
link |
01:25:01.540
Do you ever think about that kind of stuff
link |
01:25:02.900
where if humans and aliens would be able to communicate
link |
01:25:07.620
with each other, like if we met each other at some,
link |
01:25:11.420
okay, we could do SETI, which is communicating
link |
01:25:13.660
from across a very big distance,
link |
01:25:15.980
but also just us, if you did a podcast with an alien,
link |
01:25:22.140
do you think we'd be able to find a common language
link |
01:25:25.380
and a common methodology of communication?
link |
01:25:28.420
I think from a computational perspective,
link |
01:25:30.860
the way to ask that is you have very fundamentally
link |
01:25:33.380
different creatures, agents that are created,
link |
01:25:35.460
would they be able to find a common language?
link |
01:25:38.500
Yes, I do think about that.
link |
01:25:40.980
I mean, I think a lot of people who are in computing,
link |
01:25:42.980
they, and AI in particular, they got into it
link |
01:25:46.220
because they were fascinated with science fiction
link |
01:25:48.860
and all of these options.
link |
01:25:50.740
I mean, Star Trek generated all kinds of devices
link |
01:25:54.060
that we have now, they envisioned it first
link |
01:25:56.540
and it's a great motivator to think about things like that.
link |
01:26:00.700
And I, so one, and again, being a computational scientist
link |
01:26:06.340
and trying to build intelligent agents,
link |
01:26:10.260
what I would like to do is have a simulation
link |
01:26:13.500
where the agents actually evolve communication,
link |
01:26:17.380
not just communication, we've done that,
link |
01:26:18.860
people have done that many times,
link |
01:26:20.260
that they communicate, they signal and so on,
link |
01:26:22.860
but actually develop a language.
link |
01:26:24.940
And language means grammar, it means all these
link |
01:26:26.860
social structures and on top of that,
link |
01:26:28.540
grammatical structures.
link |
01:26:30.860
And we do it under various conditions
link |
01:26:35.020
and actually try to identify what conditions
link |
01:26:36.740
are necessary for it to come out.
link |
01:26:39.980
And then we can start asking that kind of questions.
link |
01:26:43.380
Are those languages that emerge
link |
01:26:45.380
in those different simulated environments,
link |
01:26:47.980
are they understandable to us?
link |
01:26:49.940
Can we somehow make a translation?
link |
01:26:52.700
We can make it a concrete question.
link |
01:26:55.180
So machine translation of evolved languages.
link |
01:26:58.980
And so like languages that evolve come up with,
link |
01:27:01.980
can we translate, like I have a Google translate
link |
01:27:04.940
for the evolved languages.
link |
01:27:07.140
Yes, and if we do that enough,
link |
01:27:09.740
we have perhaps an idea what an alien language
link |
01:27:14.060
might be like, the space of where those languages can be.
link |
01:27:17.180
Because we can set up their environment differently.
link |
01:27:19.940
It doesn't need to be gravity.
link |
01:27:22.020
You can have all kinds of, societies can be different.
link |
01:27:24.860
They may have no predators.
link |
01:27:26.300
They may have all, everybody's a predator.
link |
01:27:28.460
All kinds of situations.
link |
01:27:30.100
And then see what the space possibly is
link |
01:27:32.860
where those languages are and what the difficulties are.
link |
01:27:35.900
That'd be really good actually to do that
link |
01:27:37.660
before the aliens come here.
link |
01:27:39.460
Yes, it's good practice.
link |
01:27:41.820
On the similar connection,
link |
01:27:45.260
you can think of AI systems as aliens.
link |
01:27:48.220
Is there ways to evolve a communication scheme
link |
01:27:51.500
for, there's a field you can call it explainable AI,
link |
01:27:55.020
for AI systems to be able to communicate.
link |
01:27:58.940
So you evolve a bunch of agents,
link |
01:28:01.620
but for some of them to be able to talk to you also.
link |
01:28:05.420
So to evolve a way for agents to be able to communicate
link |
01:28:08.460
about their world to us humans.
link |
01:28:11.020
Do you think that there's possible mechanisms
link |
01:28:13.420
for doing that?
link |
01:28:14.740
We can certainly try.
link |
01:28:16.220
And if it's an evolutionary computation system,
link |
01:28:20.540
for instance, you reward those solutions
link |
01:28:22.580
that are actually functional.
link |
01:28:24.100
That communication makes sense.
link |
01:28:25.580
It allows us to together again, achieve common goals.
link |
01:28:29.420
I think that's possible.
link |
01:28:30.860
But even from that paper that you mentioned,
link |
01:28:35.100
the anecdotes, it's quite likely also
link |
01:28:37.820
that the agents learn to lie and fake
link |
01:28:43.540
and do all kinds of things like that.
link |
01:28:45.300
I mean, we see that in even very low level,
link |
01:28:47.660
like bacterial evolution.
link |
01:28:48.860
There are cheaters.
link |
01:28:51.740
And who's to say that what they say
link |
01:28:53.860
is actually what they think.
link |
01:28:56.620
But that's what I'm saying,
link |
01:28:57.620
that there would have to be some common goal
link |
01:29:00.860
so that we can evaluate whether that communication
link |
01:29:02.700
is at least useful.
link |
01:29:05.980
They may be saying things just to make us feel good
link |
01:29:08.980
or get us to do what we want,
link |
01:29:10.620
so that we would not turn them off or something.
link |
01:29:12.380
But so we would have to understand
link |
01:29:15.100
their internal representations much better
link |
01:29:16.700
to really make sure that that translation is accurate.
link |
01:29:20.100
But it can be useful.
link |
01:29:21.340
And I think it's possible to do that.
link |
01:29:23.940
There are examples where visualizations
link |
01:29:27.620
are automatically created
link |
01:29:29.940
so that we can look into the system
link |
01:29:33.540
and that language is not that far from it.
link |
01:29:35.820
I mean, it is a way of communicating and logging
link |
01:29:38.620
what you're doing in some interpretable way.
link |
01:29:43.140
I think a fascinating topic, yeah, to do that.
link |
01:29:45.380
Yeah, you're making me realize
link |
01:29:47.740
that it's a good scientific question
link |
01:29:51.060
whether lying is an effective mechanism
link |
01:29:54.460
for integrating yourself and succeeding
link |
01:29:56.220
in a social network, in a world that is social.
link |
01:30:00.380
I tend to believe that honesty and love
link |
01:30:04.540
are evolutionary advantages in an environment
link |
01:30:09.940
where there's a network of intelligent agents.
link |
01:30:12.620
But it's also very possible that dishonesty
link |
01:30:14.820
and manipulation and even violence,
link |
01:30:20.540
all those kinds of things might be more beneficial.
link |
01:30:23.100
That's the old open question about good versus evil.
link |
01:30:25.900
But I tend to, I mean, I don't know if it's a hopeful,
link |
01:30:29.220
maybe I'm delusional, but it feels like karma is a thing,
link |
01:30:35.100
which is like long term, the agents,
link |
01:30:39.540
they're just kind to others sometimes for no reason
link |
01:30:42.500
will do better.
link |
01:30:43.780
In a society that's not highly constrained on resources.
link |
01:30:48.380
So like people start getting weird
link |
01:30:49.940
and evil towards each other and bad
link |
01:30:51.860
when the resources are very low relative
link |
01:30:54.660
to the needs of the populace,
link |
01:30:56.940
especially at the basic level, like survival, shelter,
link |
01:31:01.100
food, all those kinds of things.
link |
01:31:02.660
But I tend to believe that once you have
link |
01:31:07.740
those things established, then, well, not to believe,
link |
01:31:11.500
I guess I hope that AI systems will be honest.
link |
01:31:14.900
But it's scary to think about the Turing test,
link |
01:31:19.980
AI systems that will eventually pass the Turing test
link |
01:31:23.940
will be ones that are exceptionally good at lying.
link |
01:31:26.740
That's a terrifying concept.
link |
01:31:29.540
I mean, I don't know.
link |
01:31:31.260
First of all, sort of from somebody who studied language
link |
01:31:34.220
and obviously are not just a world expert in AI,
link |
01:31:37.860
but somebody who dreams about the future of the field.
link |
01:31:41.540
Do you hope, do you think there'll be human level
link |
01:31:45.620
or superhuman level intelligences in the future
link |
01:31:48.700
that we eventually build?
link |
01:31:52.300
Well, I definitely hope that we can get there.
link |
01:31:56.180
One, I think important perspective
link |
01:31:59.260
is that we are building AI to help us.
link |
01:32:02.260
That it is a tool like cars or language
link |
01:32:06.580
or communication, AI will help us be more productive.
link |
01:32:13.700
And that is always a condition.
link |
01:32:17.580
It's not something that we build and let run
link |
01:32:20.340
and it becomes an entity of its own
link |
01:32:22.500
that doesn't care about us.
link |
01:32:25.180
Now, of course, really far in the future,
link |
01:32:27.340
maybe that might be possible,
link |
01:32:28.780
but not in the foreseeable future when we are building it.
link |
01:32:32.220
And therefore we are always in a position of limiting
link |
01:32:35.860
what it can or cannot do.
link |
01:32:38.860
And your point about lying is very interesting.
link |
01:32:45.900
Even in these hyenas societies, for instance,
link |
01:32:49.380
when a number of these hyenas band together
link |
01:32:52.700
and they take a risk and steal the kill,
link |
01:32:56.300
there are always hyenas that hang back
link |
01:32:58.620
and don't participate in that risky behavior,
link |
01:33:02.100
but they walk in later and join the party
link |
01:33:05.220
after the kill.
link |
01:33:06.940
And there are even some that may be ineffective
link |
01:33:10.020
and cause others to come to harm.
link |
01:33:12.900
So, and like I said, even bacteria cheat.
link |
01:33:15.460
And we see it in biology,
link |
01:33:17.340
there's always some element of opportunism.
link |
01:33:20.540
If you have a society, I think that is just because
link |
01:33:22.700
if you have a society,
link |
01:33:24.180
in order for society to be effective,
link |
01:33:26.020
you have to have this cooperation
link |
01:33:27.580
and you have to have trust.
link |
01:33:29.900
And if you have enough of agents
link |
01:33:32.100
who are able to trust each other,
link |
01:33:33.980
you can achieve a lot more.
link |
01:33:36.580
But if you have trust,
link |
01:33:37.500
you also have opportunity for cheaters and liars.
link |
01:33:40.620
And I don't think that's ever gonna go away.
link |
01:33:43.620
There will be hopefully a minority
link |
01:33:45.220
so that they don't get in the way.
link |
01:33:46.660
And we studied in these hyena simulations,
link |
01:33:48.740
like what the proportion needs to be
link |
01:33:50.500
before it is no longer functional.
link |
01:33:52.660
And it turns out that you can tolerate
link |
01:33:55.060
a few cheaters and a few liars
link |
01:33:57.260
and the society can still function.
link |
01:33:59.660
And that's probably going to happen
link |
01:34:02.300
when we build these systems that autonomously learn.
link |
01:34:07.100
The really successful ones are honest
link |
01:34:09.260
because that's the best way of getting things done.
link |
01:34:13.100
But there probably are also intelligent agents
link |
01:34:15.900
that find that they can achieve their goals
link |
01:34:17.940
by bending the rules or cheating.
link |
01:34:20.860
So that could be a huge benefit
link |
01:34:23.780
as opposed to having fixed AI systems.
link |
01:34:25.620
Say we build an AGI system and deploy millions of them
link |
01:34:29.980
that are exactly the same.
link |
01:34:33.500
There might be a huge benefit to introducing
link |
01:34:37.100
sort of from like an evolution computation perspective,
link |
01:34:39.620
a lot of variation.
link |
01:34:41.340
Sort of like diversity in all its forms is beneficial
link |
01:34:46.540
even if some people are assholes
link |
01:34:48.420
or some robots are assholes.
link |
01:34:49.980
So like it's beneficial to have that
link |
01:34:51.980
because you can't always a priori know
link |
01:34:56.780
what's good, what's bad.
link |
01:34:58.500
But that's a fascinating.
link |
01:35:01.380
Absolutely.
link |
01:35:02.300
Diversity is the bread and butter.
link |
01:35:04.380
I mean, if you're running an evolution,
link |
01:35:05.820
you see diversity is the one fundamental thing
link |
01:35:08.100
you have to have.
link |
01:35:09.100
And absolutely, also, it's not always good diversity.
link |
01:35:12.660
It may be something that can be destructive.
link |
01:35:14.980
We had in these hyena simulations,
link |
01:35:16.380
we have hyenas that just are suicidal.
link |
01:35:19.220
They just run and get killed.
link |
01:35:20.580
But they form the basis of those
link |
01:35:22.820
who actually are really fast,
link |
01:35:24.460
but stop before they get killed
link |
01:35:26.060
and eventually turn into this mob.
link |
01:35:28.380
So there might be something useful there
link |
01:35:30.020
if it's recombined with something else.
link |
01:35:32.180
So I think that as long as we can tolerate some of that,
link |
01:35:34.980
it may turn into something better.
link |
01:35:36.860
You may change the rules
link |
01:35:38.500
because it's so much more efficient to do something
link |
01:35:40.660
that was actually against the rules before.
link |
01:35:43.300
And we've seen society change over time
link |
01:35:46.500
quite a bit along those lines.
link |
01:35:47.780
That there were rules in society
link |
01:35:49.940
that we don't believe are fair anymore,
link |
01:35:52.180
even though they were considered proper behavior before.
link |
01:35:57.180
So things are changing.
link |
01:35:58.540
And I think that in that sense,
link |
01:35:59.780
I think it's a good idea to be able to tolerate
link |
01:36:03.100
some of that cheating
link |
01:36:04.820
because eventually we might turn into something better.
link |
01:36:07.220
So yeah, I think this is a message
link |
01:36:08.940
to the trolls and the assholes of the internet
link |
01:36:11.140
that you too have a beautiful purpose
link |
01:36:13.220
in this human ecosystem.
link |
01:36:15.380
So I appreciate you very much.
link |
01:36:16.660
In moderate quantities, yeah.
link |
01:36:18.300
In moderate quantities.
link |
01:36:20.100
So there's a whole field of artificial life.
link |
01:36:22.820
I don't know if you're connected to this field,
link |
01:36:24.580
if you pay attention.
link |
01:36:26.340
Do you think about this kind of thing?
link |
01:36:29.580
Is there an impressive demonstration to you
link |
01:36:32.260
of artificial life?
link |
01:36:33.140
Do you think of the agents you work with
link |
01:36:35.300
in the evolutionary computation perspective as life?
link |
01:36:41.140
And where do you think this is headed?
link |
01:36:43.620
Like, are there interesting systems
link |
01:36:45.100
that we'll be creating more and more
link |
01:36:47.060
that make us redefine, maybe rethink
link |
01:36:50.740
about the nature of life?
link |
01:36:52.420
Different levels of definition and goals there.
link |
01:36:55.780
I mean, at some level, artificial life
link |
01:36:58.620
can be considered multiagent systems
link |
01:37:01.300
that build a society that again, achieves a goal.
link |
01:37:04.100
And it might be robots that go into a building
link |
01:37:06.020
and clean it up or after an earthquake or something.
link |
01:37:09.380
You can think of that as an artificial life problem
link |
01:37:11.980
in some sense.
link |
01:37:13.620
Or you can really think of it, artificial life,
link |
01:37:15.860
as a simulation of life and a tool to understand
link |
01:37:20.860
what life is and how life evolved on earth.
link |
01:37:24.660
And like I said, at the artificial life conference,
link |
01:37:26.820
there are branches of that conference, sessions
link |
01:37:29.780
of people who really worry about molecular designs
link |
01:37:33.460
and the start of life, like I said,
link |
01:37:36.020
primordial soup where eventually
link |
01:37:37.860
you get something self replicating.
link |
01:37:39.740
And they're really trying to build that.
link |
01:37:41.980
So it's a whole range of topics.
link |
01:37:46.500
And I think that artificial life is a great tool
link |
01:37:50.820
to understand life.
link |
01:37:53.020
And there are questions like sustainability,
link |
01:37:56.420
species, we're losing species.
link |
01:37:59.300
How bad is it?
link |
01:38:00.860
Is it natural?
link |
01:38:02.540
Is there a tipping point?
link |
01:38:05.260
And where are we going?
link |
01:38:06.500
I mean, like the hyena evolution,
link |
01:38:08.100
we may have understood that there's a pivotal point
link |
01:38:11.380
in their evolution.
link |
01:38:12.220
They discovered cooperation and coordination.
link |
01:38:16.220
Artificial life simulations can identify that
link |
01:38:18.700
and maybe encourage things like that.
link |
01:38:22.900
And also societies can be seen as a form of life itself.
link |
01:38:28.020
I mean, we're not talking about biological evolution,
link |
01:38:30.380
evolution of societies.
link |
01:38:31.940
Maybe some of the same phenomena emerge in that domain
link |
01:38:36.540
and having artificial life simulations and understanding
link |
01:38:40.100
could help us build better societies.
link |
01:38:42.540
Yeah, and thinking from a meme perspective
link |
01:38:45.780
of from Richard Dawkins,
link |
01:38:50.860
that maybe the ideas are the organisms,
link |
01:38:54.060
not the humans, in these societies.
link |
01:38:58.460
It's almost like reframing what exactly is evolving.
link |
01:39:01.900
Maybe the interesting,
link |
01:39:02.940
the humans aren't the interesting thing
link |
01:39:04.540
as the contents of our minds are the interesting thing.
link |
01:39:07.340
And that's what's multiplying.
link |
01:39:09.220
And that's actually multiplying and evolving
link |
01:39:10.860
in a much faster timescale.
link |
01:39:13.020
And that maybe has more power on the trajectory
link |
01:39:16.220
of life on earth than biological evolution does,
link |
01:39:19.500
this evolution of ideas.
link |
01:39:20.940
Yes, and it's fascinating, like I said before,
link |
01:39:23.820
that we can keep up somehow biologically.
link |
01:39:27.500
We evolved to a point where we can keep up
link |
01:39:30.060
with this meme evolution, literature, internet.
link |
01:39:35.180
We understand DNA and we understand fundamental particles.
link |
01:39:38.980
We didn't start that way a thousand years ago.
link |
01:39:41.260
And we haven't evolved biologically very much,
link |
01:39:43.300
but somehow our minds are able to extend.
link |
01:39:46.980
And therefore AI can be seen also as one such step
link |
01:39:51.220
that we created and it's our tool.
link |
01:39:53.420
And it's part of that meme evolution that we created,
link |
01:39:56.340
even if our biological evolution does not progress as fast.
link |
01:39:59.620
And us humans might only be able to understand so much.
link |
01:40:03.700
We're keeping up so far,
link |
01:40:05.780
or we think we're keeping up so far,
link |
01:40:07.300
but we might need AI systems to understand.
link |
01:40:09.500
Maybe like the physics of the universe is operating,
link |
01:40:13.780
look at string theory.
link |
01:40:14.740
Maybe it's operating in much higher dimensions.
link |
01:40:17.420
Maybe, because of our cognitive limitations,
link |
01:40:21.220
we're not able to truly internalize the way this world works.
link |
01:40:25.740
And so we're running up against the limitation
link |
01:40:28.900
of our own minds.
link |
01:40:30.220
And we have to create these next level organisms
link |
01:40:33.100
like AI systems that would be able to understand much deeper,
link |
01:40:36.300
like really understand what it means to live
link |
01:40:38.460
in a multi dimensional world
link |
01:40:41.220
that's outside of the four dimensions,
link |
01:40:42.580
the three of space and one of time.
link |
01:40:45.340
Abstraction, and generally we can deal with the world,
link |
01:40:48.100
even if you don't understand all the details,
link |
01:40:49.620
we can use computers, even though we don't,
link |
01:40:52.020
most of us don't know all the structure
link |
01:40:54.380
that's underneath or drive a car.
link |
01:40:55.740
I mean, there are many components,
link |
01:40:57.220
especially new cars that you don't quite fully know,
link |
01:40:59.820
but you have the interface, you have an abstraction of it
link |
01:41:02.620
that allows you to operate it and utilize it.
link |
01:41:05.020
And I think that that's perfectly adequate
link |
01:41:08.140
and we can build on it.
link |
01:41:09.180
And AI can play a similar role.
link |
01:41:13.580
I have to ask about beautiful artificial life systems
link |
01:41:18.060
or evolutionary computation systems.
link |
01:41:20.900
Cellular automata to me,
link |
01:41:23.860
I remember it was a game changer for me early on in life
link |
01:41:26.580
when I saw Conway's Game of Life.
link |
01:41:28.780
Conway recently passed away, unfortunately.
link |
01:41:31.380
And it's beautiful
link |
01:41:36.540
how much complexity can emerge from such simple rules.
link |
01:41:40.020
I just don't, somehow that simplicity
link |
01:41:44.420
is such a powerful illustration
link |
01:41:47.340
and also humbling because it feels like I personally,
link |
01:41:50.060
from my perspective,
link |
01:41:50.900
understand almost nothing about this world
link |
01:41:54.900
because like my intuition fails completely
link |
01:41:58.420
how complexity can emerge from such simplicity.
link |
01:42:01.260
Like my intuition fails, I think,
link |
01:42:02.660
is the biggest problem I have.
link |
01:42:05.980
Do you find systems like that beautiful?
link |
01:42:08.500
Is there, do you think about cellular automata?
link |
01:42:11.380
Because cellular automata don't really have,
link |
01:42:15.260
and many other artificial life systems
link |
01:42:17.140
don't necessarily have an objective.
link |
01:42:18.900
Maybe that's a wrong way to say it.
link |
01:42:21.620
It's almost like it's just evolving and creating.
link |
01:42:28.140
And there's not even a good definition
link |
01:42:29.700
of what it means to create something complex
link |
01:42:33.020
and interesting and surprising,
link |
01:42:34.540
all those words that you said.
link |
01:42:37.540
Is there some of those systems that you find beautiful?
link |
01:42:41.060
Yeah, yeah.
link |
01:42:41.900
And similarly, evolution does not have a goal.
link |
01:42:45.340
It is responding to current situation
link |
01:42:49.500
and survival then creates more complexity
link |
01:42:52.700
and therefore we have something that we perceive as progress
link |
01:42:56.060
but that's not what evolution is inherently set to do.
link |
01:43:00.620
And yeah, that's really fascinating
link |
01:43:03.220
how a simple set of rules or simple mappings can,
link |
01:43:10.180
how from such simple mappings, complexity can emerge.
link |
01:43:14.460
So it's a question of emergence and self organization.
link |
01:43:17.620
And the game of life is one of the simplest ones
link |
01:43:21.420
and very visual and therefore it drives home the point
link |
01:43:25.580
that it's possible that nonlinear interactions
link |
01:43:29.580
and this kind of complexity can emerge from them.
link |
01:43:34.660
And biology and evolution is along the same lines.
link |
01:43:37.860
We have simple representations.
link |
01:43:40.020
DNA, if you really think of it, it's not that complex.
link |
01:43:44.140
It's a long sequence of them, there's lots of them
link |
01:43:46.140
but it's a very simple representation.
link |
01:43:48.140
And similarly with evolutionary computation,
link |
01:43:49.820
whatever string or tree representation we have
link |
01:43:52.580
and the operations, the amount of code that's required
link |
01:43:57.540
to manipulate those, it's really, really little.
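[The point that the machinery is tiny can be made concrete: a hedged sketch of string-genome mutation and one-point crossover, a few lines each. The binary alphabet and mutation rate are arbitrary illustrative choices, not from any specific system.]

```python
import random


def mutate(genome, rate=0.05, alphabet="01"):
    """Flip each position to a random symbol with probability `rate`."""
    return "".join(
        random.choice(alphabet) if random.random() < rate else g
        for g in genome
    )


def crossover(a, b):
    """One-point crossover of two equal-length string genomes."""
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:], b[:point] + a[point:]
```

[That is essentially the whole operator set; everything else in an evolutionary algorithm is evaluation and selection.]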
link |
01:44:00.460
And of course, game of life even less.
link |
01:44:02.420
So how complexity emerges from such simple principles,
link |
01:44:06.140
that's absolutely fascinating.
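[For reference, the Game of Life rules alluded to here really do fit in a dozen lines; this minimal sketch tracks only the live cells as coordinates.]

```python
from collections import Counter


def step(live):
    """Advance Conway's Game of Life one generation.

    `live` is a set of (x, y) coordinates of live cells.
    A cell is alive next step if it has exactly 3 live
    neighbours, or 2 and it is already alive.
    """
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}
```

[A "blinker", three cells in a row, oscillates with period two under these rules, one of the simplest behaviors that emerges from so little code.]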
link |
01:44:09.100
The challenge is to be able to control it
link |
01:44:11.420
and guide it and direct it so that it becomes useful.
link |
01:44:15.500
And like game of life is fascinating to look at
link |
01:44:17.900
and evolution, all the forms that come out is fascinating
link |
01:44:21.140
but can we actually make it useful for us?
link |
01:44:24.020
And efficient because if you actually think about
link |
01:44:26.980
each of the cells in the game of life as a living organism,
link |
01:44:30.260
there's a lot of death that has to happen
link |
01:44:32.540
to create anything interesting.
link |
01:44:34.300
And so I guess the question is for us humans
link |
01:44:36.460
that are mortal and then life ends quickly,
link |
01:44:38.860
we wanna kinda hurry up and make sure we take evolution,
link |
01:44:44.940
the trajectory that is a little bit more efficient
link |
01:44:47.380
than the alternatives.
link |
01:44:49.300
And that touches upon something we talked about earlier
link |
01:44:51.220
that evolutionary computation is very impatient.
link |
01:44:54.580
We have a goal, we want it right away
link |
01:44:57.140
whereas biology has a lot of time, deep time,
link |
01:45:01.020
and weak pressure and large populations.
link |
01:45:04.460
One great example of this is the novelty search.
link |
01:45:08.900
So evolutionary computation
link |
01:45:11.020
where you don't actually specify a fitness goal,
link |
01:45:14.820
something that is your actual thing that you want
link |
01:45:17.300
but you just reward solutions that are different
link |
01:45:20.860
from what you've seen before, nothing else.
link |
01:45:23.700
And you know what?
link |
01:45:25.060
You actually discover things
link |
01:45:26.540
that are interesting and useful that way.
link |
01:45:29.220
Ken Stanley and Joel Lehman did this one study
link |
01:45:31.020
where they actually tried to evolve walking behavior
link |
01:45:34.380
on robots.
link |
01:45:35.260
And that's actually, we talked about earlier
link |
01:45:36.540
where your robot actually failed in all kinds of ways
link |
01:45:39.580
and eventually discovered something
link |
01:45:40.940
that was a very efficient walk.
link |
01:45:43.820
And it was because they rewarded things that were different
link |
01:45:48.740
that you were able to discover something.
link |
01:45:50.660
And I think that this is crucial
link |
01:45:52.900
because in order to be really different
link |
01:45:55.020
from what you already have,
link |
01:45:56.540
you have to utilize what is there in a domain
link |
01:45:59.020
to create something really different.
link |
01:46:00.700
So you have encoded the fundamentals of your world
link |
01:46:05.700
and then you make changes to those fundamentals
link |
01:46:08.020
you get further away.
link |
01:46:09.660
So that's probably what's happening
link |
01:46:11.460
in these systems of emergence.
link |
01:46:14.220
That the fundamentals are there.
link |
01:46:17.300
And when you follow those fundamentals
link |
01:46:18.940
you get into points
link |
01:46:20.020
and some of those are actually interesting and useful.
link |
01:46:22.820
Now, even in that robotic walker simulation
link |
01:46:25.140
there was a large set of garbage,
link |
01:46:28.300
but among them, there were some of these gems.
link |
01:46:31.780
And then those are the ones
link |
01:46:32.740
that somehow you have to outside recognize and make useful.
link |
01:46:36.540
But with these kinds of productive systems,
link |
01:46:38.620
if you encode in them the right kind of principles,
link |
01:46:41.540
principles that capture the structure of the domain,
link |
01:46:45.580
then you will get to these solutions and discoveries.
link |
01:46:49.980
It feels like that might also be a good way to live life.
link |
01:46:52.740
So let me ask, do you have advice for young people today
link |
01:46:58.060
about how to live life or how to succeed in their career
link |
01:47:01.460
or forget career, just succeed in life
link |
01:47:04.580
from an evolutionary computation perspective?
link |
01:47:08.700
Yes, yes, definitely.
link |
01:47:11.460
Explore, diversify. As individuals,
link |
01:47:17.780
take classes in music, history, philosophy,
link |
01:47:22.100
math, engineering, see connections between them,
link |
01:47:27.380
travel, learn a language.
link |
01:47:30.020
I mean, all this diversity is fascinating
link |
01:47:32.060
and we have it at our fingertips today.
link |
01:47:35.380
It's possible, you have to make a bit of an effort
link |
01:47:37.740
because it's not easy, but the rewards are wonderful.
link |
01:47:42.780
Yeah, there's something interesting
link |
01:47:43.740
about an objective function of new experiences.
link |
01:47:47.300
So try to figure out, I mean,
link |
01:47:51.100
what is the maximally new experience I could have today?
link |
01:47:56.700
And that sort of that novelty, optimizing for novelty
link |
01:47:59.300
for some period of time might be very interesting way
link |
01:48:01.780
to sort of maximally expand the sets of experiences you had
link |
01:48:06.940
and then ground from that perspective,
link |
01:48:11.620
like what will be the most fulfilling trajectory
link |
01:48:14.460
through life.
link |
01:48:15.300
Of course, the flip side of that is where I come from.
link |
01:48:19.140
Again, maybe Russian, I don't know.
link |
01:48:20.940
But choice has a detrimental effect, I think,
link |
01:49:25.940
at least in my mind, whereas scarcity has an empowering effect.
link |
01:48:31.300
So if I sort of, if I have very little of something
link |
01:48:37.300
and only one of that something, I will appreciate it deeply
link |
01:48:40.980
until I came to Texas recently
link |
01:48:44.540
and I've been pigging out on delicious, incredible meat.
link |
01:48:47.620
I've been fasting a lot, so I need to do that again.
link |
01:48:49.860
But when you fast for a few days,
link |
01:48:52.220
that the first taste of a food is incredible.
link |
01:48:56.580
So the downside of exploration is that somehow,
link |
01:49:05.660
maybe you can correct me,
link |
01:49:06.980
but somehow you don't get to experience deeply
link |
01:49:11.140
any one of the particular moments,
link |
01:49:13.420
but that could be a psychology thing.
link |
01:49:15.620
That could be just a very human, peculiar
link |
01:49:18.660
flaw.
link |
01:49:23.660
Yeah, I didn't mean that you superficially explore.
link |
01:49:26.740
I mean, you can.
link |
01:49:27.580
Explore deeply.
link |
01:49:28.420
Yeah, so you don't have to explore 100 things,
link |
01:49:31.100
but maybe a few topics
link |
01:49:33.100
where you can take a deep enough dive
link |
01:49:36.500
that you gain an understanding.
link |
01:49:39.980
You yourself have to decide at some point
link |
01:49:42.620
that this is deep enough.
link |
01:49:44.380
And I obtained what I can from this topic
link |
01:49:49.220
and now it's time to move on.
link |
01:49:51.340
And that might take years.
link |
01:49:53.980
People sometimes switch careers
link |
01:49:56.220
and they may stay on some career for a decade
link |
01:49:59.100
and switch to another one.
link |
01:50:00.460
You can do it.
link |
01:50:01.780
You're not predetermined to stay where you are,
link |
01:50:04.620
but in order to achieve something,
link |
01:50:09.060
10,000 hours, as they say,
link |
01:50:10.460
you need 10,000 hours to become an expert on something.
link |
01:50:13.580
So you don't have to become an expert,
link |
01:50:15.300
but to even develop an understanding
link |
01:50:17.100
and gain the experience that you can use later.
link |
01:50:19.260
You probably have to spend, like I said, it's not easy.
link |
01:50:21.860
You've got to spend some effort on it.
link |
01:50:24.340
Now, also at some point then,
link |
01:50:26.220
when you have this diversity
link |
01:50:28.060
and you have these experiences, exploration,
link |
01:50:30.260
you may want to,
link |
01:50:32.740
you may find something that you can't stay away from.
link |
01:50:35.820
Like for us, it was computers, it was AI.
link |
01:50:38.660
It was, you know, that I just have to do it.
link |
01:50:41.980
And I, you know, and then it will take decades maybe
link |
01:50:45.220
and you are pursuing it
link |
01:50:46.540
because you figured out that this is really exciting
link |
01:50:49.300
and you can bring in your experiences.
link |
01:50:51.260
And there's nothing wrong with that either,
link |
01:50:52.740
but you asked what's the advice for young people.
link |
01:50:55.860
That's the exploration part.
link |
01:50:57.500
And then beyond that, after that exploration,
link |
01:51:00.140
you actually can focus and build a career.
link |
01:51:03.220
And, you know, even there you can switch multiple times,
link |
01:51:05.820
but I think that diversity exploration is fundamental
link |
01:51:09.140
to having a successful career as is concentration
link |
01:51:13.340
and spending an effort where it matters.
link |
01:51:15.540
And, but you are in better position to make the choice
link |
01:51:18.980
when you have done your homework.
link |
01:51:20.380
Explored.
link |
01:51:21.220
So exploration precedes commitment, but both are beautiful.
link |
01:51:24.900
Yeah.
link |
01:51:26.140
So again, from an evolutionary computation perspective,
link |
01:51:29.460
we'll look at all the agents that had to die
link |
01:51:32.460
in order to come up with different solutions in simulation.
link |
01:51:35.740
What do you think from that individual agent's perspective
link |
01:51:40.260
is the meaning of it all?
link |
01:51:41.820
So far as humans, you're just one agent
link |
01:51:43.820
who's going to be dead, unfortunately, one day too soon.
link |
01:51:48.740
What do you think is the why
link |
01:51:51.860
of why that agent came to be
link |
01:51:55.180
and eventually will be no more?
link |
01:51:58.540
Is there a meaning to it all?
link |
01:52:00.060
Yeah.
link |
01:52:00.900
In evolution, there is meaning.
link |
01:52:02.460
Everything is a potential direction.
link |
01:52:05.620
Everything is a potential stepping stone.
link |
01:52:09.540
Not all of them are going to work out.
link |
01:52:11.380
Some of them are foundations for further improvement.
link |
01:52:16.860
And even those that are perhaps going to die out
link |
01:52:21.100
were potential energies, potential solutions.
link |
01:52:25.580
In biology, we see a lot of species die off naturally.
link |
01:52:28.700
And you know, like the dinosaurs,
link |
01:52:29.860
I mean, they were really good solution for a while,
link |
01:52:31.860
but then it turned out to be
link |
01:52:33.980
not such a good solution in the long term.
link |
01:52:37.780
When there's an environmental change,
link |
01:52:39.420
you have to have diversity.
link |
01:52:40.660
Some other solutions become better.
link |
01:52:42.660
Doesn't mean that that was a failed attempt.
link |
01:52:45.020
It didn't quite work out or last,
link |
01:52:47.540
but there are still dinosaurs among us,
link |
01:52:49.380
at least their relatives.
link |
01:52:51.220
And they may one day again be useful, who knows?
link |
01:52:55.580
So from an individual's perspective,
link |
01:52:57.220
you got to think of a bigger picture
link |
01:52:59.100
that it is a huge engine that is innovative.
link |
01:53:04.420
And these elements are all part of it,
link |
01:53:06.780
potential innovations on their own.
link |
01:53:09.380
And also as raw material perhaps,
link |
01:53:12.340
or stepping stones for other things that could come after.
link |
01:53:16.380
But it still feels from an individual perspective
link |
01:53:18.740
that I matter a lot.
link |
01:53:21.100
But even if I'm just a little cog in a giant machine,
link |
01:53:24.500
is that just a silly human notion
link |
01:53:28.140
in an individualistic society? Should I let go of that?
link |
01:53:32.780
Do you find beauty in being part of the giant machine?
link |
01:53:36.700
Yeah, I think it's meaningful.
link |
01:53:38.980
I think it adds purpose to your life
link |
01:53:41.500
that you are part of something bigger.
link |
01:53:45.340
That said, do you ponder your individual agent's mortality?
link |
01:53:51.780
Do you think about death?
link |
01:53:53.700
Do you fear death?
link |
01:53:56.660
Well, certainly more now than when I was a youngster
link |
01:54:00.620
and did skydiving and paragliding and all these things.
link |
01:54:05.580
You've become wiser.
link |
01:54:09.020
There is a reason for this life arc
link |
01:54:13.900
that younger folks are more fearless in many ways.
link |
01:54:17.100
That's part of the exploration.
link |
01:54:20.660
They are the individuals who think,
link |
01:54:22.100
hmm, I wonder what's over those mountains
link |
01:54:24.780
or what if I go really far in that ocean?
link |
01:54:27.020
What would I find?
link |
01:54:27.940
I mean, older folks don't necessarily think that way,
link |
01:54:32.140
but younger do and it's kind of counterintuitive.
link |
01:54:34.820
So yeah, but logically it's like,
link |
01:54:39.100
you have a limited amount of time,
link |
01:54:40.060
what can you do with it that matters?
link |
01:54:42.420
So you try to, you have done your exploration,
link |
01:54:45.300
you committed to a certain direction
link |
01:54:48.100
and you become an expert perhaps in it.
link |
01:54:50.340
What can I do that matters
link |
01:54:52.460
with the limited resources that I have?
link |
01:54:55.500
That's how I think a lot of people, myself included,
link |
01:54:59.700
start thinking later on in their career.
link |
01:55:02.380
And like you said, leave a bit of a trace
link |
01:55:05.540
and a bit of an impact even though after the agent is gone.
link |
01:55:08.460
Yeah, that's the goal.
link |
01:55:11.180
Well, this was a fascinating conversation.
link |
01:55:13.580
I don't think there's a better way to end it.
link |
01:55:15.860
Thank you so much.
link |
01:55:16.980
So first of all, I'm very inspired
link |
01:55:19.380
of how vibrant the community at UT Austin and Austin is.
link |
01:55:22.900
It's really exciting for me to see it.
link |
01:55:25.500
And this whole field seems like profound philosophically,
link |
01:55:29.900
but also the path forward
link |
01:55:31.220
for the artificial intelligence community.
link |
01:55:33.260
So thank you so much for explaining
link |
01:55:35.300
so many cool things to me today
link |
01:55:36.780
and for wasting all of your valuable time with me.
link |
01:55:39.140
Oh, it was a pleasure.
link |
01:55:40.340
Thanks.
link |
01:55:41.180
I appreciate it.
link |
01:55:42.740
Thanks for listening to this conversation
link |
01:55:44.420
with Risto Miikkulainen.
link |
01:55:45.860
And thank you to the Jordan Harbinger Show,
link |
01:55:48.620
Grammarly, Belcampo, and Indeed.
link |
01:55:51.940
Check them out in the description to support this podcast.
link |
01:55:55.500
And now let me leave you with some words from Carl Sagan.
link |
01:55:59.300
Extinction is the rule.
link |
01:56:01.700
Survival is the exception.
link |
01:56:04.860
Thank you for listening.
link |
01:56:05.980
I hope to see you next time.