
Gary Marcus: Toward a Hybrid of Deep Learning and Symbolic AI | Lex Fridman Podcast #43



link |
00:00:00.000
The following is a conversation with Gary Marcus.
link |
00:00:02.760
He's a professor emeritus at NYU, founder of robust AI
link |
00:00:06.480
and geometric intelligence.
link |
00:00:08.200
The latter is a machine learning company
link |
00:00:10.320
that was acquired by Uber in 2016.
link |
00:00:13.520
He's the author of several books on natural
link |
00:00:16.480
and artificial intelligence,
link |
00:00:18.160
including his new book, Rebooting AI,
link |
00:00:20.840
Building Machines We Can Trust.
link |
00:00:23.360
Gary has been a critical voice highlighting the limits
link |
00:00:26.480
of deep learning and AI in general
link |
00:00:28.800
and discussing the challenges before our AI community
link |
00:00:33.720
that must be solved in order to achieve
link |
00:00:35.760
artificial general intelligence.
link |
00:00:38.320
As I'm having these conversations,
link |
00:00:40.120
I try to find paths toward insight, towards new ideas.
link |
00:00:43.600
I try to have no ego in the process; it gets in the way.
link |
00:00:47.640
I'll often continuously try on several hats, several roles.
link |
00:00:52.280
One, for example, is the role of a three year old
link |
00:00:54.720
who understands very little about anything
link |
00:00:57.120
and asks big what and why questions.
link |
00:01:00.360
The other might be a role of a devil's advocate
link |
00:01:02.920
who presents counter ideas with a goal of arriving
link |
00:01:05.600
at greater understanding through debate.
link |
00:01:08.240
Hopefully both are useful, interesting,
link |
00:01:11.240
and even entertaining at times.
link |
00:01:13.440
I ask for your patience as I learn
link |
00:01:15.400
to have better conversations.
link |
00:01:17.760
This is the Artificial Intelligence Podcast.
link |
00:01:20.800
If you enjoy it, subscribe on YouTube,
link |
00:01:23.120
give it five stars on iTunes, support it on Patreon,
link |
00:01:26.320
or simply connect with me on Twitter
link |
00:01:28.560
at Lex Fridman spelled F R I D M A N.
link |
00:01:32.520
And now here's my conversation with Gary Marcus.
link |
00:01:37.200
Do you think human civilization will one day have
link |
00:01:40.400
to face an AI driven technological singularity
link |
00:01:42.960
that will in a societal way modify our place
link |
00:01:46.520
in the food chain of intelligent living beings
link |
00:01:49.120
on this planet?
link |
00:01:50.120
I think our place in the food chain has already changed.
link |
00:01:54.880
So there are lots of things people used to do by hand
link |
00:01:57.360
that they now do with machines.
link |
00:01:59.200
If you think of a singularity as like one single moment,
link |
00:02:01.840
which is I guess what it suggests,
link |
00:02:03.240
I don't know if it'll be like that,
link |
00:02:04.600
but I think that there's a lot of gradual change
link |
00:02:07.400
and AI is getting better and better.
link |
00:02:09.280
I mean, I'm here to tell you why I think it's not nearly
link |
00:02:11.440
as good as people think, but the overall trend is clear.
link |
00:02:14.400
Maybe Ray Kurzweil thinks it's an exponential
link |
00:02:17.400
and I think it's linear in some cases,
link |
00:02:19.440
it's close to zero right now, but it's all gonna happen.
link |
00:02:22.440
We are gonna get to human level intelligence
link |
00:02:24.840
or whatever you want to call it,
link |
00:02:27.440
artificial general intelligence at some point.
link |
00:02:30.240
And that's certainly gonna change our place
link |
00:02:31.840
in the food chain.
link |
00:02:32.680
Cause a lot of the tedious things that we do now,
link |
00:02:35.240
we're gonna have machines do
link |
00:02:36.280
and a lot of the dangerous things that we do now,
link |
00:02:38.600
we're gonna have machines do.
link |
00:02:39.920
I think our whole lives are gonna change
link |
00:02:41.720
from people finding their meaning through their work,
link |
00:02:45.040
to people finding their meaning
link |
00:02:46.760
through creative expression.
link |
00:02:48.720
So the singularity will be a very gradual,
link |
00:02:53.720
in fact, removing the meaning of the word singularity.
link |
00:02:56.400
It'll be a very gradual transformation in your view.
link |
00:03:00.320
I think that it'll be somewhere in between
link |
00:03:03.240
and I guess it depends what you mean by gradual and sudden.
link |
00:03:05.480
I don't think it's gonna be one day.
link |
00:03:07.160
I think it's important to realize that intelligence
link |
00:03:10.040
is a multi dimensional variable.
link |
00:03:11.640
So people sort of write this stuff as if like IQ was one number
link |
00:03:17.520
and the day that you hit 262
link |
00:03:20.440
or whatever you displace the human beings.
link |
00:03:22.520
And really there's lots of facets to intelligence.
link |
00:03:25.280
So there's verbal intelligence
link |
00:03:26.680
and there's motor intelligence
link |
00:03:28.520
and there's mathematical intelligence and so forth.
link |
00:03:32.000
Machines in their mathematical intelligence
link |
00:03:34.560
far exceed most people already
link |
00:03:36.840
in their ability to play games.
link |
00:03:38.080
They far exceed most people already.
link |
00:03:40.040
In their ability to understand language,
link |
00:03:41.720
they lag behind my five year old, far behind my five year old.
link |
00:03:44.680
So there are some facets of intelligence,
link |
00:03:46.800
that machines have grasped and some that they haven't.
link |
00:03:49.400
And we have a lot of work left to do
link |
00:03:51.760
to get them to say understand natural language
link |
00:03:54.280
or to understand how to flexibly approach some
link |
00:03:58.920
kind of novel, MacGyver-style problem-solving situation.
link |
00:04:03.000
And I don't know that all of these things will come at once.
link |
00:04:05.640
I think there are certain vital prerequisites
link |
00:04:07.960
that we're missing now.
link |
00:04:09.320
So for example, machines don't really have common sense now.
link |
00:04:12.520
So they don't understand that bottles contain water
link |
00:04:15.560
and that people drink water to quench their thirst
link |
00:04:18.160
and that they don't want to dehydrate.
link |
00:04:19.360
They don't know these basic facts about human beings.
link |
00:04:22.080
And I think that that's a rate-limiting step for many things.
link |
00:04:25.240
It's a rate-limiting step for reading, for example,
link |
00:04:27.640
because stories depend on things like,
link |
00:04:29.680
oh my God, that person's running out of water.
link |
00:04:31.480
That's why they did this thing.
link |
00:04:33.000
Or if they only had water, they could put out the fire.
link |
00:04:37.040
So you watch a movie and your knowledge
link |
00:04:39.320
about how things work matters.
link |
00:04:41.200
And so a computer can't understand that movie
link |
00:04:44.280
if it doesn't have that background knowledge.
link |
00:04:45.760
Same thing if you read a book.
link |
00:04:47.880
And so there are lots of places where
link |
00:04:49.640
if we had a good machine interpretable set of common sense,
link |
00:04:53.720
many things would accelerate relatively quickly,
link |
00:04:56.560
but I don't think even that is like a single point.
link |
00:04:59.640
There's many different aspects of knowledge.
link |
00:05:02.520
And we might, for example, find that we make a lot of progress
link |
00:05:05.640
on physical reasoning, getting machines to understand,
link |
00:05:08.440
for example, how keys fit into the locks
link |
00:05:10.920
or that kind of stuff or how this gadget here works
link |
00:05:15.400
and so forth and so on.
link |
00:05:17.520
Machines might do that long before they do
link |
00:05:19.480
really good psychological reasoning,
link |
00:05:21.720
because it's easier to get kind of labeled data
link |
00:05:24.360
or to do direct experimentation on a microphone stand
link |
00:05:28.640
than it is to do direct experimentation on human beings
link |
00:05:31.760
to understand the levers that guide them.
link |
00:05:34.800
That's a really interesting point actually,
link |
00:05:36.840
whether it's easier to gain common sense knowledge
link |
00:05:39.680
or psychological knowledge.
link |
00:05:41.720
I would say the common sense knowledge
link |
00:05:43.280
includes both physical knowledge and psychological knowledge.
link |
00:05:46.840
And the argument I was making.
link |
00:05:48.120
It's physical versus psychological.
link |
00:05:49.640
Yeah, physical versus psychological.
link |
00:05:51.080
The argument I was making is physical knowledge
link |
00:05:53.240
might be more accessible,
link |
00:05:54.240
because you could have a robot, for example,
link |
00:05:56.040
lift a bottle, try putting a bottle cap on it,
link |
00:05:58.400
see that it falls off if it does this
link |
00:06:00.400
and see that it could turn it upside down
link |
00:06:02.000
and so the robot could do some experimentation.
link |
00:06:04.680
We do some of our psychological reasoning
link |
00:06:07.200
by looking at our own minds.
link |
00:06:09.240
So I can sort of guess how you might react
link |
00:06:11.560
to something based on how I think I would react to it.
link |
00:06:13.760
And robots don't have that intuition,
link |
00:06:15.960
and they also can't do experiments on people
link |
00:06:18.440
in the same way or we'll probably shut them down.
link |
00:06:20.480
So if we wanted to have robots figure out
link |
00:06:24.240
how I respond to pain by pinching me in different ways,
link |
00:06:27.760
like that's probably,
link |
00:06:29.040
it's not gonna make it past the human subjects board
link |
00:06:31.000
and companies are gonna get sued or whatever.
link |
00:06:32.840
So there's certain kinds of practical experience
link |
00:06:35.800
that are limited or off limits to robots.
link |
00:06:39.640
That's a really interesting point.
link |
00:06:41.040
What is more difficult to gain a grounding in?
link |
00:06:47.560
Because to play devil's advocate,
link |
00:06:49.960
I would say that human behavior is more easily expressed
link |
00:06:55.000
in data and digital form.
link |
00:06:56.960
And so when you look at Facebook algorithms,
link |
00:06:59.040
they get to observe human behavior.
link |
00:07:01.120
So you get to study, and even manipulate, human behavior
link |
00:07:04.640
in a way that you perhaps cannot study
link |
00:07:07.720
or manipulate the physical world.
link |
00:07:09.560
So it's true, the pain you mentioned is physical pain,
link |
00:07:14.440
but that's again the physical world.
link |
00:07:16.040
Emotional pain might be much easier to experiment with,
link |
00:07:20.120
perhaps unethical, but nevertheless,
link |
00:07:22.760
some would argue it's already going on.
link |
00:07:25.400
I think that you're right, for example,
link |
00:07:27.360
that Facebook does a lot of experimentation
link |
00:07:30.840
in psychological reasoning.
link |
00:07:32.920
In fact, Zuckerberg talked about AI at a talk
link |
00:07:37.080
that he gave at NIPS, I wasn't there,
link |
00:07:39.240
at the conference that's since been renamed NeurIPS,
link |
00:07:41.320
but it was still called NIPS when he gave the talk.
link |
00:07:43.640
And he talked about Facebook basically
link |
00:07:45.280
having a gigantic theory of mind.
link |
00:07:47.120
So I think it is certainly possible.
link |
00:07:49.520
I mean, Facebook does some of that.
link |
00:07:51.240
I think they have a really good idea
link |
00:07:52.640
of how to addict people to things.
link |
00:07:53.920
They understand what draws people back to things.
link |
00:07:56.440
And I think they exploit it
link |
00:07:57.280
in ways that I'm not very comfortable with.
link |
00:07:59.200
But even so, I think that there are only some slices
link |
00:08:03.320
of human experience that they can access
link |
00:08:05.640
through the kind of interface they have.
link |
00:08:07.240
And of course, they're doing all kinds of VR stuff,
link |
00:08:08.960
and maybe that'll change and they'll expand their data.
link |
00:08:11.720
And I'm sure that that's part of their goal.
link |
00:08:14.920
So it is an interesting question.
link |
00:08:16.840
I think love, fear, insecurity, all of the things
link |
00:08:23.080
that I would say some of the deepest things
link |
00:08:26.640
about human nature and the human mind
link |
00:08:28.640
could be explored through digital form.
link |
00:08:30.480
You're actually the first person
link |
00:08:32.240
just now that brought that up.
link |
00:08:33.680
I wonder what is more difficult
link |
00:08:35.840
because I think,
link |
00:08:40.240
and we'll talk a lot about deep learning,
link |
00:08:41.840
but the people who are thinking beyond deep learning
link |
00:08:44.840
are thinking about the physical world.
link |
00:08:46.400
They're starting to think about robotics,
link |
00:08:48.040
in-home robotics.
link |
00:08:49.160
How do we make robots manipulate objects
link |
00:08:52.320
which requires an understanding of the physical world
link |
00:08:55.000
and then requires common sense reasoning.
link |
00:08:57.280
And that has felt to be like the next step
link |
00:08:59.440
for common sense reasoning.
link |
00:09:00.440
But you've now brought up the idea
link |
00:09:02.120
that there's also the emotional part.
link |
00:09:03.640
And it's interesting whether that's hard or easy.
link |
00:09:06.840
I think some parts of it are and some aren't.
link |
00:09:08.520
So my company that I recently founded
link |
00:09:10.960
with Rod Brooks, who was at MIT for many years
link |
00:09:13.960
and so forth, we're interested in both.
link |
00:09:17.240
We're interested in physical reasoning
link |
00:09:18.600
and psychological reasoning among many other things.
link |
00:09:21.480
And there are pieces of each of these that are accessible.
link |
00:09:26.120
So if you want a robot to figure out
link |
00:09:28.000
whether it can fit under a table,
link |
00:09:29.720
that's a relatively accessible piece of physical reasoning.
link |
00:09:33.640
If you know the height of the table
link |
00:09:34.760
and you know the height of the robot, it's not that hard.
link |
00:09:37.000
If you wanted to do physical reasoning about Jenga,
link |
00:09:39.920
it gets a little bit more complicated
link |
00:09:41.480
and you have to have higher resolution data
link |
00:09:43.840
in order to do it.
link |
00:09:45.240
With psychological reasoning,
link |
00:09:46.880
it's not that hard to know, for example,
link |
00:09:49.320
that people have goals and they like to act on those goals,
link |
00:09:51.680
but it's really hard to know exactly what those goals are.
link |
00:09:54.880
Like, the idea of frustration.
link |
00:09:56.800
I mean, you could argue it's extremely difficult
link |
00:09:58.800
to understand the sources of human frustration
link |
00:10:01.480
as they're playing Jenga with you or not.
link |
00:10:05.760
You could argue that it's very accessible.
link |
00:10:07.960
There's some things that are gonna be obvious and some not.
link |
00:10:10.440
So I don't think anybody really can do this well yet,
link |
00:10:14.240
but I think it's not inconceivable
link |
00:10:16.640
to imagine machines in the not so distant future
link |
00:10:20.120
being able to understand that if people lose in a game
link |
00:10:24.200
that they don't like that.
link |
00:10:26.240
That's not such a hard thing to program
link |
00:10:27.960
and it's pretty consistent across people.
link |
00:10:30.000
Most people don't enjoy losing
link |
00:10:31.560
and so that makes it relatively easy to code.
link |
00:10:34.640
On the other hand, if you wanted to capture everything
link |
00:10:36.840
about frustration, well, people get frustrated
link |
00:10:39.160
for a lot of different reasons.
link |
00:10:40.320
They might get sexually frustrated,
link |
00:10:42.360
they might get frustrated,
link |
00:10:43.200
because they can't get their promotion at work,
link |
00:10:45.160
all kinds of different things.
link |
00:10:46.880
And the more you expand the scope,
link |
00:10:48.600
the harder it is for anything like the existing techniques
link |
00:10:51.520
to really do that.
link |
00:10:53.000
So I'm talking to Gary Kasparov next week
link |
00:10:55.640
and he seemed pretty frustrated with this game
link |
00:10:57.800
against Deep Blue.
link |
00:10:58.640
So yeah, well, I'm frustrated with my game
link |
00:11:00.280
against him last year because I played him.
link |
00:11:02.640
I had two excuses, I'll give you my excuses up front
link |
00:11:04.880
but they won't mitigate the outcome.
link |
00:11:07.040
I was jet lagged and I hadn't played in 25 or 30 years,
link |
00:11:11.080
but the outcome is he completely destroyed me
link |
00:11:13.000
and it wasn't even close.
link |
00:11:14.400
Have you ever been beaten in any board game by a machine?
link |
00:11:19.720
I have, I actually played the predecessor to deep blue.
link |
00:11:24.720
Deep Thought, I believe it was called.
link |
00:11:27.920
And that too crushed me.
link |
00:11:31.960
And after that, you realize it's over for us.
link |
00:11:35.320
Well, there's no point in my playing deep blue.
link |
00:11:36.800
I mean, it's a waste of Deep Blue's computation.
link |
00:11:40.240
I mean, I played Kasparov
link |
00:11:41.520
because we both gave lectures at the same event
link |
00:11:44.800
and he was playing 30 people.
link |
00:11:46.040
I forgot to mention that.
link |
00:11:46.880
Not only did he crush me,
link |
00:11:47.920
but he crushed 29 other people at the same time.
link |
00:11:50.640
I mean, but the actual philosophical and emotional
link |
00:11:54.880
experience of being beaten by a machine, I imagine,
link |
00:11:57.880
is, I mean, to you who thinks about these things,
link |
00:12:01.360
maybe a profound experience, or no, it was a simple...
link |
00:12:05.760
No, I mean, I think.
link |
00:12:06.600
Mathematical experience.
link |
00:12:07.720
Yeah, I think a game like chess particularly
link |
00:12:10.280
where it's, you know, you have perfect information,
link |
00:12:12.720
it's, you know, two-player, closed-ended,
link |
00:12:14.760
and there's more computation for the computer.
link |
00:12:16.920
It's no surprise the machine wins.
link |
00:12:18.840
I mean, I'm not sad when a computer,
link |
00:12:22.000
I'm not sad when a computer calculates
link |
00:12:23.920
a cube root faster than me.
link |
00:12:25.200
Like, I know I can't win that game.
link |
00:12:27.840
I'm not going to try.
link |
00:12:28.880
Well, with a system like AlphaGo or AlphaZero,
link |
00:12:32.080
do you see a little bit more magic in a system like that,
link |
00:12:35.080
even though it's simply playing a board game,
link |
00:12:37.240
but because there's a strong learning component?
link |
00:12:39.920
You know, it's funny you should mention that
link |
00:12:41.320
in the context of this conversation
link |
00:12:42.600
because Kasparov and I are working on an article
link |
00:12:45.320
that's going to be called AI is not magic.
link |
00:12:48.240
And, you know, neither one of us thinks that it's magic.
link |
00:12:50.480
And part of the point of this article
link |
00:12:51.960
is that AI is actually a grab bag of different techniques
link |
00:12:55.120
and some of them have,
link |
00:12:56.040
or they each have their own unique strengths and weaknesses.
link |
00:13:00.040
So, you know, you read media accounts and it's like,
link |
00:13:03.080
ooh, AI, it must be magical or can solve any problem.
link |
00:13:06.560
Well, no, some problems are really accessible
link |
00:13:09.480
like chess and Go and other problems like reading
link |
00:13:11.960
are completely outside the current technology.
link |
00:13:14.920
And it's not like you can take the technology
link |
00:13:17.080
that drives AlphaGo and apply it to reading and get anywhere.
link |
00:13:21.320
You know, DeepMind has tried that a bit.
link |
00:13:23.160
They have all kinds of resources.
link |
00:13:24.480
You know, they built AlphaGo and they have,
link |
00:13:26.120
you know, they, I wrote a piece recently
link |
00:13:28.400
that they lost and you can argue about the word lost,
link |
00:13:30.480
but they spent $530 million more than they made last year.
link |
00:13:34.840
So, you know, they're making huge investments.
link |
00:13:36.600
They have a large budget
link |
00:13:37.840
and they have applied the same kinds of techniques
link |
00:13:40.880
to reading or to language.
link |
00:13:43.120
And it's just much less productive there
link |
00:13:45.480
because it's a fundamentally different kind of problem.
link |
00:13:47.840
Chess and Go and so forth are closed in problems.
link |
00:13:50.600
The rules haven't changed in 2,500 years.
link |
00:13:52.960
There's only so many moves you can make.
link |
00:13:54.680
You can talk about the exponential
link |
00:13:56.400
as you look at the combinations of moves.
link |
00:13:58.160
But fundamentally, you know, the Go board has 361 squares.
link |
00:14:01.200
That's it.
link |
00:14:02.040
That's the only, you know, those intersections
link |
00:14:04.040
are the only places that you can place your stone.
link |
00:14:07.240
Whereas when you're reading,
link |
00:14:09.080
the next sentence could be anything.
link |
00:14:11.400
You know, it's completely up to the writer
link |
00:14:13.240
what they're gonna do next.
link |
00:14:14.400
That's fascinating that you think this way.
link |
00:14:16.200
You're clearly a brilliant mind
link |
00:14:17.920
who points out the emperor has no clothes,
link |
00:14:19.680
but so I'll play the role of a person who says...
link |
00:14:22.280
You're gonna put clothes on the emperor?
link |
00:14:23.280
Good luck with it.
link |
00:14:24.120
Who romanticizes the notion of the emperor, period,
link |
00:14:27.600
Suggesting that clothes don't even matter.
link |
00:14:30.120
Okay, so that's really interesting
link |
00:14:33.560
that you're talking about language.
link |
00:14:36.240
So there's the physical world
link |
00:14:37.720
of being able to move about the world,
link |
00:14:39.640
making an omelet and coffee and so on.
link |
00:14:41.920
There's language where you first understand
link |
00:14:46.000
what's being written
link |
00:14:47.240
and then maybe even more complicated
link |
00:14:48.800
than that having a natural dialogue.
link |
00:14:51.080
And then there's the game of Go and chess.
link |
00:14:53.560
I would argue that language is much closer to Go
link |
00:14:57.480
than it is to the physical world.
link |
00:14:59.680
Like it is still very constrained.
link |
00:15:01.440
When you say the possibility
link |
00:15:03.560
of the number of sentences that could come,
link |
00:15:05.560
it is huge, but it nevertheless is much more constrained.
link |
00:15:09.240
It feels maybe I'm wrong than the possibilities
link |
00:15:12.680
that the physical world brings us.
link |
00:15:14.480
There's something to what you say
link |
00:15:15.840
in some ways in which I disagree.
link |
00:15:17.640
So one interesting thing about language
link |
00:15:20.560
is that it abstracts away.
link |
00:15:23.320
This bottle, I don't know if it's gonna be
link |
00:15:24.960
in the field of view, is on this table.
link |
00:15:27.200
And I use the word on here
link |
00:15:28.880
and I can use the word on here, maybe not here,
link |
00:15:32.960
but that one word encompasses in analog space
link |
00:15:36.960
a sort of infinite number of possibilities.
link |
00:15:39.360
So there is a way in which language filters down
link |
00:15:43.080
the variation of the world and there's other ways.
link |
00:15:46.680
So we have a grammar and more or less,
link |
00:15:49.960
you have to follow the rules of that grammar.
link |
00:15:51.760
You can break them a little bit,
link |
00:15:52.760
but by and large, we follow the rules of grammar
link |
00:15:55.480
and so that's a constraint on language.
link |
00:15:57.080
So there are ways in which language is a constrained system.
link |
00:15:59.480
On the other hand, there are many arguments,
link |
00:16:02.320
let's say, that there's an infinite number of possible sentences
link |
00:16:04.960
and you can establish that by just stacking them up.
link |
00:16:07.720
So I think there's water on the table.
link |
00:16:09.560
You think that I think there's water on the table.
link |
00:16:11.800
Your mother thinks that you think
link |
00:16:12.960
that I think the water is on the table.
link |
00:16:14.640
Your brother thinks that maybe your mom is wrong
link |
00:16:17.020
to think that you think that I think.
link |
00:16:18.720
So we can make it in sentences of infinite length
link |
00:16:22.040
or we can stack up adjectives.
link |
00:16:23.640
This is a very silly example of a very, very silly example
link |
00:16:26.480
of a very, very, very, very, very, very, very silly example
link |
00:16:28.880
and so forth.
link |
00:16:29.720
So there are good arguments
link |
00:16:31.040
that there's an infinite range of sentences.
link |
00:16:32.520
In any case, it's vast by any reasonable measure.
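(As a minimal illustration of the stacking argument above: a single recursive rule generates a sentence of any depth, so the set of possible sentences has no finite bound. The Python sketch below is hypothetical, with illustrative speakers and an illustrative base clause.)

```python
# A minimal sketch of recursive embedding: one rule, arbitrarily deep sentences.
# The speakers and the base clause are illustrative only.
SPEAKERS = ["your mother", "your brother", "Gary", "Lex"]

def embedded(depth):
    """Build 'X thinks that Y thinks that ... the water is on the table.'"""
    if depth == 0:
        return "the water is on the table"
    speaker = SPEAKERS[depth % len(SPEAKERS)]
    return f"{speaker} thinks that {embedded(depth - 1)}"

for d in range(4):
    print(embedded(d))
# Each extra level of embedding yields a new, longer, still grammatical sentence.
```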
link |
00:16:35.840
And for example, almost anything in the physical world
link |
00:16:38.000
we can talk about in the language world.
link |
00:16:40.500
And interestingly, many of the sentences that we understand
link |
00:16:43.840
we can only understand if we have a very rich model
link |
00:16:46.880
of the physical world.
link |
00:16:47.880
So I don't ultimately want to adjudicate the debate
link |
00:16:50.660
that I think you just set up, but I find it interesting.
link |
00:16:54.480
Maybe the physical world is even more complicated
link |
00:16:57.200
than language.
link |
00:16:58.040
I think that's fair, but you think
link |
00:17:00.200
that language is really, really complicated.
link |
00:17:03.160
It's really, really hard.
link |
00:17:04.120
Well, it's really, really hard for machines,
link |
00:17:06.120
for linguists, people trying to understand it.
link |
00:17:08.520
It's not that hard for children
link |
00:17:09.680
and that's part of what's driven my whole career.
link |
00:17:12.200
I was a student of Steven Pinker's
link |
00:17:14.400
and we were trying to figure out
link |
00:17:15.360
why kids could learn language when machines couldn't.
link |
00:17:18.740
I think we're gonna get into language.
link |
00:17:20.600
We're gonna get into communication intelligence
link |
00:17:22.480
and neural networks and so on.
link |
00:17:24.240
But let me return to the high level of the futuristic
link |
00:17:31.080
for a brief moment.
link |
00:17:32.520
So you've written in your book, in your new book,
link |
00:17:37.320
it would be arrogant to suppose that we could forecast
link |
00:17:39.960
where AI will be, or the impact it will have,
link |
00:17:42.480
in a thousand years or even 500 years.
link |
00:17:45.160
So let me ask you to be arrogant.
link |
00:17:48.360
What do AI systems with or without physical bodies
link |
00:17:51.520
look like 100 years from now?
link |
00:17:53.520
If you would, just, I know you can't predict,
link |
00:17:56.800
but if you were to philosophize and imagine, do.
link |
00:18:00.280
Can I first justify the arrogance
link |
00:18:02.040
before you try to push me beyond it?
link |
00:18:04.080
Sure.
link |
00:18:05.920
I mean, there are examples, like, you know,
link |
00:18:07.720
people figured out how electricity worked.
link |
00:18:09.720
They had no idea that that was gonna lead to cell phones,
link |
00:18:12.280
right?
link |
00:18:13.120
I mean, things can move awfully fast
link |
00:18:15.600
once new technologies are perfected.
link |
00:18:17.920
Even when they made transistors,
link |
00:18:19.440
they weren't really thinking that cell phones
link |
00:18:21.080
would lead to social networking.
link |
00:18:23.320
There are nevertheless predictions of the future,
link |
00:18:25.720
which are statistically unlikely to come to be,
link |
00:18:28.800
but are nevertheless the best we can do.
link |
00:18:29.840
You're asking me to be wrong.
link |
00:18:31.360
I'm asking you to be.
link |
00:18:32.200
Which way would I like to be wrong?
link |
00:18:34.000
Pick the thing that's least likely to be wrong,
link |
00:18:37.480
even though it's still very likely to be wrong.
link |
00:18:39.720
I mean, here's some things
link |
00:18:40.560
that we can safely predict, I suppose.
link |
00:18:42.720
We can predict that AI will be faster than it is now.
link |
00:18:47.240
It will be cheaper than it is now.
link |
00:18:49.480
It will be better in the sense of being more general
link |
00:18:52.840
and applicable in more places.
link |
00:18:56.960
It will be pervasive.
link |
00:18:59.280
You know, I mean, these are easy predictions.
link |
00:19:01.560
I'm sort of modeling them in my head
link |
00:19:03.280
on Jeff Bezos's famous predictions.
link |
00:19:05.800
He says, I can't predict the future.
link |
00:19:07.280
Not in every way.
link |
00:19:08.120
I'm paraphrasing, but I can predict
link |
00:19:10.560
that people will never wanna pay more money for their stuff.
link |
00:19:13.200
They're never gonna want it to take longer to get there.
link |
00:19:15.200
And you know, so like, you can't predict everything,
link |
00:19:17.760
but you can predict some things.
link |
00:19:18.880
Sure, of course it's gonna be faster and better.
link |
00:19:20.920
And what we can't really predict
link |
00:19:24.520
is the full scope of where AI will be in a certain period.
link |
00:19:28.720
I mean, I think it's safe to say
link |
00:19:31.280
that although I'm very skeptical about current AI,
link |
00:19:35.720
that it's possible to do much better.
link |
00:19:37.720
You know, there's no in-principle argument
link |
00:19:39.760
that says AI is an insolvable problem,
link |
00:19:42.160
that there's magic inside our brains
link |
00:19:43.640
that will never be captured.
link |
00:19:45.000
I mean, I've heard people make those kind of arguments.
link |
00:19:46.840
I don't think they're very good.
link |
00:19:49.040
So AI is gonna come, and probably 500 years is plenty of time
link |
00:19:54.720
to get there.
link |
00:19:55.560
And then once it's here, it really will change everything.
link |
00:19:59.280
So when you say AI is gonna come,
link |
00:20:00.720
are you talking about human level intelligence?
link |
00:20:03.680
So maybe I...
link |
00:20:05.000
I like the term general intelligence.
link |
00:20:06.680
So I don't think that the ultimate AI,
link |
00:20:09.560
if there is such a thing, is gonna look just like humans.
link |
00:20:12.000
I think it's gonna do some things
link |
00:20:13.640
that humans do better than current machines,
link |
00:20:16.600
like reason flexibly.
link |
00:20:18.600
And understand language and so forth.
link |
00:20:21.200
But it doesn't mean they have to be identical to humans.
link |
00:20:23.480
So for example, humans have terrible memory
link |
00:20:26.000
and they suffer from what some people call
link |
00:20:28.960
motivated reasoning.
link |
00:20:29.960
So they like arguments that seem to support them
link |
00:20:32.480
and they dismiss arguments that they don't like.
link |
00:20:35.480
There's no reason that a machine should ever do that.
link |
00:20:38.720
So you see that those limitations of memory
link |
00:20:42.320
as a bug, not a feature?
link |
00:20:43.960
Absolutely.
link |
00:20:44.880
I'll say two things about that.
link |
00:20:46.680
One is I was on a panel with Danny Kahneman,
link |
00:20:48.480
the Nobel Prize winner last night,
link |
00:20:50.320
and we were talking about this stuff.
link |
00:20:51.800
And I think what we converged on is that
link |
00:20:54.040
humans are a low bar to exceed.
link |
00:20:56.160
They may be outside of our skill right now,
link |
00:20:58.960
as AI programmers,
link |
00:21:01.160
but eventually AI will exceed them.
link |
00:21:04.320
So we're not talking about human level AI.
link |
00:21:06.120
We're talking about general intelligence
link |
00:21:07.920
that can do all kinds of different things
link |
00:21:09.480
and do it without some of the flaws that human beings have.
link |
00:21:12.240
The other thing I'll say is I wrote a whole book actually
link |
00:21:14.000
about the flaws of humans.
link |
00:21:15.200
It's actually a nice counterpoint to the current book.
link |
00:21:19.120
So I wrote a book called Kluge,
link |
00:21:21.360
which was about the limits of the human mind.
link |
00:21:24.000
The current book is kind of about those few things
link |
00:21:26.320
that humans do a lot better than machines.
link |
00:21:28.720
Do you think it's possible that the flaws of the human mind,
link |
00:21:31.720
the limits of memory, our mortality,
link |
00:21:34.960
our biases, are a strength, not a weakness?
link |
00:21:40.240
That these are the things
link |
00:21:43.480
from which motivation springs and meaning springs.
link |
00:21:47.760
I've heard a lot of arguments like this.
link |
00:21:49.480
I've never found them that convincing.
link |
00:21:50.880
I think that there's a lot of making lemonade out of lemons.
link |
00:21:55.120
So we, for example, do a lot of free association
link |
00:21:58.280
where one idea just leads to the next
link |
00:22:00.800
and they're not really that well connected.
link |
00:22:02.600
And we enjoy that and we make poetry out of it
link |
00:22:04.520
and we make kind of movies with free associations
link |
00:22:07.120
and it's fun and whatever.
link |
00:22:08.160
I don't think that's really a virtue of the system.
link |
00:22:12.320
I think that the limitations in human reasoning
link |
00:22:15.360
actually get us in a lot of trouble.
link |
00:22:16.600
Like for example, politically, we can't see eye to eye
link |
00:22:19.320
because we have the motivated reasoning I was talking about
link |
00:22:22.000
and something related called confirmation bias.
link |
00:22:25.120
So we have all of these problems
link |
00:22:26.480
that actually make for a rougher society
link |
00:22:28.640
because we can't get along
link |
00:22:29.960
because we can't interpret the data in shared ways.
link |
00:22:34.360
And then we do some nice stuff with that.
link |
00:22:36.480
So my free associations are different from yours
link |
00:22:38.920
and you're kind of amused by them and that's great.
link |
00:22:41.640
And hence poetry.
link |
00:22:42.680
So there are lots of ways in which we take a lousy situation
link |
00:22:46.200
and make it good.
link |
00:22:47.600
Another example would be our memories are terrible.
link |
00:22:50.640
So we play games like concentration
link |
00:22:52.360
where you flip over the two cards, try to find a pair.
link |
00:22:55.000
Can you imagine a computer playing that?
link |
00:22:56.520
A computer is like, this is the dullest game in the world.
link |
00:22:58.320
I know where all the cards are.
link |
00:22:59.320
I see it once.
link |
00:23:00.160
I know where it is.
link |
00:23:01.000
What are you even talking about?
link |
00:23:02.600
So we make a fun game out of having this terrible memory.
link |
00:23:07.080
So we are imperfect in discovering and optimizing
link |
00:23:12.240
some kind of utility function.
link |
00:23:13.560
But you think in general, there is a utility function.
link |
00:23:16.280
There's an objective function that's better than others.
link |
00:23:18.840
I didn't say that.
link |
00:23:21.040
The presumption, when you say...
link |
00:23:24.720
I think you could design a better memory system.
link |
00:23:27.240
You could argue about utility functions
link |
00:23:29.880
and how you wanna think about that.
link |
00:23:32.080
But objectively, it would be really nice
link |
00:23:34.160
to do some of the following things.
link |
00:23:36.480
To get rid of memories that are no longer useful.
link |
00:23:40.840
Like objectively, that would just be good.
link |
00:23:42.680
And we're not that good at it.
link |
00:23:43.600
So when you park in the same lot every day,
link |
00:23:46.520
you confuse where you parked today
link |
00:23:47.920
with where you parked yesterday
link |
00:23:48.840
with where you parked the day before and so forth.
link |
00:23:50.720
So you blur together a series of memories.
link |
00:23:52.600
There's just no way that that's optimal.
link |
00:23:55.360
I mean, I've heard all kinds of wacky arguments
link |
00:23:57.040
of people trying to defend that.
link |
00:23:58.120
But in the end of the day,
link |
00:23:58.960
I don't think any of them hold water.
link |
00:24:00.400
Or trauma, memories of traumatic events,
link |
00:24:02.800
it would possibly be a very nice feature
link |
00:24:05.640
to get rid of those.
link |
00:24:06.800
It'd be great if you could just be like,
link |
00:24:08.320
I'm gonna wipe this sector.
link |
00:24:10.600
I'm done with that.
link |
00:24:12.040
I didn't have fun last night.
link |
00:24:13.320
I don't wanna think about it anymore.
link |
00:24:14.760
Woop, bye bye.
link |
00:24:15.880
I'm gone, but we can't.
link |
00:24:17.800
Do you think it's possible to build a system?
link |
00:24:20.400
So you said human level intelligence is a weird concept,
link |
00:24:23.400
but...
link |
00:24:24.240
Well, I'm saying I prefer general intelligence.
link |
00:24:25.440
General intelligence.
link |
00:24:26.280
I mean, human level intelligence is a real thing.
link |
00:24:28.120
And you could try to make a machine
link |
00:24:29.880
that matches people or something like that.
link |
00:24:32.000
I'm saying that per se shouldn't be the objective,
link |
00:24:34.280
but rather that we should learn from humans
link |
00:24:37.280
the things they do well and incorporate that into our AI
link |
00:24:39.720
just as we incorporate the things that machines do well
link |
00:24:42.160
that people do terribly.
link |
00:24:43.320
So I mean, it's great that AI systems
link |
00:24:45.840
can do all this brute force computation that people can't.
link |
00:24:48.520
And one of the reasons I work on this stuff
link |
00:24:50.880
is because I would like to see machines solve problems
link |
00:24:53.360
that people can't, that combine the strength,
link |
00:24:56.080
or that in order to be solved would combine
link |
00:24:59.520
the strengths of machines to do all this computation
link |
00:25:02.280
with the ability, let's say, of people to read.
link |
00:25:04.280
So I'd like machines that can read
link |
00:25:06.240
the entire medical literature in a day.
link |
00:25:08.720
7,000 new papers, or whatever the number is, come out every day.
link |
00:25:11.800
There's no way for any doctor or whatever to read them all.
link |
00:25:15.840
Machine that could read would be a brilliant thing.
link |
00:25:18.040
And that would be strengths of brute force computation
link |
00:25:21.160
combined with kind of subtlety and understanding medicine
link |
00:25:24.360
that a good doctor or scientist has.
link |
00:25:26.960
So if we can linger a little bit
link |
00:25:28.120
on the idea of general intelligence.
link |
00:25:29.720
So Yann LeCun believes that human intelligence
link |
00:25:32.880
is not general at all, it's very narrow.
link |
00:25:35.600
How do you think about that? I don't think that makes sense.
link |
00:25:38.160
We have lots of narrow intelligences
link |
00:25:40.160
for specific problems.
link |
00:25:42.160
But the fact is like anybody can walk into,
link |
00:25:46.000
let's say a Hollywood movie and reason about the content
link |
00:25:49.160
of almost anything that goes on there.
link |
00:25:51.720
So you can reason about what happens in a bank robbery
link |
00:25:55.200
or what happens when someone is infertile
link |
00:25:58.640
and wants to go to IVF to try to have a child.
link |
00:26:02.800
Or you can, the list is essentially endless.
link |
00:26:05.960
And not everybody understands every scene in a movie,
link |
00:26:09.600
but there's a huge range of things
link |
00:26:11.760
that pretty much any ordinary adult can understand.
link |
00:26:15.080
His argument is that actually the set of things
link |
00:26:19.400
seems large to us humans because we're very limited
link |
00:26:22.880
in considering the kind of possibilities
link |
00:26:25.520
of experience as they're possible.
link |
00:26:27.360
But in fact, the amount of experience that are possible
link |
00:26:30.200
is infinitely larger.
link |
00:26:32.520
Well, I mean, if you wanna make an argument
link |
00:26:35.120
that humans are constrained in what they can understand,
link |
00:26:38.800
I have no issue with that, I think that's right.
link |
00:26:41.640
But it's still not the same thing at all
link |
00:26:44.440
as saying, here's a system that can play go.
link |
00:26:47.480
It's been trained on five million games.
link |
00:26:49.760
And then I say, can it play on a rectangular board
link |
00:26:52.600
rather than a square board?
link |
00:26:53.680
And you say, well, if I retrain it from scratch
link |
00:26:56.560
on another five million games, it can.
link |
00:26:58.320
That's really, really narrow and that's where we are.
link |
00:27:01.120
We don't have even a system that could play go
link |
00:27:05.120
and then without further retraining
link |
00:27:07.080
play on a rectangular board,
link |
00:27:08.680
which any good human could do with very little problem.
link |
00:27:12.560
So that's what I mean by narrow.
link |
00:27:14.840
And so it's just wordplay to say.
link |
00:27:16.840
Then it's semantics, then it's just words.
link |
00:27:19.280
Then yeah, you mean general in a sense
link |
00:27:21.120
that you can do all kinds of go board shapes flexibly.
link |
00:27:25.760
Well, I mean, that would be like a first step
link |
00:27:28.080
in the right direction,
link |
00:27:28.920
but obviously that's not what I really mean.
link |
00:27:30.520
You're kidding.
link |
00:27:32.400
What I mean by a general is that you could transfer
link |
00:27:36.160
the knowledge you learn in one domain to another.
link |
00:27:38.960
So if you learn about bank robberies in movies
link |
00:27:43.320
and there's chase scenes,
link |
00:27:44.800
then you can understand that amazing scene in Breaking Bad
link |
00:27:47.720
when Walter White has a car chase scene
link |
00:27:50.560
with only one person, he's the only one in it.
link |
00:27:52.640
And you can reflect on how that car chase scene
link |
00:27:55.520
is like all the other car chase scenes you've ever seen
link |
00:27:58.240
and totally different and why that's cool.
link |
00:28:01.160
And the fact that the number of domains
link |
00:28:03.120
you can do that with is finite,
link |
00:28:04.560
doesn't make it less general.
link |
00:28:05.760
So the idea of general is that you can just do
link |
00:28:07.320
a lot of transfer across a lot of domains.
link |
00:28:09.400
Yeah, I mean, I'm not saying humans are infinitely general
link |
00:28:11.760
or that humans are perfect.
link |
00:28:12.960
I just said a minute ago, it's a low bar,
link |
00:28:15.360
but it's just, it's a low bar.
link |
00:28:17.440
But right now, like the bar is here and we're there
link |
00:28:20.480
and eventually we'll get way past it.
link |
00:28:22.640
So speaking of low bars,
link |
00:28:25.600
you've highlighted in your new book as well,
link |
00:28:27.440
but a couple of years ago wrote a paper
link |
00:28:29.360
titled Deep Learning: A Critical Appraisal
link |
00:28:31.280
that lists 10 challenges faced by
link |
00:28:34.040
current deep learning systems.
link |
00:28:36.000
So let me summarize them as data efficiency,
link |
00:28:40.160
transfer learning, hierarchical knowledge,
link |
00:28:42.920
open ended inference, explainability,
link |
00:28:46.320
integrating prior knowledge, causal reasoning,
link |
00:28:49.640
modeling of an unstable world, robustness, adversarial examples
link |
00:28:53.200
and so on.
link |
00:28:54.120
And then my favorite probably is reliability
link |
00:28:56.840
and engineering of real world systems.
link |
00:28:59.120
So, whatever, people can read the paper,
link |
00:29:01.600
they should definitely read the paper,
link |
00:29:02.920
should definitely read your book.
link |
00:29:04.320
But which of these challenges, if solved, in your view
link |
00:29:08.120
has the biggest impact on the AI community?
link |
00:29:11.040
It's a very good question.
link |
00:29:13.920
And I'm gonna be evasive because I think that
link |
00:29:16.320
they go together a lot.
link |
00:29:17.960
So some of them might be solved independently of others,
link |
00:29:21.400
but I think a good solution to AI starts
link |
00:29:24.200
by having real what I would call cognitive models
link |
00:29:27.480
of what's going on.
link |
00:29:28.440
So right now we have an approach that's dominant
link |
00:29:31.320
where you take statistical approximations of things,
link |
00:29:33.920
but you don't really understand them.
link |
00:29:35.760
So you know that bottles are correlated
link |
00:29:38.520
in your data with bottle caps,
link |
00:29:40.280
but you don't understand that there's a thread
link |
00:29:42.240
on the bottle cap that fits with the thread on the bottle
link |
00:29:45.280
and that it tightens, and if I tighten it enough
link |
00:29:47.800
there's a seal and the water can't come out.
link |
00:29:49.640
Like there's no machine that understands that.
link |
00:29:51.960
And having a good cognitive model
link |
00:29:53.800
of that kind of everyday phenomena
link |
00:29:55.480
is what we call common sense.
link |
00:29:56.600
And if you had that, then a lot of these other things
link |
00:29:58.880
start to fall into at least a little bit better place.
link |
00:30:02.760
So right now you're like learning correlations
link |
00:30:04.840
between pixels when you play a video game
link |
00:30:06.520
or something like that.
link |
00:30:07.640
And it doesn't work very well.
link |
00:30:08.920
It works when the video game is just the way
link |
00:30:10.680
that you studied it and then you alter the video game
link |
00:30:12.920
in small ways, like you move the paddle in Breakout
link |
00:30:14.520
a few pixels, and the system falls apart.
link |
00:30:17.440
Because it doesn't understand,
link |
00:30:19.000
it doesn't have a representation of a paddle,
link |
00:30:20.880
a ball, a wall, a set of bricks and so forth.
link |
00:30:23.360
And so it's reasoning at the wrong level.
link |
00:30:26.440
So the idea of common sense, it's full of mystery.
link |
00:30:30.200
You've worked on it, but it's nevertheless full of mystery,
link |
00:30:33.560
full of promise.
link |
00:30:34.720
What does common sense mean?
link |
00:30:36.560
What does knowledge mean?
link |
00:30:38.000
So the way you've been discussing it now is very intuitive.
link |
00:30:40.920
It makes a lot of sense that that is something we should have
link |
00:30:43.160
and that's something deep learning systems don't have.
link |
00:30:45.600
But the argument could be that we're oversimplifying it
link |
00:30:49.720
because we're oversimplifying the notion of common sense
link |
00:30:53.160
because that's how it feels like we as humans
link |
00:30:57.120
at the cognitive level approach problems.
link |
00:30:59.320
So maybe...
link |
00:31:00.160
A lot of people aren't actually gonna read my book.
link |
00:31:03.320
But if they did read the book,
link |
00:31:05.200
one of the things that might come as a surprise to them
link |
00:31:07.120
is that we actually say a common sense is really hard
link |
00:31:10.640
and really complicated.
link |
00:31:11.640
So my critics know that I like common sense,
link |
00:31:15.160
but that chapter actually starts by us beating up
link |
00:31:18.600
not on deep learning,
link |
00:31:19.880
but kind of on our own home team, as it were.
link |
00:31:21.960
So Ernie and I are first and foremost people that believe
link |
00:31:26.040
in at least some of what good old fashioned AI tried to do.
link |
00:31:28.680
So we believe in symbols and logic and programming.
link |
00:31:32.400
Things like that are important.
link |
00:31:33.760
And we go through why even those tools
link |
00:31:37.040
that we hold fairly dear aren't really enough.
link |
00:31:39.560
So we talk about why common sense is actually many things.
link |
00:31:42.680
And some of them fit really well with those
link |
00:31:45.320
classical sets of tools.
link |
00:31:46.560
So things like taxonomy.
link |
00:31:48.240
So I know that a bottle is an object
link |
00:31:51.480
or it's a vessel, let's say.
link |
00:31:52.840
And I know a vessel is an object
link |
00:31:54.480
and objects are material things in the physical world.
link |
00:31:57.600
So I can make some inferences.
link |
00:32:00.520
If I know that vessels need to not have holes in them,
link |
00:32:07.040
in order to carry their contents,
link |
00:32:09.560
then I can infer that a bottle shouldn't have a hole
link |
00:32:11.560
in it in order to carry its contents.
link |
00:32:12.880
So you can do hierarchical inference and so forth.
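(To make that kind of taxonomic, hierarchical inference concrete, here is a minimal Python sketch; the toy is-a links and properties below are illustrative, not anything from the book.)

```python
# A toy taxonomy: each category points to its parent ("is-a" links).
ISA = {"bottle": "vessel", "vessel": "object", "object": None}

# Properties asserted at some level of the hierarchy.
PROPERTIES = {
    "vessel": {"needs_to_be_hole_free_to_carry_contents": True},
    "object": {"is_material_thing": True},
}

def inherited_properties(category):
    """Walk up the is-a chain, collecting properties from every ancestor."""
    props = {}
    while category is not None:
        for key, value in PROPERTIES.get(category, {}).items():
            props.setdefault(key, value)  # more specific categories take precedence
        category = ISA.get(category)
    return props

# A bottle inherits what is known about vessels and about objects in general.
print(inherited_properties("bottle"))
# {'needs_to_be_hole_free_to_carry_contents': True, 'is_material_thing': True}
```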
link |
00:32:15.840
And we say that's great,
link |
00:32:17.280
but it's only a tiny piece of what you need for common sense.
link |
00:32:21.120
And we give lots of examples that don't fit into that.
link |
00:32:23.440
So another one that we talk about is a cheese grater.
link |
00:32:26.480
You've got holes in a cheese grater.
link |
00:32:28.040
You've got a handle on top.
link |
00:32:29.520
You can build a model in the game engine sense of a model
link |
00:32:33.400
so that you could have a little cartoon character
link |
00:32:35.680
flying around through the holes of the grater.
link |
00:32:38.000
But we don't have a system yet,
link |
00:32:40.000
and taxonomy doesn't help us that much here,
link |
00:32:41.640
that really understands why the handle is on top
link |
00:32:43.760
and what you do with the handle
link |
00:32:45.240
or why all of those circles are sharp
link |
00:32:47.600
or how you'd hold the cheese with respect to the grater
link |
00:32:50.480
in order to make it actually work.
link |
00:32:52.120
Do you think these ideas are just abstractions
link |
00:32:55.020
that could emerge on a system like
link |
00:32:57.880
a very large deep neural network?
link |
00:32:59.920
I'm a skeptic that that kind of emergence per se can work.
link |
00:33:03.120
So I think that deep learning might play a role
link |
00:33:05.840
in the systems that do what I want systems to do,
link |
00:33:08.760
but it won't do it by itself.
link |
00:33:09.920
I've never seen a deep learning system
link |
00:33:13.160
really extract an abstract concept.
link |
00:33:15.920
There are principled reasons for that,
link |
00:33:18.840
stemming from how back propagation works,
link |
00:33:20.560
how the architectures are set up.
link |
00:33:22.920
One example is deep learning people
link |
00:33:25.120
actually all build in something called convolution
link |
00:33:29.640
which Yann LeCun is famous for, which is an abstraction.
link |
00:33:33.200
They don't have their systems learn this.
link |
00:33:34.960
So the abstraction is an object that looks the same
link |
00:33:37.760
if it appears in different places.
link |
00:33:39.200
And what LeCun figured out, and
link |
00:33:41.960
essentially why he was a co-winner of the Turing Award,
link |
00:33:44.320
was that if you program this in innately,
link |
00:33:47.640
then your system would be a whole lot more efficient.
link |
00:33:50.680
In principle, this should be learnable,
link |
00:33:53.200
but people don't have systems that kind of reify things
link |
00:33:56.240
and make them more abstract.
link |
00:33:58.000
And so what you'd really wind up with,
link |
00:34:00.440
if you don't program that in advance, is a system
link |
00:34:02.720
that kind of realizes that this is the same thing as this,
link |
00:34:05.460
but then I take your little clock there
link |
00:34:07.000
and I move it over and it doesn't realize
link |
00:34:08.400
that the same thing applies to the clock.
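(A minimal NumPy sketch of the point about convolution: one shared detector gives the same peak response wherever a pattern appears, so "same object, different place" comes for free, which is the prior being built in rather than learned. The filter and signals here are illustrative only.)

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid cross-correlation: one shared set of weights slid over the input."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel for i in range(len(signal) - k + 1)])

pattern = np.array([2.0, 5.0, 2.0])     # the "object" we want to detect
kernel = pattern                        # a matched filter, shared across positions

scene_a = np.zeros(10); scene_a[1:4] = pattern   # object near the left
scene_b = np.zeros(10); scene_b[6:9] = pattern   # same object, shifted right

resp_a, resp_b = conv1d(scene_a, kernel), conv1d(scene_b, kernel)
print(resp_a.max(), resp_b.max())        # identical peak response: 33.0 33.0
print(resp_a.argmax(), resp_b.argmax())  # only the location of the peak moves: 1 6
# A fully connected layer has separate weights per position, so without this
# prior it would have to relearn the detector at every location.
```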
link |
00:34:10.480
So the really nice thing, you're right,
link |
00:34:12.680
that convolution is just one of the things
link |
00:34:14.760
that's like it's an innate feature
link |
00:34:17.160
that's programmed by the human expert,
link |
00:34:19.240
but we need more of those, not less.
link |
00:34:21.240
But the nice feature is,
link |
00:34:23.720
it feels like that requires coming up with that brilliant
link |
00:34:27.240
idea, which can get you a Turing Award,
link |
00:34:29.800
but it requires less effort than encoding
link |
00:34:34.760
and, something we'll talk about with expert systems,
link |
00:34:36.640
encoding a lot of knowledge by hand.
link |
00:34:40.040
So it feels like, one, there's a huge number of limitations
link |
00:34:43.480
which you clearly outline with deep learning,
link |
00:34:46.480
but the nice feature of deep learning,
link |
00:34:47.800
whatever it is able to accomplish,
link |
00:34:49.600
it does it, it does a lot of stuff automatically
link |
00:34:53.520
without human intervention.
link |
00:34:54.920
Well, and that's part of why people love it, right?
link |
00:34:57.120
But I always think of this quote from Bertrand Russell,
link |
00:34:59.800
which is it has all the advantages of theft over honest toil.
link |
00:35:04.400
It's really hard to program into a machine
link |
00:35:08.120
a notion of causality or, you know,
link |
00:35:10.000
even how a bottle works or what containers are.
link |
00:35:12.640
Ernie Davis and I wrote a, I don't know,
link |
00:35:14.240
45 page academic paper trying just to understand
link |
00:35:18.000
what a container is, which I don't think anybody
link |
00:35:19.920
ever read the paper, but it's a very detailed analysis
link |
00:35:24.120
of all the things, not even all,
link |
00:35:25.920
some of the things you need to do
link |
00:35:27.120
in order to understand a container.
link |
00:35:28.560
It would be a whole lot nicer, and, you know,
link |
00:35:30.960
I'm a co author on the paper,
link |
00:35:32.200
I made it a little bit better,
link |
00:35:33.200
but Ernie did the hard work for that particular paper.
link |
00:35:36.600
And it took him like three months
link |
00:35:38.080
to get the logical statements correct.
link |
00:35:40.680
And maybe that's not the right way to do it.
link |
00:35:42.840
It's a way to do it, but on that way of doing it,
link |
00:35:46.120
it's really hard work to do something
link |
00:35:48.440
as simple as understanding containers.
link |
00:35:50.280
And nobody wants to do that hard work.
link |
00:35:52.840
Even Ernie didn't want to do that hard work.
link |
00:35:55.600
Everybody would rather just like feed their system in
link |
00:35:58.360
with a bunch of videos with a bunch of containers
link |
00:36:00.320
and have the systems infer how containers work.
link |
00:36:03.800
It would be like so much less effort,
link |
00:36:05.400
let the machine do the work.
link |
00:36:06.800
And so I understand the impulse,
link |
00:36:08.200
I understand why people want to do that.
link |
00:36:10.200
I just don't think that it works.
link |
00:36:11.840
I've never seen anybody build a system
link |
00:36:14.560
that in a robust way can actually watch videos
link |
00:36:18.680
and predict exactly, you know,
link |
00:36:20.160
which containers would leak
link |
00:36:21.280
and which ones wouldn't, or something like that,
link |
00:36:23.520
and I know someone's gonna go out and do that
link |
00:36:25.040
since I said it, and I look forward to seeing it,
link |
00:36:28.080
but getting these things to work robustly
link |
00:36:30.520
is really, really hard.
link |
00:36:32.880
So Yann LeCun, who was my colleague at NYU
link |
00:36:36.120
for many years, thinks that the hard work
link |
00:36:38.800
should go into defining an unsupervised learning algorithm
link |
00:36:43.120
that will watch videos, use the next frame basically
link |
00:36:46.640
in order to tell it what's going on.
link |
00:36:48.520
And he thinks that's the royal road
link |
00:36:49.920
and he's willing to put in the work
link |
00:36:51.240
in devising that algorithm.
link |
00:36:53.280
Then he wants the machine to do the rest.
link |
00:36:55.560
And again, I understand the impulse.
link |
00:36:57.800
My intuition, based on years of watching this stuff
link |
00:37:01.720
and making predictions 20 years ago that still hold,
link |
00:37:03.960
even though there's a lot more computation and so forth,
link |
00:37:06.480
is that we actually have to do a different kind of hard work,
link |
00:37:08.520
which is more like building a design specification
link |
00:37:11.320
for what we want the system to do,
link |
00:37:13.120
doing hard engineering work to figure out
link |
00:37:15.040
how we do things like what Yann did for convolution
link |
00:37:18.440
in order to figure out how to encode complex knowledge
link |
00:37:21.680
into the systems.
link |
00:37:22.640
The current systems don't have that much knowledge
link |
00:37:25.320
other than convolution,
link |
00:37:26.920
which is again this, you know,
link |
00:37:28.120
objects appearing in different places
link |
00:37:30.560
and having the same perception, I guess I'll say.
link |
00:37:34.480
Same appearance.
link |
00:37:36.720
People don't wanna do that work.
link |
00:37:38.280
They don't see how to naturally fit one with the other.
link |
00:37:41.480
I think that's, yes, absolutely.
link |
00:37:43.320
But also on the expert system side,
link |
00:37:45.560
there's a temptation to go too far the other way.
link |
00:37:47.640
So it was just having an expert sort of sit down
link |
00:37:49.880
and encode the description, the framework
link |
00:37:52.720
for what a container is,
link |
00:37:54.080
and then having the system reason for the rest.
link |
00:37:56.560
From my view, like one really exciting possibility
link |
00:37:59.280
is of active learning where it's continuous interaction
link |
00:38:02.200
between a human and machine.
link |
00:38:04.120
As the machine, there's kind of deep learning type
link |
00:38:07.080
extraction of information from data patterns and so on,
link |
00:38:10.160
but humans also guiding the learning procedures,
link |
00:38:14.680
guiding both the process and the framework
link |
00:38:19.960
of how the machine learns, whatever the task is.
link |
00:38:22.200
I was with you with almost everything you said,
link |
00:38:24.120
except the phrase deep learning.
link |
00:38:26.520
What I think you really want there
link |
00:38:28.240
is a new form of machine learning.
link |
00:38:30.520
So let's remember deep learning is a particular way
link |
00:38:33.000
of doing machine learning.
link |
00:38:34.040
Most often it's done with supervised data
link |
00:38:37.040
for perceptual categories.
link |
00:38:38.840
There are other things you can do with deep learning.
link |
00:38:41.760
Some of them quite technical,
link |
00:38:42.760
but the standard use of deep learning
link |
00:38:44.640
is I have a lot of examples and I have labels for them.
link |
00:38:47.640
So here are pictures.
link |
00:38:48.840
This one's the Eiffel Tower.
link |
00:38:50.400
This one's the Sears Tower.
link |
00:38:51.680
This one's the Empire State Building.
link |
00:38:53.360
This one's a cat.
link |
00:38:54.200
This one's a pig and so forth.
link |
00:38:55.040
You just get millions of examples, millions of labels.
link |
00:38:58.880
And deep learning is extremely good at that.
link |
00:39:01.240
It's better than any other solution
link |
00:39:02.680
that anybody has devised,
link |
00:39:04.440
but it is not good at representing abstract knowledge.
link |
00:39:07.400
It's not good at representing things like bottles
link |
00:39:10.720
contain liquid and have tops to them and so forth.
link |
00:39:14.320
It's not very good at learning
link |
00:39:15.840
or representing that kind of knowledge.
link |
00:39:17.840
It is an example of having a machine learn something,
link |
00:39:21.320
but it's a machine that learns a particular kind of thing,
link |
00:39:23.920
which is object classification.
link |
00:39:25.520
It's not a particularly good algorithm
link |
00:39:27.720
for learning about the abstractions
link |
00:39:29.600
that govern our world.
link |
00:39:30.760
There may be such a thing,
link |
00:39:33.040
part of what we counsel in the book
link |
00:39:34.280
is maybe people should be working on devising such things.
link |
00:39:36.960
So one possibility, just I wonder what you think about it,
link |
00:39:40.520
is that deep neural networks do form abstractions,
link |
00:39:45.160
but they're not accessible to us humans
link |
00:39:48.480
in terms of we can't.
link |
00:39:49.320
There's some truth in that.
link |
00:39:50.720
So is it possible that either current or future neural networks
link |
00:39:54.760
form very high level abstractions,
link |
00:39:56.480
which are as powerful as our human abstractions of common sense,
link |
00:40:02.360
we just can't get a hold of them.
link |
00:40:04.840
And so the problem is essentially
link |
00:40:06.560
we need to make them explainable.
link |
00:40:09.160
This is an astute question,
link |
00:40:10.560
but I think the answer is at least partly no.
link |
00:40:13.000
One of the kinds of classical neural network architecture
link |
00:40:16.000
is what we call an auto associator.
link |
00:40:17.560
It just tries to take an input, goes through a set of hidden layers
link |
00:40:21.440
and comes out with an output.
link |
00:40:23.000
And it's supposed to learn essentially the identity function,
link |
00:40:25.400
that your input is the same as your output.
link |
00:40:27.200
So you think of this with binary numbers,
link |
00:40:28.400
you've got like the one, the two, the four, the eight,
link |
00:40:30.600
the 16 and so forth.
link |
00:40:32.120
And so if you want to input 24, you turn on the 16,
link |
00:40:35.000
you turn on the eight.
link |
00:40:35.840
It's like binary one, one and a bunch of zeros.
link |
00:40:38.920
So I did some experiments in 1998
link |
00:40:41.600
with the precursors of contemporary deep learning.
link |
00:40:46.720
And what I showed was you could train these networks
link |
00:40:50.520
on all the even numbers
link |
00:40:52.120
and they would never generalize to the odd numbers.
link |
00:40:54.720
A lot of people thought that I was, I don't know,
link |
00:40:56.760
an idiot or faking the experiment, or that it wasn't true or whatever,
link |
00:41:00.160
but it is true that with this class of networks
link |
00:41:03.320
that we had in that day,
link |
00:41:04.920
that they would never, ever make this generalization.
link |
00:41:07.440
And it's not that the networks were stupid,
link |
00:41:09.680
it's that they see the world in a different way than we do.
link |
00:41:13.440
They were basically concerned,
link |
00:41:14.720
what is the probability that the right most output node
link |
00:41:18.640
is going to be a one?
link |
00:41:20.000
And as far as they were concerned,
link |
00:41:21.240
in everything that they'd ever been trained on,
link |
00:41:22.840
it was a zero, that node had never been turned on.
link |
00:41:27.040
And so they figured, why turn it on now?
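A rough modern reconstruction of the kind of experiment being described, assuming PyTorch; the encoding width, network size, and training details are illustrative rather than the original 1998 setup:

```python
# Illustrative reconstruction: an auto-associator trained only on even
# numbers never learns to turn on the rightmost (ones) bit, so it fails
# to reproduce odd inputs.
import torch
import torch.nn as nn

def to_bits(n, width=5):
    return torch.tensor([float(b) for b in format(n, f"0{width}b")])

evens = torch.stack([to_bits(n) for n in range(0, 32, 2)])
odds = torch.stack([to_bits(n) for n in range(1, 32, 2)])

model = nn.Sequential(nn.Linear(5, 8), nn.Sigmoid(),
                      nn.Linear(8, 5), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(2000):                  # learn the identity function on evens
    loss = loss_fn(model(evens), evens)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    # The ones bit was 0 in every training example, so the network keeps
    # predicting roughly 0 for it even when the input bit is 1.
    print(model(odds)[:, -1])          # stays near 0, not near 1
```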
link |
00:41:28.960
Whereas a person would look at the same problem
link |
00:41:30.720
and say, well, it's obvious,
link |
00:41:31.720
we're just doing the thing that corresponds.
link |
00:41:33.800
The Latin for it is mutatis mutandis,
link |
00:41:35.520
we'll change what needs to be changed.
link |
00:41:38.200
And we do this, this is what algebra is.
link |
00:41:40.520
So I can do f of x equals x plus two
link |
00:41:43.840
and I can do it for a couple of values.
link |
00:41:45.360
I can tell you if x is three, then f of x is five
link |
00:41:47.720
and if x is four, f of x is six.
link |
00:41:49.160
And now I can do it with some totally different number,
link |
00:41:50.960
like a million, then you can say,
link |
00:41:52.000
well, obviously it's a million and two
link |
00:41:53.120
because you have an algebraic operation
link |
00:41:55.600
that you're applying to a variable.
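A tiny illustration of the contrast: an operation applied to a variable extends to any value, while a memorized table of training pairs does not (the function and table below are made up for the example):

```python
# The algebraic rule applies the same operation to any value of the
# variable, including values never seen before; a memorized table of
# training pairs has nothing to say about a novel input.
def f(x):            # operation over a variable: works for any x
    return x + 2

training_pairs = {3: 5, 4: 6}          # what a purely correlational learner stored

print(f(1_000_000))                    # 1000002: the rule generalizes
print(training_pairs.get(1_000_000))   # None: the table does not
```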
link |
00:41:57.440
And deep learning systems kind of emulate that,
link |
00:42:00.600
but they don't actually do it.
link |
00:42:02.480
For that particular example,
link |
00:42:04.120
you could fudge a solution to that particular problem.
link |
00:42:08.120
The general form of that problem remains
link |
00:42:10.480
that what they learn is really correlations
link |
00:42:12.360
between different input and output nodes.
link |
00:42:14.280
And they're complex correlations
link |
00:42:15.640
with multiple nodes involved and so forth,
link |
00:42:18.320
but ultimately they're correlative.
link |
00:42:20.200
They're not structured over these operations
link |
00:42:22.360
over variables.
link |
00:42:23.200
Now, someday people may do a new form of deep learning
link |
00:42:25.920
that incorporates that stuff
link |
00:42:27.280
and I think it will help a lot.
link |
00:42:28.480
And there's some tentative work on things
link |
00:42:30.240
like differentiable programming right now
link |
00:42:32.160
that fall into that category.
link |
00:42:34.200
But there's sort of classic stuff
link |
00:42:35.480
like people use for ImageNet, doesn't have it.
link |
00:42:38.760
And you have people like Hinton going around
link |
00:42:40.480
and saying symbol manipulation like what Marcus,
link |
00:42:42.960
what I advocate is like the gasoline engine.
link |
00:42:45.760
It's obsolete.
link |
00:42:46.600
We should just use this cool electric power
link |
00:42:48.920
that we've got with the deep learning.
link |
00:42:50.400
And that's really destructive
link |
00:42:52.080
because we really do need to have the gasoline engine stuff
link |
00:42:56.000
that represents, I mean, I don't think it's a good analogy,
link |
00:42:59.680
but we really do need to have the stuff
link |
00:43:02.280
that represents symbols.
link |
00:43:03.760
Yeah, and Hinton as well would say that
link |
00:43:06.520
we do need to throw out everything and start over.
link |
00:43:09.040
So I mean, there is a question.
link |
00:43:10.600
Yeah, Hinton said that to Axios
link |
00:43:12.840
and I had a friend who interviewed him
link |
00:43:15.520
and tried to pin him down on what exactly we need to throw out
link |
00:43:17.800
and he was very evasive.
link |
00:43:19.880
Well, of course, because we can't,
link |
00:43:21.640
if he knew that he'd throw it out himself,
link |
00:43:23.880
but I mean, he can't have it both ways.
link |
00:43:25.400
He can't be like, I don't know what to throw out,
link |
00:43:27.520
but I am gonna throw out the symbols.
link |
00:43:29.960
I mean, and not just the symbols,
link |
00:43:32.120
but the variables and the operations over variables.
link |
00:43:34.080
Don't forget the operations over variables,
link |
00:43:36.120
the stuff that I'm endorsing
link |
00:43:37.760
and which John McCarthy did when he founded AI.
link |
00:43:41.520
That stuff is the stuff that we build most computers out of.
link |
00:43:44.200
There are people now who say,
link |
00:43:45.440
we don't need computer programmers anymore.
link |
00:43:48.800
Not quite looking at the statistics
link |
00:43:50.280
of how much computer programmers actually get paid right now.
link |
00:43:53.000
We need lots of computer programmers
link |
00:43:54.440
and most of them, they do a little bit of machine learning,
link |
00:43:57.800
but they still do a lot of code, right?
link |
00:43:59.920
Code where it's like, if the value of X is greater
link |
00:44:02.640
than the value of Y, then do this kind of thing,
link |
00:44:04.520
like conditionals and comparing operations over variables.
link |
00:44:08.080
Like there's this fantasy, you can machine learn anything.
link |
00:44:10.200
There's some things you would never wanna machine learn.
link |
00:44:12.520
I would not use a phone operating system
link |
00:44:14.960
that was machine learned.
link |
00:44:16.080
Like you made a bunch of phone calls
link |
00:44:17.760
and you recorded which packets were transmitted
link |
00:44:19.720
and you just machine learned it, it'd be insane.
link |
00:44:22.480
Or to build a web browser by taking logs of keystrokes
link |
00:44:27.440
and images, screenshots,
link |
00:44:29.440
and then trying to learn the relation between them.
link |
00:44:31.480
Nobody would ever, no rational person
link |
00:44:33.840
would ever try to build a browser that way.
link |
00:44:35.920
They would use symbol manipulation,
link |
00:44:37.440
the stuff that I think AI needs to avail itself of
link |
00:44:40.080
in addition to deep learning.
link |
00:44:42.080
Can you describe what your view of symbol manipulation
link |
00:44:46.480
in its early days?
link |
00:44:47.880
Can you describe expert systems
link |
00:44:49.520
and where do you think they hit a wall
link |
00:44:52.520
or a set of challenges?
link |
00:44:53.920
Sure, so I mean, first I just wanna clarify.
link |
00:44:56.560
I'm not endorsing expert systems per se.
link |
00:44:58.920
You've been kind of contrasting them.
link |
00:45:00.720
There is a contrast,
link |
00:45:01.560
but that's not the thing that I'm endorsing.
link |
00:45:03.240
Yes.
link |
00:45:04.200
So expert systems try to capture things
link |
00:45:06.480
like medical knowledge with a large set of rules.
link |
00:45:09.440
So if the patient has this symptom and this other symptom,
link |
00:45:12.800
then it is likely that they have this disease.
link |
00:45:15.680
So there are logical rules
link |
00:45:16.840
and they were symbol manipulating rules of just the sort
link |
00:45:18.920
that I'm talking about.
link |
00:45:20.920
And the problem. They encode a set of knowledge
link |
00:45:23.400
that the experts then put in.
link |
00:45:24.960
And very explicitly so.
link |
00:45:26.240
So you'd have somebody interview an expert
link |
00:45:28.760
and then try to turn that stuff into rules.
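A toy illustration of the kind of hand-written rule being described; the symptoms and conclusions are invented placeholders, not taken from any real expert system:

```python
# Toy illustration of a 1980s-style hand-written rule: knowledge elicited
# from an expert, encoded as explicit if-then rules over symbols. The
# symptoms and diseases here are placeholders, not medical advice.
def diagnose(symptoms):
    if "fever" in symptoms and "stiff neck" in symptoms:
        return "possible meningitis, urgent evaluation"
    if "fever" in symptoms and "cough" in symptoms:
        return "possible respiratory infection"
    return "no rule fired"

print(diagnose({"fever", "cough"}))
```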
link |
00:45:31.880
And at some level I'm arguing for rules,
link |
00:45:33.920
but the difference is those guys did in the 80s
link |
00:45:37.640
was almost entirely rules,
link |
00:45:39.960
almost entirely handwritten with no machine learning.
link |
00:45:42.920
What a lot of people are doing now
link |
00:45:44.280
is almost entirely one species of machine learning
link |
00:45:47.320
with no rules.
link |
00:45:48.240
And what I'm counseling is actually a hybrid.
link |
00:45:50.320
I'm saying that both of these things have their advantage.
link |
00:45:52.880
So if you're talking about perceptual classification,
link |
00:45:55.280
how do I recognize a bottle?
link |
00:45:57.080
Deep learning is the best tool we've got right now.
link |
00:45:59.480
If you're talking about making inferences
link |
00:46:00.880
about what a bottle does,
link |
00:46:02.360
something closer to the expert systems
link |
00:46:04.080
is probably still the best available alternative.
link |
00:46:07.280
And probably we want something that is better able
link |
00:46:09.800
to handle quantitative and statistical information
link |
00:46:12.560
than those classical systems typically were.
link |
00:46:14.880
So we need new technologies
link |
00:46:16.920
that are gonna draw some of the strengths
link |
00:46:18.560
of both the expert systems and the deep learning,
link |
00:46:21.000
but are gonna find new ways to synthesize them.
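A minimal sketch of the hybrid being proposed, where a learned classifier handles the perceptual question and explicit rules handle the inference; the perceive function and the knowledge table are invented placeholders for a trained deep model and a curated knowledge base:

```python
# Minimal sketch of the hybrid idea: perception by a learned classifier,
# inference by explicit rules. `perceive` stands in for a trained deep
# model; everything here is an illustrative placeholder.
KNOWLEDGE = {
    "bottle": {"can_contain": "liquid", "leaks_if": "tipped over without a cap"},
    "cup":    {"can_contain": "liquid", "leaks_if": "tipped over"},
}

def perceive(image):
    # Placeholder for a deep learning classifier, e.g. a CNN.
    return "bottle"

def infer(label, question):
    facts = KNOWLEDGE.get(label, {})
    return facts.get(question, "unknown")

label = perceive(image=None)
print(label, "->", infer(label, "leaks_if"))
```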
link |
00:46:23.200
How hard do you think it is to add knowledge at the low level?
link |
00:46:27.680
So, mining human intellects to add extra information
link |
00:46:32.120
to symbol manipulating systems?
link |
00:46:36.520
In some domains, it's not that hard,
link |
00:46:37.840
but it's often really hard.
link |
00:46:40.080
Partly because a lot of the things that are important,
link |
00:46:44.120
people wouldn't bother to tell you.
link |
00:46:46.080
So if you pay someone on Amazon Mechanical Turk
link |
00:46:49.680
to tell you stuff about bottles,
link |
00:46:52.080
they probably won't even bother to tell you
link |
00:46:55.080
some of the basic level stuff
link |
00:46:57.040
that's just so obvious to a human being
link |
00:46:59.160
and yet so hard to capture in machines.
link |
00:47:03.840
You know, they're gonna tell you more exotic things
link |
00:47:06.560
and like they're all well and good,
link |
00:47:08.960
but they're not getting to the root of the problem.
link |
00:47:12.480
So untutored humans aren't very good at knowing
link |
00:47:16.520
and why should they be,
link |
00:47:18.360
what kind of knowledge the computer system developers
link |
00:47:22.280
actually need.
link |
00:47:23.480
I don't think that that's an irremediable problem.
link |
00:47:26.640
I think it's historically been a problem.
link |
00:47:28.640
People have had crowdsourcing efforts
link |
00:47:31.080
and they don't work that well.
link |
00:47:32.040
There's one at MIT.
link |
00:47:32.960
We're recording this at MIT called Virtual Home
link |
00:47:36.520
where, and we talk about this in the book.
link |
00:47:39.560
You can find the exact example there,
link |
00:47:40.720
but people were asked to do things
link |
00:47:42.800
like describe an exercise routine.
link |
00:47:44.880
And the things that the people describe
link |
00:47:47.560
are very low level and don't really capture what's going on.
link |
00:47:50.080
So they're like, go to the room with the television
link |
00:47:53.120
and the weights, turn on the television,
link |
00:47:56.120
press the remote to turn on the television,
link |
00:47:59.040
lift weight, put weight down,
link |
00:48:01.480
it's like very micro level.
link |
00:48:03.640
And it's not telling you what an exercise routine
link |
00:48:06.120
is really about, which is like,
link |
00:48:07.960
I wanna fit a certain number of exercises
link |
00:48:09.920
in a certain time period,
link |
00:48:11.000
I wanna emphasize these muscles.
link |
00:48:12.720
You want some kind of abstract description.
link |
00:48:15.120
The fact that you happen to press the remote control
link |
00:48:17.280
in this room when you watch this television
link |
00:48:20.040
isn't really the essence of the exercise routine,
link |
00:48:23.080
but if you just ask people like, what did they do?
link |
00:48:24.800
Then they give you this fine grain.
link |
00:48:27.000
And so it takes a little level of expertise
link |
00:48:29.800
about how the AI works in order to craft
link |
00:48:33.640
the right kind of knowledge.
link |
00:48:34.480
So there's this ocean of knowledge
link |
00:48:36.200
that we all operate on.
link |
00:48:37.600
Some of it may not even be conscious,
link |
00:48:39.360
or at least we're not able to communicate it effectively.
link |
00:48:43.280
Yeah, most of it we would recognize if somebody said it,
link |
00:48:45.720
if it was true or not,
link |
00:48:47.440
but we wouldn't think to say that it's true or not.
link |
00:48:49.680
It's a really interesting mathematical property.
link |
00:48:53.080
This ocean has the property that every piece
link |
00:48:55.480
of knowledge in it,
link |
00:48:56.720
we will recognize it as true if we're told,
link |
00:48:59.960
but we're unlikely to retrieve it in the reverse.
link |
00:49:04.120
So that interesting property,
link |
00:49:07.200
I would say there's a huge ocean of that knowledge.
link |
00:49:10.600
What's your intuition?
link |
00:49:11.600
Is it accessible to AI systems somehow?
link |
00:49:14.680
Can we, so you said,
link |
00:49:16.680
I mean, most of it is not,
link |
00:49:18.760
well, I'll give you an asterisk on this in a second,
link |
00:49:20.520
but most of it is not ever been encoded
link |
00:49:23.280
in machine interpretable form.
link |
00:49:25.720
And so, I mean, if you say accessible,
link |
00:49:27.320
there's two meanings of that.
link |
00:49:28.680
One is like, could you build it into a machine?
link |
00:49:31.600
Yes.
link |
00:49:32.440
The other is like, is there some database
link |
00:49:34.480
that we could go download and stick into our machine?
link |
00:49:38.440
But the first thing, no.
link |
00:49:39.520
Could we?
link |
00:49:40.520
Is what's your intuition?
link |
00:49:41.360
I think we could.
link |
00:49:42.200
I think it hasn't been done right.
link |
00:49:45.200
The closest, and this is the asterisk,
link |
00:49:47.320
is the CYC system, which tried to do this.
link |
00:49:51.200
A lot of logicians worked for Doug Lenat
link |
00:49:53.080
for 30 years on this project.
link |
00:49:55.480
I think they stuck too closely to logic,
link |
00:49:57.920
didn't represent enough about probabilities,
link |
00:50:00.240
tried to hand code it, there are various issues,
link |
00:50:02.200
and it hasn't been that successful.
link |
00:50:04.520
That is the closest existing system
link |
00:50:08.520
to trying to encode this.
link |
00:50:10.640
Why do you think there's not more excitement
link |
00:50:13.480
slash money behind this idea currently?
link |
00:50:16.440
There was, people view that project as a failure.
link |
00:50:19.160
I think that they confused the failure of a specific instance
link |
00:50:23.160
that was conceived 30 years ago for the failure of an approach,
link |
00:50:26.160
which they don't do for deep learning.
link |
00:50:28.120
So in 2010, people had the same attitude towards deep learning.
link |
00:50:32.680
They're like, this stuff doesn't really work.
link |
00:50:35.480
And all these other algorithms work better and so forth.
link |
00:50:39.120
And then certain key technical advances were made.
link |
00:50:41.840
But mostly, it was the advent of graphics processing units
link |
00:50:45.040
that changed that.
link |
00:50:46.400
It wasn't even anything foundational in the techniques.
link |
00:50:50.040
And there were some new tricks.
link |
00:50:51.200
But mostly, it was just more compute and more data,
link |
00:50:55.280
things like ImageNet that didn't exist before,
link |
00:50:57.880
that allowed deep learning,
link |
00:50:59.040
and allowed it to work.
link |
00:51:00.880
It could be that CYC just needs a few more things
link |
00:51:03.760
or something like CYC.
link |
00:51:05.440
But the widespread view is that that just doesn't work.
link |
00:51:08.840
And people are reasoning from a single example.
link |
00:51:11.760
They don't do that with deep learning.
link |
00:51:13.240
They don't say that nothing that existed in 2010,
link |
00:51:16.600
and there were many, many efforts in deep learning,
link |
00:51:18.880
was really worth anything.
link |
00:51:20.600
I mean, really, there's no model from 2010
link |
00:51:23.840
in deep learning that has any commercial value whatsoever
link |
00:51:28.440
at this point.
link |
00:51:29.640
They're all failures.
link |
00:51:31.360
But that doesn't mean that there wasn't anything there.
link |
00:51:33.520
I have a friend who I was getting to know him.
link |
00:51:35.960
And he said, I had a company, too.
link |
00:51:38.840
I was talking about I had a new company.
link |
00:51:40.640
And he said, I had a company, too, and it failed.
link |
00:51:43.400
And I said, well, what did you do?
link |
00:51:44.320
And he said, deep learning.
link |
00:51:45.680
And the problem was he did it in 1986 or something like that.
link |
00:51:48.680
And we didn't have the tools then or 1990.
link |
00:51:51.120
We didn't have the tools then, not the algorithms.
link |
00:51:53.960
His algorithms weren't that different from other algorithms.
link |
00:51:56.560
But he didn't have the GPUs to run it fast enough.
link |
00:51:58.480
He didn't have the data.
link |
00:51:59.720
And so it failed.
link |
00:52:01.360
It could be that symbol manipulation, per se,
link |
00:52:06.920
with modern amounts of data and compute
link |
00:52:09.560
and maybe some advances in compute for that kind of computation,
link |
00:52:13.720
might be great.
link |
00:52:14.880
My perspective on it is not that we
link |
00:52:18.440
want to resuscitate that stuff, per se,
link |
00:52:20.000
but we want to borrow lessons from it, bring together
link |
00:52:22.040
with other things that we've learned.
link |
00:52:23.480
And it might have an ImageNet moment where it will spark
link |
00:52:27.120
the world's imagination.
link |
00:52:28.200
And there will be an explosion of symbol manipulation efforts.
link |
00:52:31.480
Yeah, I think that people at AI2, the Paul Allen AI Institute,
link |
00:52:35.720
are trying to build data sets that, well,
link |
00:52:39.400
they're not doing it for quite the reason that you say,
link |
00:52:41.120
but they're trying to build data sets that at least
link |
00:52:43.600
spark interest in common sense reasoning.
link |
00:52:45.400
To create benchmarks that get people thinking.
link |
00:52:46.800
Benchmarks for common sense, that's
link |
00:52:48.400
a large part of what the AI2.org is working on right now.
link |
00:52:52.040
So speaking of compute, Rich Sutton
link |
00:52:54.320
wrote a blog post titled The Bitter Lesson.
link |
00:52:56.400
I don't know if you've read it, but he said that the biggest
link |
00:52:58.800
lesson that can be read from 70 years of AI research
link |
00:53:01.560
is that general methods that leverage computation
link |
00:53:04.200
are ultimately the most effective.
link |
00:53:06.400
Do you think that?
link |
00:53:07.000
The most effective of what?
link |
00:53:08.880
So they have been most effective for perceptual classification
link |
00:53:13.360
problems and for some reinforcement learning problems.
link |
00:53:18.040
He works on reinforcement learning.
link |
00:53:19.400
Well, no, let me push back on that.
link |
00:53:20.720
You're actually absolutely right.
link |
00:53:22.840
But I would also say they've been most effective generally
link |
00:53:28.120
because everything we've done up to the point.
link |
00:53:31.520
Would you argue against that?
link |
00:53:33.560
To me, deep learning is the first thing
link |
00:53:36.280
that has been successful at anything in AI.
link |
00:53:42.200
And you're pointing out that this success is very limited,
link |
00:53:46.280
in focus.
link |
00:53:47.120
But has there been something truly successful
link |
00:53:50.280
before deep learning?
link |
00:53:51.680
Sure.
link |
00:53:53.680
I want to make a larger point.
link |
00:53:54.880
But on the narrower point, classical AI
link |
00:53:59.640
is used, for example, in doing navigation instructions.
link |
00:54:04.560
It's very successful.
link |
00:54:06.040
Everybody on the planet uses it now, like multiple times a day.
link |
00:54:09.440
That's a measure of success, right?
link |
00:54:12.240
So I don't think classical AI was wildly successful.
link |
00:54:16.080
But there are cases like that that are used all the time.
link |
00:54:19.160
Nobody even notices them because they're so pervasive.
link |
00:54:23.760
So there are some successes for classical AI.
link |
00:54:26.480
I think deep learning has been more successful.
link |
00:54:28.680
But my usual line about this, and I didn't invent it,
link |
00:54:32.040
but I like it a lot, is just because you
link |
00:54:33.760
can build a better ladder doesn't mean
link |
00:54:35.560
you can build a ladder to the moon.
link |
00:54:37.200
So the bitter lesson is if you have a perceptual classification
link |
00:54:41.000
problem, throwing a lot of data at it
link |
00:54:43.800
is better than anything else.
link |
00:54:45.760
But that has not given us any material progress
link |
00:54:50.000
in natural language understanding,
link |
00:54:51.880
common sense reasoning like a robot would
link |
00:54:53.960
need to navigate a home.
link |
00:54:56.240
Problems like that, there is no actual progress there.
link |
00:54:59.440
So flip side of that, if we remove data from the picture,
link |
00:55:02.240
another bitter lesson is that you just have a very simple
link |
00:55:09.120
algorithm and you wait for compute to scale.
link |
00:55:12.240
This doesn't have to be learning.
link |
00:55:13.520
It doesn't have to be deep learning.
link |
00:55:14.840
It doesn't have to be data driven,
link |
00:55:16.360
but just wait for the compute.
link |
00:55:18.240
So my question for you, do you think
link |
00:55:19.880
compute can unlock some of the things
link |
00:55:21.640
with either deep learning or symbol manipulation?
link |
00:55:25.440
Sure, but I'll put a proviso on that.
link |
00:55:29.840
More compute's always better, like nobody's
link |
00:55:32.440
going to argue with more compute.
link |
00:55:33.640
It's like having more money.
link |
00:55:34.720
I mean, there's the data.
link |
00:55:36.080
There's diminishing returns on more money.
link |
00:55:37.480
Exactly.
link |
00:55:37.980
There's diminishing returns on more money,
link |
00:55:39.760
but nobody's going to argue if you
link |
00:55:41.280
want to give them more money, right?
link |
00:55:42.680
Except maybe the people who signed the giving pledge,
link |
00:55:44.680
and some of them have a problem.
link |
00:55:46.120
They have problems to give away more money
link |
00:55:48.040
than they're able to.
link |
00:55:49.720
But the rest of us, if you want to give me more money, fine.
link |
00:55:52.520
Say more money, more problems, but OK.
link |
00:55:54.600
That's true too.
link |
00:55:55.880
What I would say to you is your brain uses like 20 watts,
link |
00:56:00.120
and it does a lot of things that deep learning doesn't do,
link |
00:56:02.720
or that symbol manipulation doesn't do,
link |
00:56:04.720
that AI just hasn't figured out how to do.
link |
00:56:07.040
So it's an existence proof that you
link |
00:56:09.440
don't need server resources that are Google scale in order
link |
00:56:14.240
to have an intelligence.
link |
00:56:16.120
I built, with a lot of help from my wife,
link |
00:56:18.920
two intelligences that are 20 watts each
link |
00:56:21.680
and far exceed anything that anybody else has built in silicon.
link |
00:56:27.320
Speaking of those two robots, what
link |
00:56:30.280
have you learned about AI from having them?
link |
00:56:33.280
Well, they're not robots, but.
link |
00:56:35.320
Sorry, intelligent agents.
link |
00:56:36.800
There's two intelligent agents.
link |
00:56:38.160
I've learned a lot by watching my two intelligent agents.
link |
00:56:42.760
I think that what's fundamentally interesting,
link |
00:56:45.840
well, one of the many things that's fundamentally interesting
link |
00:56:48.000
about them is the way that they set their own problems
link |
00:56:50.800
to solve.
link |
00:56:52.040
So my two kids are a year and a half apart.
link |
00:56:54.560
They're five and six and a half.
link |
00:56:56.480
They play together all the time, and they're constantly
link |
00:56:59.560
creating new challenges.
link |
00:57:00.840
Like that's what they do, is they make up games,
link |
00:57:03.840
and they're like, well, what if this, or what if that,
link |
00:57:05.960
or what if I had this superpower,
link |
00:57:07.880
or what if you could walk through this wall.
link |
00:57:10.400
So they're doing these what if scenarios all the time.
link |
00:57:14.120
And that's how they learn something about the world
link |
00:57:17.600
and grow their minds, and machines don't really do that.
link |
00:57:22.640
So that's interesting.
link |
00:57:23.680
And you've talked about this, you've written about it,
link |
00:57:25.280
you thought about it, nature versus nurture.
link |
00:57:29.320
So what innate knowledge do you think we're born with?
link |
00:57:33.640
And what do we learn along the way
link |
00:57:35.600
in those early months and years?
link |
00:57:38.320
Can I just say how much I like that question?
link |
00:57:41.600
You phrased it just right, and almost nobody ever does.
link |
00:57:45.840
Which is what is the innate knowledge
link |
00:57:47.280
and what's learned along the way.
link |
00:57:49.280
So many people dichotomize it,
link |
00:57:51.240
and they think it's nature versus nurture.
link |
00:57:53.480
When it obviously has to be nature and nurture,
link |
00:57:56.840
they have to work together.
link |
00:57:58.640
You can't learn the stuff along the way
link |
00:58:00.560
unless you have some innate stuff.
link |
00:58:02.400
But just because you have the innate stuff
link |
00:58:03.960
doesn't mean you don't learn anything.
link |
00:58:05.920
And so many people get that wrong, including in the field.
link |
00:58:09.320
Like people think, if I work in machine learning,
link |
00:58:12.280
the learning side, I must not be allowed to work
link |
00:58:15.360
on the innate side because that would be cheating.
link |
00:58:17.360
Exactly, people have said that to me.
link |
00:58:19.680
And it's just absurd.
link |
00:58:21.680
So thank you.
link |
00:58:23.440
But you could break that apart more.
link |
00:58:25.240
I've talked to folks who studied
link |
00:58:26.640
the development of the brain.
link |
00:58:28.320
And I mean, the growth of the brain
link |
00:58:30.760
in the first few days, in the first few months,
link |
00:58:35.000
in the womb, all of that, is that innate?
link |
00:58:39.600
So that process of development from a stem cell
link |
00:58:42.400
to the growth, the central nervous system and so on,
link |
00:58:45.480
to the information that's encoded
link |
00:58:49.360
through the long arc of evolution.
link |
00:58:52.360
So all of that comes into play and it's unclear.
link |
00:58:55.400
It's not just whether it's the dichotomy or not.
link |
00:58:57.400
It's where most, or where the knowledge is encoded.
link |
00:59:02.160
So what's your intuition about the innate knowledge,
link |
00:59:07.720
the power of it, what's contained in it?
link |
00:59:09.760
What can we learn from it?
link |
00:59:11.440
One of my earlier books was actually
link |
00:59:12.600
trying to understand the biology of this.
link |
00:59:14.040
The book was called The Birth of the Mind.
link |
00:59:15.880
Like how is it the genes even build innate knowledge?
link |
00:59:18.920
And from the perspective of the conversation
link |
00:59:21.480
we're having today, there's actually two questions.
link |
00:59:23.640
One is what innate knowledge or mechanisms
link |
00:59:26.520
or what have you,
link |
00:59:28.320
people or other animals might be endowed with.
link |
00:59:30.920
I always like showing this video
link |
00:59:32.280
of a baby Ibex climbing down a mountain.
link |
00:59:34.640
That baby Ibex a few hours after his birth
link |
00:59:37.400
knows how to climb down a mountain.
link |
00:59:38.440
That means that it knows, not consciously,
link |
00:59:40.960
something about its own body and physics
link |
00:59:43.040
and 3D geometry and all of this kind of stuff.
link |
00:59:47.520
So there's one question about like what does biology
link |
00:59:49.720
give its creatures and what has evolved in our brains?
link |
00:59:53.240
How is that represented in our brains?
link |
00:59:55.000
The question I thought about in the book,
link |
00:59:56.200
The Birth of the Mind.
link |
00:59:57.360
And then there's a question of what AI should have.
link |
00:59:59.320
And they don't have to be the same.
link |
01:00:01.600
But I would say that it's a pretty interesting set
link |
01:00:07.240
of things that we are equipped with
link |
01:00:08.720
that allows us to do a lot of interesting things.
link |
01:00:10.520
So I would argue or guess based on my reading
link |
01:00:13.760
of the developmental psychology literature,
link |
01:00:15.280
which I've also participated in,
link |
01:00:18.040
that children are born with a notion of space, time,
link |
01:00:22.600
other agents, places,
link |
01:00:25.760
and also this kind of mental algebra
link |
01:00:27.680
that I was describing before.
link |
01:00:30.280
Notions of causation, if I didn't just say that.
link |
01:00:33.120
So at least those kinds of things.
link |
01:00:35.680
They're like frameworks for learning the other things.
link |
01:00:38.800
So are they disjoint in your view?
link |
01:00:40.400
Or is it just somehow all connected?
link |
01:00:42.920
You've talked a lot about language.
link |
01:00:44.400
Is it all kind of connected in some mesh
link |
01:00:48.000
that's language like if understanding concepts altogether?
link |
01:00:52.640
Or I don't think we know for people
link |
01:00:54.880
how they're represented, and machines
link |
01:00:56.280
just don't really do this yet.
link |
01:00:58.200
So I think it's an interesting open question
link |
01:01:00.600
both for science and for engineering.
link |
01:01:03.600
Some of it has to be at least interrelated
link |
01:01:06.400
in the way that the interfaces of a software package
link |
01:01:10.240
have to be able to talk to one another.
link |
01:01:12.200
So the systems that represent space and time
link |
01:01:16.680
can't be totally disjoint
link |
01:01:18.320
because a lot of the things that we reason about
link |
01:01:20.760
are the relations between space and time and cause.
link |
01:01:23.040
So I put this on and I have expectations
link |
01:01:26.480
about what's gonna happen with the bottle cap
link |
01:01:28.040
on top of the bottle.
link |
01:01:29.520
And those span space and time.
link |
01:01:32.600
If the cap is over here, I get a different outcome.
link |
01:01:35.760
If the timing is different, if I put this here
link |
01:01:38.560
after I move that, then I get a different outcome
link |
01:01:41.920
that relates to causality.
link |
01:01:43.080
So obviously these mechanisms, whatever they are,
link |
01:01:47.880
can certainly communicate with each other.
link |
01:01:50.080
So I think evolution had a significant role
link |
01:01:53.200
to play in the development of this whole collage, right?
link |
01:01:57.120
How efficient do you think is evolution?
link |
01:01:59.240
Oh, it's terribly inefficient, except that.
link |
01:02:01.960
Well, can we do better?
link |
01:02:03.080
Well, let's come to that in a second.
link |
01:02:05.760
It's inefficient except that once it gets a good idea,
link |
01:02:09.440
it runs with it.
link |
01:02:10.880
So it took, I guess a billion years,
link |
01:02:15.680
roughly a billion years to evolve to a vertebrate brain plan.
link |
01:02:24.040
And once that vertebrate plan evolved,
link |
01:02:26.920
it spread everywhere.
link |
01:02:28.480
So fish have it and dogs have it and we have it.
link |
01:02:31.680
We have adaptations of it and specializations of it.
link |
01:02:34.720
And the same thing with a primate brain plan.
link |
01:02:37.160
So monkeys have it and apes have it and we have it.
link |
01:02:41.120
So there are additional innovations like color vision
link |
01:02:43.760
and those spread really rapidly.
link |
01:02:45.880
So it takes evolution a long time to get a good idea,
link |
01:02:48.840
but being anthropomorphic and not literal here,
link |
01:02:53.280
but once it has that idea, so to speak,
link |
01:02:55.600
which cashes out into one set of genes or in the genome,
link |
01:02:58.560
those genes spread very rapidly
link |
01:03:00.520
and they're like subroutines or libraries,
link |
01:03:02.640
I guess the word people might use nowadays
link |
01:03:04.560
or be more familiar with,
link |
01:03:05.640
they're libraries that can get used over and over again.
link |
01:03:08.800
So once you have the library for building something
link |
01:03:11.760
with multiple digits, you can use it for a hand,
link |
01:03:13.840
but you can also use it for a foot.
link |
01:03:15.520
You just kind of reuse the library
link |
01:03:17.400
with slightly different parameters.
link |
01:03:19.080
Evolution does a lot of that,
link |
01:03:20.640
which means that the speed over time picks up.
link |
01:03:23.480
So evolution can happen faster
link |
01:03:25.560
because you have bigger and bigger libraries.
link |
01:03:28.400
And what I think has happened in attempts
link |
01:03:32.240
at evolutionary computation is that people start
link |
01:03:35.760
with libraries that are very, very minimal,
link |
01:03:40.360
like almost nothing and then progress is slow
link |
01:03:44.280
and it's hard for someone to get a good PhD thesis
link |
01:03:46.640
out of it and they give up.
link |
01:03:48.280
If we had richer libraries to begin with,
link |
01:03:50.280
if you were evolving from systems
link |
01:03:52.640
that had innate structure to begin with,
link |
01:03:55.360
then things might speed up.
link |
01:03:56.800
Or more PhD students, if the evolutionary process is indeed
link |
01:04:00.880
in a meta way, runs away with good ideas,
link |
01:04:04.240
you need to have a lot of ideas,
link |
01:04:06.720
pool of ideas in order for it to discover one
link |
01:04:08.840
that you can run away with.
link |
01:04:10.240
And PhD students representing individual ideas as well.
link |
01:04:13.200
Yeah, I mean, you could throw a billion PhD students at it.
link |
01:04:16.240
Yeah, the monkeys at typewriters with Shakespeare, yep.
link |
01:04:20.160
Well, I mean, those aren't cumulative, right?
link |
01:04:22.080
That's just random.
link |
01:04:23.440
And part of the point that I'm making
link |
01:04:24.960
is that evolution is cumulative.
link |
01:04:26.800
So if you have a billion monkeys independently,
link |
01:04:31.160
you don't really get anywhere.
link |
01:04:32.440
But if you have a billion monkeys,
link |
01:04:33.840
and I think Dawkins made this point originally,
link |
01:04:35.720
or probably other people,
link |
01:04:36.560
Dawkins made it very nicely
link |
01:04:37.600
in either The Selfish Gene or The Blind Watchmaker.
link |
01:04:41.320
If there is some sort of fitness function
link |
01:04:44.080
that can drive you towards something,
link |
01:04:45.680
I guess that's Dawkins point.
link |
01:04:47.120
And my point, which is a little variation on that,
link |
01:04:49.440
is that if the evolution is cumulative,
link |
01:04:52.120
they're related points, then you can start going faster.
link |
01:04:55.600
Do you think something like the process of evolution
link |
01:04:57.760
is required to build intelligent systems?
link |
01:05:00.160
So if we...
link |
01:05:01.000
Not logically.
link |
01:05:01.840
So all the stuff that evolution did,
link |
01:05:04.040
a good engineer might be able to do.
link |
01:05:07.040
So for example, evolution made quadrupeds,
link |
01:05:10.560
which distribute the load across a horizontal surface.
link |
01:05:14.200
A good engineer could come up with that idea.
link |
01:05:17.000
I mean, sometimes good engineers come up with ideas
link |
01:05:18.720
by looking at biology.
link |
01:05:19.760
There's lots of ways to get your ideas.
link |
01:05:22.080
And part of what I'm suggesting
link |
01:05:23.680
is we should look at biology a lot more.
link |
01:05:26.000
We should look at the biology of thought
link |
01:05:29.280
and understanding the biology
link |
01:05:31.720
by which creatures intuitively reason about physics
link |
01:05:35.000
or other agents or like,
link |
01:05:36.240
how do dogs reason about people?
link |
01:05:37.960
Like they're actually pretty good at it.
link |
01:05:39.680
If we could understand...
link |
01:05:41.840
At my college, we joked, dognition.
link |
01:05:44.040
If we could understand dognition well,
link |
01:05:46.320
then how it was implemented,
link |
01:05:47.720
that might help us with our AI.
link |
01:05:49.800
So do you think it's possible
link |
01:05:53.800
that the kind of timescale that evolution took
link |
01:05:57.200
is the kind of timescale that will be needed
link |
01:05:58.960
to build intelligent systems?
link |
01:06:00.520
Or can we significantly accelerate that process
link |
01:06:02.960
inside a computer?
link |
01:06:05.440
I mean, I think the way that we accelerate that process
link |
01:06:07.560
is we borrow from biology.
link |
01:06:10.640
Not slavishly, but I think we look at how biology
link |
01:06:14.280
has solved problems and we say,
link |
01:06:15.640
does that inspire any engineering solutions here?
link |
01:06:18.880
And try to mimic biological systems
link |
01:06:20.720
and then therefore have a shortcut?
link |
01:06:22.400
Yeah, I mean, there's a field called biomimicry
link |
01:06:25.040
and people do that for like material science all the time.
link |
01:06:29.000
We should be doing the analog of that for AI.
link |
01:06:32.960
And the analog for that for AI
link |
01:06:34.480
is to look at cognitive science
link |
01:06:35.800
or the cognitive sciences,
link |
01:06:37.040
which is psychology, maybe neuroscience, linguistics
link |
01:06:40.320
and so forth, look to those for insight.
link |
01:06:43.480
What do you think is a good test of intelligence
link |
01:06:45.360
in your view?
link |
01:06:46.700
I don't think there's one good test.
link |
01:06:48.520
In fact, I tried to organize a movement
link |
01:06:51.840
towards something called a Turing Olympics.
link |
01:06:53.400
And my hope is that Francois is actually gonna take,
link |
01:06:56.200
Francois Chollet is gonna take over this.
link |
01:06:58.280
I think he's interested in that.
link |
01:06:59.960
I just don't have a place in my busy life at this moment.
link |
01:07:03.480
But the notion is that there'd be many tests
link |
01:07:06.440
and not just one because intelligence is multifaceted.
link |
01:07:09.520
There can't really be a single measure of it
link |
01:07:12.920
because it isn't a single thing.
link |
01:07:15.640
Like just the crudest level,
link |
01:07:17.360
the SAT has a verbal component and a math component
link |
01:07:19.880
because they're not identical.
link |
01:07:21.360
And Howard Gardner has talked about multiple intelligence,
link |
01:07:23.640
like kinesthetic intelligence
link |
01:07:25.440
and verbal intelligence and so forth.
link |
01:07:27.760
There are a lot of things that go into intelligence
link |
01:07:29.960
and people can get good at one or the other.
link |
01:07:32.560
I mean, in some sense, like every expert
link |
01:07:34.720
has developed a very specific kind of intelligence.
link |
01:07:37.240
And then there are people that are generalists.
link |
01:07:39.280
And I think of myself as a generalist
link |
01:07:41.760
with respect to cognitive science,
link |
01:07:43.400
which doesn't mean I know anything about quantum mechanics,
link |
01:07:45.640
but I know a lot about the different facets of the mind.
link |
01:07:49.240
And there's a kind of intelligence
link |
01:07:51.360
to thinking about intelligence.
link |
01:07:52.680
I like to think that I have some of that,
link |
01:07:54.760
but social intelligence, I'm just okay.
link |
01:07:57.480
There are people that are much better at that than I am.
link |
01:08:00.160
Sure, but what would be really impressive to you?
link |
01:08:04.120
I think the idea of a Turing Olympics is really interesting,
link |
01:08:07.080
especially if somebody like Francois is running it.
link |
01:08:09.680
But to you in general, not as a benchmark,
link |
01:08:14.400
but if you saw an AI system being able to accomplish something
link |
01:08:18.480
that would impress the heck out of you,
link |
01:08:21.720
what would that thing be?
link |
01:08:22.720
Would it be natural language conversation?
link |
01:08:24.680
For me personally, I would like to see a kind of comprehension
link |
01:08:29.520
that relates to what you just said.
link |
01:08:30.680
So I wrote a piece in the New Yorker in I think 2015,
link |
01:08:34.960
right after Eugene Goostman, which was a software package,
link |
01:08:39.960
won a version of the Turing test.
link |
01:08:42.880
And the way that it did this is it'd be,
link |
01:08:45.160
well, the way you win the Turing test,
link |
01:08:46.840
so called win it, is you fool a person
link |
01:08:50.680
into thinking that a machine is a person.
link |
01:08:54.400
It's that you're evasive, you pretend to have limitations
link |
01:08:57.960
so you don't have to answer certain questions and so forth.
link |
01:09:00.560
So this particular system
link |
01:09:02.680
pretended to be a 13 year old boy from Odessa
link |
01:09:05.280
who didn't understand English and was kind of sarcastic
link |
01:09:08.040
and wouldn't answer your questions and so forth.
link |
01:09:09.680
And so judges got fooled into thinking briefly
link |
01:09:12.480
with a very little exposure to the 13 year old boy
link |
01:09:14.680
and it ducked all the questions that Turing was actually
link |
01:09:17.120
interested in, which is like,
link |
01:09:17.960
how do you make the machine actually intelligent?
link |
01:09:20.440
So that test itself is not that good.
link |
01:09:22.120
And so in the New Yorker, I proposed an alternative, I guess.
link |
01:09:26.080
And the one that I proposed there was a comprehension test.
link |
01:09:30.000
And I must like Breaking Bad,
link |
01:09:31.080
because I've already given you one Breaking Bad example
link |
01:09:32.920
and in that article I have one as well,
link |
01:09:35.640
which was something like if Walter,
link |
01:09:37.640
you should be able to watch an episode of Breaking Bad
link |
01:09:40.320
or maybe you have to watch the whole series
link |
01:09:41.680
to be able to answer the question and say,
link |
01:09:43.520
if Walter White took a hit out on Jesse,
link |
01:09:45.600
why did he do that?
link |
01:09:47.160
So if you could answer kind of arbitrary questions
link |
01:09:49.360
about characters motivations,
link |
01:09:51.240
I would be really impressed with that.
link |
01:09:52.920
I mean, if you built software to do that,
link |
01:09:55.360
that could watch a film, or there are different versions.
link |
01:09:58.480
And so ultimately I wrote this up with Praveen Paritosh
link |
01:10:01.920
in a special issue of AI magazine
link |
01:10:03.600
that basically was about the Turing Olympics.
link |
01:10:05.760
There were like 14 tests proposed.
link |
01:10:07.720
The one that I was pushing was a comprehension challenge
link |
01:10:10.080
and Praveen who's at Google was trying to figure out
link |
01:10:12.360
like how we would actually run it.
link |
01:10:13.440
And so we wrote a paper together.
link |
01:10:15.320
And you could have a text version too,
link |
01:10:17.280
or you could have an auditory podcast version,
link |
01:10:19.640
you could have a written version.
link |
01:10:20.560
But the point is that you win at this test
link |
01:10:23.720
if you can do let's say human level or better than humans
link |
01:10:26.960
at answering kind of arbitrary questions.
link |
01:10:29.520
You know, why did this person pick up the stone?
link |
01:10:31.600
What were they thinking when they picked up the stone?
link |
01:10:34.080
Were they trying to knock down glass?
link |
01:10:36.160
And I mean, ideally these wouldn't be multiple choice either
link |
01:10:38.640
because multiple choice is pretty easily gamed.
link |
01:10:41.040
So if you could have relatively open ended questions
link |
01:10:44.120
and you can answer why people are doing this stuff,
link |
01:10:47.440
I would be very impressed.
link |
01:10:48.280
And of course humans can do this, right?
link |
01:10:50.160
If you watch a well constructed movie
link |
01:10:52.920
and somebody picks up a rock,
link |
01:10:55.600
everybody watching the movie knows
link |
01:10:57.360
why they picked up the rock, right?
link |
01:10:59.520
They all know, oh my gosh, he's gonna hit this character
link |
01:11:03.000
or whatever.
link |
01:11:03.840
We have an example in the book about
link |
01:11:06.280
when a whole bunch of people say, I am Spartacus,
link |
01:11:08.720
you know this famous scene?
link |
01:11:11.800
The viewers understand, first of all,
link |
01:11:14.200
that everybody or everybody minus one has to be lying.
link |
01:11:19.080
They can't all be Spartacus.
link |
01:11:20.400
We have enough common sense knowledge
link |
01:11:21.840
to know they couldn't all have the same name.
link |
01:11:24.160
We know that they're lying
link |
01:11:25.400
and we can infer why they're lying, right?
link |
01:11:27.160
They're lying to protect someone
link |
01:11:28.520
and to protect things they believe in.
link |
01:11:30.360
You get a machine that can do that.
link |
01:11:32.400
They can say, this is why these guys all got up
link |
01:11:35.160
and said, I am Spartacus.
link |
01:11:37.000
I will sit down and say AI has really achieved a lot.
link |
01:11:40.560
Thank you.
link |
01:11:41.400
Without cheating any part of the system.
link |
01:11:43.880
Yeah, I mean, if you do it,
link |
01:11:45.640
there are lots of ways you can cheat.
link |
01:11:46.600
Like you could build a Spartacus machine
link |
01:11:48.840
that works on that film.
link |
01:11:50.120
Like that's not what I'm talking about.
link |
01:11:51.160
I'm talking about, you can do this
link |
01:11:52.880
with essentially arbitrary films from a large size.
link |
01:11:55.720
Even beyond films because it's possible
link |
01:11:57.680
such a system would discover
link |
01:11:59.000
that the number of narrative arcs in film
link |
01:12:02.600
is like limited to like 19 or 30.
link |
01:12:04.040
There's a famous thing about the classic seven plots
link |
01:12:06.440
or whatever.
link |
01:12:07.280
I don't care if you want to build in the system,
link |
01:12:09.120
boy meets girl, boy loses girl, boy finds girl.
link |
01:12:11.680
That's fine.
link |
01:12:12.520
I don't mind having some head start knowledge.
link |
01:12:14.560
Okay.
link |
01:12:15.400
Good.
link |
01:12:16.240
I mean, you could build it in natively
link |
01:12:18.000
or you could have your system watch a lot of films again.
link |
01:12:20.480
If you can do this at all,
link |
01:12:22.400
but with a wide range of films,
link |
01:12:23.760
not just one film and one genre.
link |
01:12:27.320
But even if you could do it for all Westerns,
link |
01:12:28.880
I'd be reasonably impressed.
link |
01:12:30.320
Yeah.
link |
01:12:31.160
So in terms of being impressed,
link |
01:12:34.120
just for the fun of it,
link |
01:12:35.840
because you've put so many interesting ideas out there
link |
01:12:38.440
in your book,
link |
01:12:40.240
a challenge in the community for further steps,
link |
01:12:43.680
is it possible on the deep learning front
link |
01:12:46.720
that you're wrong about its limitations,
link |
01:12:50.240
that deep learning will unlock,
link |
01:12:52.280
Yann LeCun next year will publish a paper
link |
01:12:54.480
that achieves this comprehension.
link |
01:12:56.880
So do you think that way often as a scientist,
link |
01:13:00.280
do you consider that your intuition
link |
01:13:03.040
that deep learning could actually run away with it?
link |
01:13:06.680
I'm more worried about rebranding
link |
01:13:09.720
as a kind of political thing.
link |
01:13:11.320
So I mean, what's gonna happen, I think,
link |
01:13:14.040
is that deep learning is gonna start to encompass
link |
01:13:16.400
symbol manipulation.
link |
01:13:17.360
So I think Hinton's just wrong.
link |
01:13:19.200
Hinton says we don't want hybrids.
link |
01:13:20.840
I think people will work towards hybrids
link |
01:13:22.360
and they will relabel their hybrids as deep learning.
link |
01:13:24.680
We've already seen some of that.
link |
01:13:25.840
So AlphaGo is often described as a deep learning system,
link |
01:13:29.560
but it's more correctly described as a system
link |
01:13:31.720
that has deep learning, but also Monte Carlo Tree Search,
link |
01:13:33.920
which is a classical AI technique.
link |
01:13:35.640
And people will start to blur the lines
link |
01:13:37.560
in the way that IBM blurred Watson.
link |
01:13:39.840
First Watson meant this particular system
link |
01:13:41.600
and then it was just anything that IBM built
link |
01:13:43.160
in their cognitive division.
link |
01:13:44.200
But purely, let me ask for sure.
link |
01:13:45.800
That's a branding question and that's a giant mess.
link |
01:13:49.520
I mean purely a single neural network
link |
01:13:52.000
being able to accomplish reasoning and comprehension.
link |
01:13:54.080
I don't stay up at night
link |
01:13:55.360
worrying that that's gonna happen.
link |
01:13:57.840
And I'll just give you two examples.
link |
01:13:59.280
One is a guy at DeepMind
link |
01:14:01.680
thought he had finally outfoxed me at Xergy Lord,
link |
01:14:05.560
I think is his Twitter handle.
link |
01:14:08.040
And he specifically made an example.
link |
01:14:10.600
Marcus said that such and such, he fed it into GPT-2,
link |
01:14:14.920
which is the AI system that is so smart
link |
01:14:17.680
that OpenAI couldn't release it
link |
01:14:19.080
because it would destroy the world, right?
link |
01:14:21.200
You remember that a few months ago.
link |
01:14:22.960
So he feeds it into GPT-2 and my example was something
link |
01:14:27.720
like a rose is a rose, a tulip is a tulip,
link |
01:14:30.360
a lily is a blank.
link |
01:14:31.360
And he got it to actually do that,
link |
01:14:32.880
which was a little bit impressive.
link |
01:14:34.040
And I wrote back and I said, that's impressive,
link |
01:14:35.400
but can I ask you a few questions?
link |
01:14:37.760
I said, was that just one example?
link |
01:14:40.080
Can it do it generally?
link |
01:14:41.680
And can it do it with novel words?
link |
01:14:43.280
Which is part of what I was talking about in 1998
link |
01:14:45.360
when I first raised the example.
link |
01:14:46.760
So a DAX is a DAX, right?
link |
01:14:50.360
And he sheepishly wrote back about 20 minutes later
link |
01:14:53.080
and the answer was, well, it had some problems with those.
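For listeners who want to reproduce this kind of probe themselves, a minimal sketch follows, assuming the publicly released GPT-2 checkpoint via the Hugging Face transformers library; the exact setup used in the exchange above isn't specified, so this is only an illustrative guess at it.

```python
from transformers import pipeline

# Small publicly released GPT-2 model; larger checkpoints can be swapped in.
generator = pipeline("text-generation", model="gpt2")

prompts = [
    # Familiar words: a pattern the model has likely seen many times in training text.
    "A rose is a rose. A tulip is a tulip. A lily is a",
    # Novel words: the 1998-style test of whether the abstract identity rule
    # generalizes to items the network has never seen before.
    "A blicket is a blicket. A wug is a wug. A dax is a",
]

for prompt in prompts:
    outputs = generator(prompt, max_new_tokens=5, num_return_sequences=3, do_sample=True)
    print(prompt)
    for out in outputs:
        # Show only the continuation, to make it easy to see whether the
        # "an X is an X" pattern was applied to the novel word.
        print("  ->", out["generated_text"][len(prompt):].strip())
```

Running the familiar and novel prompts side by side is the whole test: success on the first and failure on the second is exactly the gap between memorized correlations and an abstract rule.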
link |
01:14:55.360
So I made some predictions 21 years ago
link |
01:14:58.840
that still hold in the world of computer science.
link |
01:15:01.960
That's amazing, right?
link |
01:15:02.800
Because there's a thousand or a million times more memory
link |
01:15:06.560
and computers can
link |
01:15:10.080
do a million times more operations per second,
link |
01:15:13.240
spread across a cluster and there's been advances
link |
01:15:16.880
in replacing sigmoids with other functions and so forth.
link |
01:15:21.880
There's all kinds of advances,
link |
01:15:25.400
but the fundamental architecture hasn't changed
link |
01:15:27.120
and the fundamental limit hasn't changed.
link |
01:15:28.600
And what I said then is kind of still true.
link |
01:15:30.880
And then here's a second example.
link |
01:15:32.240
I recently had a piece in Wired that's adapted from the book
link |
01:15:35.240
and the book went to press before GPT-2 came out.
link |
01:15:40.120
But we describe this children's story
link |
01:15:42.280
and all the inferences that you make in this story
link |
01:15:45.600
about a boy finding a lost wallet.
link |
01:15:48.240
And for fun in the Wired piece, we ran it through GPT-2,
link |
01:15:52.840
at something called TalkToTransformer.com
link |
01:15:55.440
and your viewers can try this experiment themselves,
link |
01:15:58.160
go to the Wired piece that has the link and it has the story.
link |
01:16:01.080
And the system made perfectly fluent text
link |
01:16:04.280
that was totally inconsistent
link |
01:16:06.400
with the conceptual underpinnings of the story, right?
link |
01:16:10.240
And this is what, again, I predicted in 1998
link |
01:16:13.200
and for that matter, Chomsky and Miller
link |
01:16:14.680
made the same prediction in 1963.
link |
01:16:16.640
I was just updating their claim for slightly newer technology.
link |
01:16:19.400
So those particular architectures
link |
01:16:22.600
that don't have any built in knowledge,
link |
01:16:24.800
they're basically just a bunch of layers
link |
01:16:27.000
doing correlational stuff,
link |
01:16:28.920
they're not gonna solve these problems.
link |
01:16:31.240
So 20 years ago, you said the emperor has no clothes.
link |
01:16:34.520
Today, the emperor still has no clothes.
link |
01:16:36.840
The lighting's better though.
link |
01:16:38.040
The lighting is better.
link |
01:16:39.040
And I think you yourself are also, I mean.
link |
01:16:42.280
And we found out some things to do with naked emperors.
link |
01:16:44.360
I mean, it's not like stuff is worthless.
link |
01:16:46.520
I mean, they're not really naked.
link |
01:16:48.360
It's more like they're in their briefs
link |
01:16:49.680
and everybody thinks that.
link |
01:16:50.920
And so like, I mean, they are great at speech recognition.
link |
01:16:54.440
But the problems that I said were hard.
link |
01:16:56.520
I didn't literally say the emperor has no clothes.
link |
01:16:58.320
I said, this is a set of problems
link |
01:17:00.200
that humans are really good at.
link |
01:17:01.880
And it wasn't couched as AI,
link |
01:17:03.200
it was couched as cognitive science.
link |
01:17:04.400
But I said, if you wanna build a neural model
link |
01:17:07.800
of how humans do certain class of things,
link |
01:17:10.440
you're gonna have to change the architecture.
link |
01:17:12.040
And I stand by those claims.
link |
01:17:13.720
So, and I think people should understand
link |
01:17:16.840
you're quite entertaining in your cynicism,
link |
01:17:19.120
but you're also very optimistic and a dreamer
link |
01:17:22.320
about the future of AI too.
link |
01:17:24.000
So you're both, it's just.
link |
01:17:25.440
There's a famous saying about
link |
01:17:27.920
people overselling technology in the short run
link |
01:17:30.760
and underselling it in the long run.
link |
01:17:34.200
And so I actually end the book,
link |
01:17:37.240
Ernie Davis and I end our book with an optimistic chapter,
link |
01:17:40.600
which kind of killed Ernie
link |
01:17:41.760
because he's even more pessimistic than I am.
link |
01:17:44.440
He describes me as a contrarian and himself as a pessimist.
link |
01:17:47.640
But I persuaded him that we should end the book
link |
01:17:49.880
with a look at what would happen
link |
01:17:52.680
if AI really did incorporate, for example,
link |
01:17:55.400
the common sense reasoning and the nativism
link |
01:17:57.320
and so forth, the things that we counseled for.
link |
01:17:59.680
And we wrote it and it's an optimistic chapter
link |
01:18:02.160
that AI suitably reconstructed so that we could trust it,
link |
01:18:05.920
which we can't now, could really be world changing.
link |
01:18:09.520
So on that point, if you look at the future
link |
01:18:12.160
trajectories of AI, people have worries
link |
01:18:15.400
about negative effects of AI,
link |
01:18:17.160
whether it's at the large existential scale
link |
01:18:21.040
or smaller short term scale of negative impact on society.
link |
01:18:25.240
So you write about trustworthy AI,
link |
01:18:27.160
how can we build AI systems that align with our values
link |
01:18:31.480
that make for a better world
link |
01:18:32.800
that we can interact with that we can trust?
link |
01:18:35.000
The first thing we have to do
link |
01:18:35.840
is to replace deep learning with deep understanding.
link |
01:18:38.240
So you can't have alignment with a system
link |
01:18:42.440
that traffics only in correlations
link |
01:18:44.600
and doesn't understand concepts like bottles or harm.
link |
01:18:47.880
So, you know, Asimov talked about these famous laws
link |
01:18:51.320
and the first one was first do no harm.
link |
01:18:54.040
And you can quibble about the details of Asimov's laws,
link |
01:18:56.880
but we have to, if we're gonna build real robots
link |
01:18:58.800
in the real world, have something like that.
link |
01:19:00.560
That means we have to program in a notion
link |
01:19:02.520
that's at least something like harm.
link |
01:19:04.240
That means we have to have these more abstract ideas
link |
01:19:06.600
that deep learning is not particularly good at.
link |
01:19:08.480
They have to be in the mix somewhere.
link |
01:19:10.600
And you could do statistical analysis
link |
01:19:12.360
about probabilities of given harms or whatever,
link |
01:19:14.360
but you have to know what a harm is
link |
01:19:15.800
in the same way that you have to understand
link |
01:19:17.400
that a bottle isn't just a collection of pixels.
link |
01:19:20.640
And also be able to, you're implying
link |
01:19:24.000
that you need to also be able to communicate that to humans.
link |
01:19:26.840
So the AI systems would be able to prove to humans
link |
01:19:31.600
that they understand that they know what harm means.
link |
01:19:35.440
I might run it in the reverse direction,
link |
01:19:37.360
but roughly speaking, I agree with you.
link |
01:19:38.600
So we probably need to have committees of wise people,
link |
01:19:43.360
ethicists and so forth, think about what these rules
link |
01:19:46.800
ought to be, and we shouldn't just leave it
link |
01:19:48.320
to software engineers.
link |
01:19:49.560
It shouldn't just be software engineers
link |
01:19:51.600
and it shouldn't just be people
link |
01:19:53.880
who own large mega corporations that are good at technology.
link |
01:19:58.280
Ethicists and so forth should be involved,
link |
01:20:00.240
but there should be some assembly of wise people
link |
01:20:04.640
as I was putting it that tries to figure out
link |
01:20:07.200
what the rules ought to be.
link |
01:20:08.680
And those have to get translated into code.
link |
01:20:12.440
You can argue whether it's code or neural networks or something.
link |
01:20:15.440
They have to be translated into something
link |
01:20:18.640
that machines can work with.
link |
01:20:20.000
And that means there has to be a way
link |
01:20:21.920
of working the translation.
link |
01:20:23.400
And right now we don't.
link |
01:20:24.480
We don't have a way.
link |
01:20:25.360
So let's say you and I were the committee
link |
01:20:27.080
and we decide that Asimov's first law is actually right.
link |
01:20:29.880
And let's say it's not just two white guys,
link |
01:20:31.640
which would be kind of unfortunate
link |
01:20:32.880
and that we have a broad group.
link |
01:20:34.040
And so we've represented a sample of the world
link |
01:20:36.320
or however we want to do this.
link |
01:20:37.560
And the committee decides eventually,
link |
01:20:40.480
okay, Asimov's first law is actually pretty good.
link |
01:20:42.840
There are these exceptions to it.
link |
01:20:44.080
We want to program in these exceptions,
link |
01:20:46.080
but let's start with just the first one
link |
01:20:47.520
and then we'll get to the exceptions.
link |
01:20:48.920
First one is first do no harm.
link |
01:20:50.680
Well, somebody has to now actually turn that
link |
01:20:53.320
into a computer program or a neural network or something.
link |
01:20:56.240
And one way of taking the whole book,
link |
01:20:58.800
the whole argument that I'm making
link |
01:21:00.320
is that we just don't know how to do that yet
link |
01:21:02.520
and we're fooling ourselves if we think
link |
01:21:04.080
that we can build trustworthy AI.
link |
01:21:05.880
If we can't even specify it in any kind of way,
link |
01:21:09.560
we can't do it in Python
link |
01:21:10.680
and we can't do it in TensorFlow,
link |
01:21:13.160
we're fooling ourselves in thinking
link |
01:21:14.440
that we can make trustworthy AI
link |
01:21:15.840
if we can't translate harm into something
link |
01:21:18.800
that we can execute.
link |
01:21:19.960
And if we can't, then we should be thinking really hard,
link |
01:21:22.880
how could we ever do such a thing?
link |
01:21:24.680
Because if we're going to use AI
link |
01:21:26.560
in the ways that we want to use it, to do job interviews
link |
01:21:29.240
or to do surveillance,
link |
01:21:31.120
not that I personally want to do that or whatever,
link |
01:21:32.480
I mean, if we're going to use AI
link |
01:21:33.800
in ways that have practical impact on people's lives
link |
01:21:36.240
or medicine, it's got to be able to understand stuff like that.
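To make that gap concrete, here is a deliberately hollow sketch of the shape such a translation would have to take. Every name in it is hypothetical, and the unimplemented function is exactly the missing piece being described: today we cannot honestly fill it in with either handwritten code or a trained network.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str       # e.g. "hand the patient the open bottle of pills"
    affected_people: list  # who could plausibly be affected

def estimate_harm(action: ProposedAction) -> float:
    """Return an estimated, probability-weighted harm in [0, 1].

    This is the step the argument above says we do not know how to write:
    it requires machine-usable concepts like 'person', 'injury', and 'bottle',
    not just correlations over pixels or tokens.
    """
    raise NotImplementedError("No current system can compute this reliably.")

def first_law_allows(action: ProposedAction, threshold: float = 0.01) -> bool:
    # The committee's rule ("first, do no harm") reduced to a one-line check,
    # which is only as trustworthy as the harm estimate underneath it.
    return estimate_harm(action) < threshold
```

The one-line rule at the bottom is easy; everything the rule depends on is the open research problem.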
link |
01:21:41.240
So one of the things your book highlights
link |
01:21:42.880
is that a lot of people in the deep learning community,
link |
01:21:47.440
but also the general public, politicians,
link |
01:21:50.240
just people in all general groups and walks of life
link |
01:21:53.240
have different levels of misunderstanding of AI.
link |
01:21:57.360
So when you talk about committees,
link |
01:21:59.480
what's your advice to our society?
link |
01:22:05.640
How do we grow?
link |
01:22:06.480
How do we learn about AI such that
link |
01:22:09.080
such committees could emerge
link |
01:22:10.840
where large groups of people could have a productive discourse
link |
01:22:14.560
about how to build successful AI systems?
link |
01:22:17.840
Part of the reason we wrote the book
link |
01:22:19.680
was to try to inform those committees.
link |
01:22:22.080
So part of the reason we wrote the book
link |
01:22:23.560
was to inspire a future generation of students
link |
01:22:25.680
to solve what we think are the important problems.
link |
01:22:27.880
So a lot of the book is trying to pinpoint
link |
01:22:29.920
what we think are the hard problems
link |
01:22:31.280
where we think effort would most be rewarded.
link |
01:22:33.920
And part of it is to try to train people
link |
01:22:37.840
who talk about AI, but aren't experts in the field
link |
01:22:41.040
to understand what's realistic and what's not.
link |
01:22:43.560
One of my favorite parts in the book
link |
01:22:44.720
is the six questions you should ask.
link |
01:22:47.040
Anytime you read a media account,
link |
01:22:48.440
so number one is if somebody talks about something,
link |
01:22:51.160
look for the demo.
link |
01:22:52.000
If there's no demo, don't believe it.
link |
01:22:54.200
Like the demo that you can try.
link |
01:22:55.360
If you can't try it at home,
link |
01:22:56.520
maybe it doesn't really work that well yet.
link |
01:22:58.400
So, we don't have this example in the book,
link |
01:23:00.640
but if Sundar Pichai says we have this thing
link |
01:23:04.160
that allows it to sound like human beings in conversation,
link |
01:23:08.440
you should ask, can I try it?
link |
01:23:10.400
And you should ask how general it is.
link |
01:23:11.880
And it turns out at that time,
link |
01:23:13.080
I'm alluding to Google Duplex when it was announced,
link |
01:23:15.440
it only worked on calling hairdressers,
link |
01:23:18.200
restaurants, and finding opening hours.
link |
01:23:20.000
That's not very general.
link |
01:23:20.840
That's narrow AI.
link |
01:23:22.240
And I'm not gonna ask your thoughts about Sophia, but yeah.
link |
01:23:25.400
I understand that's a really good question to ask
link |
01:23:28.040
of any kind of hyped up idea.
link |
01:23:30.240
So Sophia has very good material written for her,
link |
01:23:32.600
but she doesn't understand the things that she's saying.
link |
01:23:35.400
So a while ago, you've written a book
link |
01:23:38.240
on the science of learning, which I think is fascinating,
link |
01:23:40.560
with the case study of learning to play guitar.
link |
01:23:43.520
That's right.
link |
01:23:44.360
Ah, Guitar Zero.
link |
01:23:45.200
I love guitar myself, I've been playing my whole life.
link |
01:23:47.360
So let me ask a very important question.
link |
01:23:50.240
What is your favorite song, rock song to listen to
link |
01:23:54.120
or try to play?
link |
01:23:56.280
Well, those would be different,
link |
01:23:57.120
but I'll say that my favorite rock song to listen to
link |
01:23:59.640
is probably All Along the Watchtower,
link |
01:24:01.080
the Jimi Hendrix version.
link |
01:24:02.000
The Jimi Hendrix version.
link |
01:24:03.000
It just feels magic to me.
link |
01:24:04.880
I've actually recently learned that I love that song.
link |
01:24:07.040
I've been trying to put it on YouTube myself singing.
link |
01:24:09.360
Singing is the scary part.
link |
01:24:11.280
If you could party with a rock star for a weekend,
link |
01:24:13.360
living or dead, who would you choose?
link |
01:24:17.760
And pick their mind,
link |
01:24:18.640
it's not necessarily about the party.
link |
01:24:21.160
Thanks for the clarification. I guess John Lennon
link |
01:24:25.640
is such an intriguing person and I think a troubled person,
link |
01:24:29.640
but an intriguing one.
link |
01:24:31.240
So beautiful.
link |
01:24:32.480
Well, Imagine is one of my favorite songs.
link |
01:24:35.480
Also one of my favorite songs.
link |
01:24:37.120
That's a beautiful way to end it.
link |
01:24:38.320
Gary, thank you so much for talking to me.
link |
01:24:39.800
Thanks so much for having me.