
Peter Norvig: Artificial Intelligence: A Modern Approach | Lex Fridman Podcast #42



link |
00:00:00.000
The following is a conversation with Peter Norvig.
link |
00:00:02.960
He's the director of research at Google and the coauthor
link |
00:00:06.000
with Stuart Russell of the book, Artificial Intelligence
link |
00:00:09.360
A Modern Approach, that educated and inspired
link |
00:00:12.840
a whole generation of researchers, including myself,
link |
00:00:15.680
to get into the field of artificial intelligence.
link |
00:00:18.880
This is the Artificial Intelligence Podcast.
link |
00:00:21.760
If you enjoy it, subscribe on YouTube,
link |
00:00:24.160
give it five stars on iTunes, support on Patreon,
link |
00:00:27.200
or simply connect with me on Twitter.
link |
00:00:29.080
Lex Fridman, spelled F R I D M A N.
link |
00:00:32.800
And now, here's my conversation with Peter Norvig.
link |
00:00:37.720
Most researchers in the AI community, including myself,
link |
00:00:40.800
own all three editions, red, green, and blue,
link |
00:00:43.080
of Artificial Intelligence: A Modern Approach.
link |
00:00:46.480
It's a field defining textbook as many people are aware
link |
00:00:49.360
that you wrote with Stuart Russell.
link |
00:00:52.120
How has the book changed and how have you changed
link |
00:00:55.320
in relation to it from the first edition to the second
link |
00:00:57.880
to the third and now fourth edition as you work on it?
link |
00:01:00.840
Yeah, so it's been a lot of years, a lot of changes.
link |
00:01:04.320
One of the things changing from the first to maybe the second
link |
00:01:07.840
or third was just the rise of computing power, right?
link |
00:01:13.000
So I think in the first edition, we said,
link |
00:01:17.800
here's propositional logic, but that only goes so far
link |
00:01:22.560
because pretty soon you have millions of short little
link |
00:01:27.560
propositional expressions and they couldn't possibly
link |
00:01:29.560
fit in memory, so we're gonna use first order logic
link |
00:01:33.200
that's more concise.
link |
00:01:35.720
And then we quickly realized, oh, propositional logic
link |
00:01:39.360
is pretty nice because there are really fast
link |
00:01:42.360
SAT solvers and other things and look,
link |
00:01:44.840
there's only millions of expressions
link |
00:01:46.360
and that fits easily into memory
link |
00:01:48.280
or maybe even billions fit into memory now.
link |
00:01:51.200
So that was a change of the type of technology we needed
link |
00:01:54.560
just because the hardware expanded.
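[A minimal sketch of the satisfiability search that the fast SAT solvers mentioned above are built around: a DPLL-style backtracking search in Python. This is a toy only (real solvers add unit propagation, clause learning, and heuristics), and the three-clause example formula is invented for illustration.]

    # Toy DPLL-style SAT search: a formula is a list of clauses; each clause is a
    # set of integer literals (positive = variable, negative = negated variable).
    def dpll(clauses, assignment=None):
        assignment = dict(assignment or {})
        simplified = []
        for clause in clauses:
            if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
                continue                       # clause already satisfied
            remaining = {lit for lit in clause if abs(lit) not in assignment}
            if not remaining:
                return None                    # clause falsified: backtrack
            simplified.append(remaining)
        if not simplified:
            return assignment                  # every clause satisfied
        var = abs(next(iter(simplified[0])))   # branch on an unassigned variable
        for value in (True, False):
            result = dpll(simplified, {**assignment, var: value})
            if result is not None:
                return result
        return None

    # (x1 or x2) and (not x1 or x3) and (not x3)
    print(dpll([{1, 2}, {-1, 3}, {-3}]))       # prints one satisfying assignment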
link |
00:01:56.720
Even to the second edition, the resource constraints
link |
00:01:59.120
were loosened significantly for the second.
link |
00:02:01.880
And that was the early 2000s second edition.
link |
00:02:04.880
Right, so 95 was the first and then 2000, 2001 or so.
link |
00:02:10.520
And then moving on from there,
link |
00:02:12.280
I think we're starting to see that again with the GPUs
link |
00:02:17.040
and then more specific type of machinery like the TPUs
link |
00:02:21.800
and we're seeing custom ASICs and so on for deep learning.
link |
00:02:26.280
So we're seeing another advance in terms of the hardware.
link |
00:02:30.520
Then I think another thing that we especially noticed
link |
00:02:33.640
this time around is in all three of the first editions,
link |
00:02:37.160
we kind of said, well, we're gonna define AI
link |
00:02:40.160
as maximizing expected utility
link |
00:02:43.000
and you tell me your utility function
link |
00:02:45.520
and now we've got 27 chapters
link |
00:02:48.040
with the cool techniques for how to optimize that.
link |
00:02:51.800
I think in this edition, we're saying more, you know what?
link |
00:02:55.280
Maybe that optimization part is the easy part
link |
00:02:58.120
and the hard part is deciding what is my utility function?
link |
00:03:01.600
What do I want?
link |
00:03:03.000
And if I'm a collection of agents or a society,
link |
00:03:06.320
what do we want as a whole?
link |
00:03:08.360
So you touch that topic in this edition,
link |
00:03:10.080
you get a little bit more into utility.
link |
00:03:11.920
Yeah, yeah.
link |
00:03:12.760
That's really interesting.
link |
00:03:13.600
On a technical level, we're almost pushing the philosophical.
link |
00:03:17.480
I guess it is philosophical, right?
link |
00:03:19.280
So we've always had a philosophy chapter,
link |
00:03:21.560
which I was glad that we were supporting.
link |
00:03:27.280
And now it's less kind of the Chinese room type argument
link |
00:03:32.920
and more of these ethical and societal type issues.
link |
00:03:37.480
So we get into the issues of fairness and bias
link |
00:03:41.840
and just the issue of aggregating utilities.
link |
00:03:45.880
So how do you encode human values into a utility function?
link |
00:03:49.760
Is this something that you can do purely through data
link |
00:03:53.480
in a learned way or is there some systematic?
link |
00:03:56.800
Obviously, there's no good answers yet.
link |
00:03:58.520
There's just beginnings to this,
link |
00:04:01.520
to even opening the door to these questions.
link |
00:04:02.920
So there is no one answer.
link |
00:04:04.280
Yes, there are techniques to try to learn that.
link |
00:04:07.480
So we talk about inverse reinforcement learning, right?
link |
00:04:10.760
So reinforcement learning, you take some actions,
link |
00:04:14.080
you get some rewards,
link |
00:04:15.400
and you figure out what actions you should take.
link |
00:04:18.000
And inverse reinforcement learning,
link |
00:04:20.160
you observe somebody taking actions
link |
00:04:23.000
and you figure out,
link |
00:04:24.520
well, this must be what they were trying to do.
link |
00:04:27.200
If they did this action, it must be because they wanted it.
link |
00:04:30.360
Of course, there's restrictions to that, right?
link |
00:04:32.960
So lots of people take actions that are self destructive
link |
00:04:37.120
or they're suboptimal in a certain way.
link |
00:04:39.160
So you don't want to learn that.
link |
00:04:40.640
You want to somehow learn the perfect actions
link |
00:04:44.800
rather than the ones they actually take.
link |
00:04:46.480
So that's a challenge for that field.
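[To make the inverse reinforcement learning idea above concrete, here is a toy Python sketch: observe an agent's choices, then score candidate reward functions by how well a softmax-rational (imperfect) agent with that reward would explain them. The actions, reward numbers, and hypothesis names are all made up for illustration.]

    import math

    actions = ["walk", "bike", "drive"]
    observed_choices = ["bike", "bike", "walk", "bike"]     # what we saw the agent do

    # Candidate reward functions: hypothesized reward per action.
    candidate_rewards = {
        "values_speed":    {"walk": 0.0, "bike": 1.0, "drive": 2.0},
        "values_exercise": {"walk": 1.5, "bike": 2.0, "drive": 0.0},
    }

    def choice_prob(rewards, choice, beta=1.0):
        # Softmax ("Boltzmann-rational") choice model: allows for suboptimal actions.
        z = sum(math.exp(beta * rewards[a]) for a in actions)
        return math.exp(beta * rewards[choice]) / z

    def log_likelihood(rewards):
        return sum(math.log(choice_prob(rewards, c)) for c in observed_choices)

    best = max(candidate_rewards, key=lambda name: log_likelihood(candidate_rewards[name]))
    print(best)   # -> values_exercise: the reward hypothesis that best explains the behavior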
link |
00:04:51.320
Then another big part of it is just kind of theoretical
link |
00:04:55.760
of saying what can we accomplish?
link |
00:04:58.680
And so you look at like this work on the programs
link |
00:05:04.440
to predict recidivism and decide who should get parole
link |
00:05:10.320
or who should get bail or whatever.
link |
00:05:12.840
And how are you gonna evaluate that?
link |
00:05:14.600
And one of the big issues is fairness
link |
00:05:17.520
across protected classes,
link |
00:05:19.600
protected classes being things like sex and race and so on.
link |
00:05:24.600
And so two things you want is you want to say,
link |
00:05:28.480
well, if I get a score of say a six out of 10,
link |
00:05:32.640
then I want that to mean the same,
link |
00:05:34.960
no matter what race I am, right?
link |
00:05:37.640
So I want to have a 60% chance of reoffending regardless.
link |
00:05:42.640
Regardless, and one of the makers of a commercial program
link |
00:05:48.120
to do that says, that's what we're trying to optimize.
link |
00:05:50.080
And look, we achieved that.
link |
00:05:51.320
We've reached that kind of balance.
link |
00:05:56.160
And then on the other side, you also want to say,
link |
00:05:59.720
well, if it makes mistakes,
link |
00:06:01.880
I want that to affect both sides of the protected class
link |
00:06:05.840
equally and it turns out they don't do that, right?
link |
00:06:09.040
So they're twice as likely to make a mistake
link |
00:06:12.200
that would harm a black person over a white person.
link |
00:06:14.840
So that seems unfair.
link |
00:06:16.520
So you'd like to say, well,
link |
00:06:17.360
I want to achieve both those goals.
link |
00:06:19.640
And then turns out you do the analysis
link |
00:06:21.400
and it's theoretically impossible to achieve both those goals.
link |
00:06:24.160
So you have to trade them off one against the other.
link |
00:06:27.160
So that analysis is really helpful
link |
00:06:29.080
to know what you can aim for and how much you can get,
link |
00:06:32.320
that you can't have everything.
link |
00:06:33.960
But the analysis certainly can't tell you
link |
00:06:35.520
where should we make that trade off point.
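[A small made-up numeric illustration of the impossibility being described: when two groups have different base rates, a score that is equally calibrated in both groups (a "high risk" flag means the same 60% chance in each) cannot also equalize both error rates. All numbers below are invented for illustration.]

    def error_rates(n, reoffenders, flagged, ppv):
        true_pos = round(flagged * ppv)        # flagged people who do reoffend
        false_pos = flagged - true_pos         # flagged people who do not
        false_neg = reoffenders - true_pos     # reoffenders the score missed
        return {
            "false_positive_rate": false_pos / (n - reoffenders),
            "false_negative_rate": false_neg / reoffenders,
        }

    # In both groups, a "high risk" flag is calibrated to mean a 60% chance of reoffending.
    group_a = error_rates(n=1000, reoffenders=400, flagged=500, ppv=0.6)   # base rate 40%
    group_b = error_rates(n=1000, reoffenders=200, flagged=250, ppv=0.6)   # base rate 20%

    print(group_a)   # FPR ~0.33, FNR 0.25
    print(group_b)   # FPR ~0.125, FNR 0.25 -> calibrated, yet false positive rates differ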
link |
00:06:38.520
But nevertheless, then we can, as humans, deliberate
link |
00:06:42.000
where that trade off should be.
link |
00:06:43.360
Yeah, so at least now we're arguing in an informed way.
link |
00:06:45.880
We're not asking for something impossible.
link |
00:06:48.280
We're saying, here's where we are
link |
00:06:50.120
and here's what we aim for.
link |
00:06:51.760
And this strategy is better than that strategy.
link |
00:06:55.880
So that's, I would argue,
link |
00:06:57.560
is a really powerful and really important first step.
link |
00:07:00.600
But it's a doable one,
link |
00:07:01.720
sort of removing undesirable degrees of bias in systems
link |
00:07:06.720
in terms of protected classes.
link |
00:07:08.920
And then there's something,
link |
00:07:09.760
I listened to your commencement speech,
link |
00:07:12.480
or there's some fuzzier things like,
link |
00:07:15.560
you mentioned angry birds.
link |
00:07:17.240
Do you want to create systems that feed the dopamine
link |
00:07:21.760
enjoyment, that feed, that optimize for you
link |
00:07:25.280
returning to the system,
link |
00:07:26.720
enjoying the moment of playing the game,
link |
00:07:29.080
of getting likes or whatever this kind of thing,
link |
00:07:32.000
or some kind of long term improvement.
link |
00:07:34.880
Right. Are you even thinking about that?
link |
00:07:40.800
That's really going to the philosophical area.
link |
00:07:43.760
I think that's a really important issue too,
link |
00:07:45.760
certainly thinking about that.
link |
00:07:46.800
I don't think about that as an AI issue as much.
link |
00:07:52.280
But as you say, the point is we've built this society
link |
00:07:58.520
in this infrastructure where we say
link |
00:08:01.840
we have a marketplace for attention
link |
00:08:04.520
and we've decided as a society
link |
00:08:08.120
that we like things that are free.
link |
00:08:10.280
And so we want all apps on our phone to be free.
link |
00:08:13.800
And that means they're all competing for your attention
link |
00:08:16.200
and then eventually they make some money some way
link |
00:08:18.800
through ads or in game sales or whatever.
link |
00:08:23.200
But they can only win by defeating all the other apps
link |
00:08:27.400
by capturing your attention.
link |
00:08:29.520
And we build a marketplace where it seems
link |
00:08:34.360
like they're working against you rather than working with you.
link |
00:08:39.040
And I'd like to find a way where we can change
link |
00:08:41.880
the playing field so we feel more like,
link |
00:08:43.920
well, these things are on my side.
link |
00:08:46.760
Yes, they're letting me have some fun in the short term,
link |
00:08:49.720
but they're also helping me in the long term
link |
00:08:53.160
rather than competing against me.
link |
00:08:55.000
And those aren't necessarily conflicting objectives.
link |
00:08:57.200
They're just the incentives, the current incentives
link |
00:09:01.280
as we try to figure out this whole new world
link |
00:09:03.160
seem to land us on the easier part of that,
link |
00:09:06.680
which is feeding the dopamine, the rush.
link |
00:09:09.240
Right.
link |
00:09:10.080
But let me take a quick step back
link |
00:09:15.600
to the beginning of the Artificial Intelligence:
link |
00:09:18.040
A Modern Approach book, to the writing of it.
link |
00:09:20.160
So here you are in the 90s,
link |
00:09:22.280
when you first sat down with Stuart to write the book
link |
00:09:25.760
to cover an entire field,
link |
00:09:27.880
which is one of the only books that's successfully done
link |
00:09:30.480
that for AI, and actually in a lot of other computer science
link |
00:09:33.720
fields, it's a huge undertaking.
link |
00:09:37.360
So it must have been quite daunting.
link |
00:09:40.800
What was that process like?
link |
00:09:42.080
Did you envision that you would be trying to cover
link |
00:09:44.920
the entire field?
link |
00:09:47.240
Was there a systematic approach to it
link |
00:09:48.840
or was it more step by step?
link |
00:09:50.360
How did it feel?
link |
00:09:52.160
So I guess it came about,
link |
00:09:54.400
we'd go to lunch with the other AI faculty at Berkeley
link |
00:09:57.400
and we'd say the field is changing,
link |
00:10:00.720
seems like the current books are a little bit behind,
link |
00:10:03.640
nobody's come out with a new book recently,
link |
00:10:05.240
we should do that.
link |
00:10:06.840
And everybody said, yeah, yeah,
link |
00:10:07.760
that's a great thing to do.
link |
00:10:09.080
And we never did anything.
link |
00:10:10.040
Right.
link |
00:10:11.080
And then I ended up heading off to industry.
link |
00:10:14.360
I went to Sun Labs.
link |
00:10:15.960
So I thought, well, that's the end
link |
00:10:17.120
of my possible academic publishing career.
link |
00:10:21.760
But I met Stuart again at a conference like a year later
link |
00:10:25.200
and said, you know, that book we were always talking about,
link |
00:10:28.200
you guys must be half done with it by now, right?
link |
00:10:30.360
And he said, well, we keep talking, we never do anything.
link |
00:10:34.120
So I said, well, you know, we should do it.
link |
00:10:36.080
And I think the reason is that we all felt
link |
00:10:40.560
it was a time where the field was changing.
link |
00:10:44.600
And that was in two ways.
link |
00:10:47.920
So, you know, the good old fashioned AI
link |
00:10:50.200
was based primarily on Boolean logic.
link |
00:10:53.320
And you had a few tricks to deal with uncertainty.
link |
00:10:56.800
And it was based primarily on knowledge engineering,
link |
00:11:00.080
that the way you got something done is you went out
link |
00:11:01.960
and you interviewed an expert
link |
00:11:03.000
and you wrote down by hand everything they knew.
link |
00:11:06.560
And we saw in 95 that the field was changing in two ways.
link |
00:11:11.560
One, we were moving more towards probability
link |
00:11:14.800
rather than Boolean logic.
link |
00:11:16.280
And we were moving more towards machine learning
link |
00:11:18.680
rather than knowledge engineering.
link |
00:11:21.360
And the other books hadn't caught that wave,
link |
00:11:23.960
they were still more in the old school,
link |
00:11:27.640
although certainly they had part of that on the way.
link |
00:11:30.840
But we said, if we start now completely taking
link |
00:11:34.600
that point of view, we can have a different kind of book
link |
00:11:37.600
and we were able to put that together.
link |
00:11:40.800
And what was literally the process, if you remember?
link |
00:11:45.280
Did you start writing a chapter?
link |
00:11:46.800
Did you outline?
link |
00:11:48.680
Yeah, I guess we did an outline
link |
00:11:50.640
and then we sort of assigned chapters to each person.
link |
00:11:56.000
At the time, I had moved to Boston
link |
00:11:58.280
and Stuart was in Berkeley.
link |
00:12:00.120
So basically we did it over the internet.
link |
00:12:04.480
And that wasn't the same as doing it today.
link |
00:12:08.040
It meant dial up lines and telnetting in.
link |
00:12:15.200
You telnetted into one shell
link |
00:12:19.360
and you type cat file name
link |
00:12:21.080
and you hoped it was captured at the other end.
link |
00:12:23.880
And certainly you're not sending images
link |
00:12:26.160
and figures back and forth.
link |
00:12:27.200
Right, right, that didn't work.
link |
00:12:29.680
But did you anticipate where the field would go
link |
00:12:34.080
from that day, from the 90s?
link |
00:12:37.720
Did you see the growth into learning based methods
link |
00:12:42.960
and to data driven methods
link |
00:12:44.640
that followed in the future decades?
link |
00:12:47.080
We certainly thought that learning was important.
link |
00:12:51.960
I guess we missed it as being as important as it is today.
link |
00:12:58.080
We missed this idea of big data.
link |
00:13:00.120
We missed that the idea of deep learning
link |
00:13:02.800
hadn't been invented yet.
link |
00:13:04.480
We could have taken the book
link |
00:13:07.520
from a complete machine learning point of view
link |
00:13:11.200
right from the start.
link |
00:13:12.440
We chose to do it more from a point of view
link |
00:13:15.080
of we're gonna first develop the different types
link |
00:13:17.480
of representations and we're gonna talk
link |
00:13:20.160
about different types of environments.
link |
00:13:24.040
Is it fully observable or partially observable
link |
00:13:26.640
and is it deterministic or stochastic and so on?
link |
00:13:29.760
And we made it more complex along those axes
link |
00:13:33.400
rather than focusing on the machine learning axis first.
link |
00:13:38.040
Do you think, there's some sense in which
link |
00:13:40.880
the deep learning craze is extremely successful
link |
00:13:44.200
for a particular set of problems?
link |
00:13:46.360
And eventually it's going to, in the general case,
link |
00:13:51.040
hit challenges.
link |
00:13:52.560
So in terms of the difference between perception systems
link |
00:13:56.320
and robots that have to act in the world,
link |
00:13:59.040
do you think we're gonna return to AI,
link |
00:14:02.720
modern approach type breadth in editions five and six
link |
00:14:09.200
in future decades?
link |
00:14:11.400
Do you think deep learning will take its place
link |
00:14:13.280
as a chapter in this bigger view of AI?
link |
00:14:17.920
Yeah, I think we don't know yet
link |
00:14:19.320
how it's all gonna play out.
link |
00:14:21.120
So in the new edition, we have a chapter on deep learning.
link |
00:14:26.280
We got Ian Goodfellow to be the guest author
link |
00:14:29.520
for that chapter, so he said he could condense
link |
00:14:32.520
his whole deep learning book into one chapter.
link |
00:14:36.000
I think he did a great job.
link |
00:14:38.280
We were also encouraged that we gave him
link |
00:14:41.440
the old neural net chapter and said,
link |
00:14:45.600
have fun with it.
link |
00:14:46.440
Modernize that, and he said, you know,
link |
00:14:48.120
half of that was okay.
link |
00:14:50.280
That certainly there's lots of new things
link |
00:14:52.960
that have been developed, but some of the core
link |
00:14:55.360
was still the same.
link |
00:14:58.000
So I think we'll gain a better understanding
link |
00:15:02.360
of what you can do there.
link |
00:15:04.240
I think we'll need to incorporate all the things
link |
00:15:07.680
we can do with the other technologies, right?
link |
00:15:10.040
So deep learning started out,
link |
00:15:13.200
convolutional networks, and very close to perception.
link |
00:15:18.880
And it's since moved to be able to do more
link |
00:15:23.280
with actions and some degree of longer term planning.
link |
00:15:28.680
But we need to do a better job
link |
00:15:30.160
with representation and reasoning
link |
00:15:32.640
and one shot learning and so on.
link |
00:15:36.320
And I think we don't know yet
link |
00:15:39.720
how that's gonna play out.
link |
00:15:41.120
So do you think looking at some success,
link |
00:15:45.880
but certainly eventual demise,
link |
00:15:49.840
a partial demise of expert systems,
link |
00:15:51.520
of symbolic systems in the 80s,
link |
00:15:54.160
do you think there are kernels of wisdom
link |
00:15:56.560
in the work that was done there
link |
00:15:59.040
with logic and reasoning and so on
link |
00:16:01.080
that will rise again in your view?
link |
00:16:05.680
So certainly I think the idea of representation
link |
00:16:08.640
and reasoning is crucial,
link |
00:16:10.360
that sometimes you just don't have enough data
link |
00:16:14.000
about the world to learn de novo.
link |
00:16:17.360
So you've got to have some idea of representation,
link |
00:16:22.000
whether that was programmed in or told or whatever,
link |
00:16:24.960
and then be able to take steps of reasoning.
link |
00:16:28.600
I think the problem with the good old fashioned AI
link |
00:16:33.600
was one, we tried to base everything on these symbols
link |
00:16:38.680
that were atomic and that's great
link |
00:16:42.080
if you're like trying to define the properties of a triangle.
link |
00:16:46.160
Because they have necessary and sufficient conditions.
link |
00:16:49.520
But things in the real world don't.
link |
00:16:50.960
The real world is messy and doesn't have sharp edges
link |
00:16:54.160
and atomic symbols do.
link |
00:16:56.320
So that was a poor match.
link |
00:16:58.160
And then the other aspect was that the reasoning
link |
00:17:05.760
was universal and applied anywhere,
link |
00:17:09.800
which in some sense is good,
link |
00:17:11.160
but it also means there's no guidance
link |
00:17:13.320
as to where to apply.
link |
00:17:15.200
And so you started getting these paradoxes,
link |
00:17:17.800
like well, if I have a mountain
link |
00:17:20.680
and I remove one grain of sand,
link |
00:17:23.040
then it's still a mountain.
link |
00:17:25.160
But if I do that repeatedly at some point, it's not.
link |
00:17:28.160
Right?
link |
00:17:29.000
And with logic, there's nothing to stop you
link |
00:17:32.240
from applying things repeatedly.
link |
00:17:37.320
But maybe with something like deep learning,
link |
00:17:42.000
and I don't really know what the right name for it is,
link |
00:17:44.640
we could separate out those ideas.
link |
00:17:46.200
So one, we could say a mountain isn't just an atomic notion.
link |
00:17:51.200
It's some sort of something like a word embedding
link |
00:17:56.040
that has a more complex representation.
link |
00:18:02.280
And secondly, we could somehow learn,
link |
00:18:05.080
yeah, there's this rule that you can remove one grain of sand.
link |
00:18:08.080
You can do that a bunch of times,
link |
00:18:09.280
but you can't do it a near infinite amount of times.
link |
00:18:12.880
But on the other hand,
link |
00:18:13.760
when you're doing induction on the integers, sure,
link |
00:18:16.200
then it's fine to do it an infinite number of times.
link |
00:18:18.800
And if we could, somehow we have to learn
link |
00:18:22.160
when these strategies are applicable,
link |
00:18:24.680
rather than having the strategies be completely neutral
link |
00:18:28.280
and available everywhere.
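[A toy Python contrast for the "mountain" point above: atomic symbols are either identical or unrelated, while an embedding-style representation gives graded similarity. The three-dimensional vectors are made-up toy numbers, not real word embeddings.]

    import math

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    # Atomic symbols: equality is all-or-nothing.
    print("mountain" == "mountain", "mountain" == "hill")    # True False

    # Toy embeddings: similarity comes in degrees.
    embedding = {
        "mountain": [0.9, 0.8, 0.1],
        "hill":     [0.7, 0.6, 0.2],
        "teacup":   [0.0, 0.1, 0.9],
    }
    print(cosine(embedding["mountain"], embedding["hill"]))     # high (~0.99)
    print(cosine(embedding["mountain"], embedding["teacup"]))   # low  (~0.16)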
link |
00:18:31.240
Anytime you use neural networks,
link |
00:18:32.440
anytime you learn from data,
link |
00:18:34.400
form representation from data in an automated way,
link |
00:18:37.000
it's not very explainable as to,
link |
00:18:41.080
or it's not introspective to us humans
link |
00:18:45.120
in terms of how this neural network sees the world,
link |
00:18:48.240
or why does it succeed so brilliantly in so many cases
link |
00:18:53.280
and fail so miserably in surprising and small ways.
link |
00:18:56.480
So what do you think is the future there?
link |
00:19:01.000
Can simply more data, better data,
link |
00:19:03.480
more organized data solve that problem?
link |
00:19:06.120
Or is there elements of symbolic systems
link |
00:19:09.280
that need to be brought in,
link |
00:19:10.360
which are a little bit more explainable?
link |
00:19:12.120
Yeah, so I prefer to talk about trust
link |
00:19:16.800
and validation and verification
link |
00:19:20.320
rather than just about explainability.
link |
00:19:22.480
And then I think explanations are one tool
link |
00:19:25.240
that you use towards those goals.
link |
00:19:28.880
And I think it is an important issue
link |
00:19:30.600
that we don't wanna use these systems
link |
00:19:32.760
unless we trust them
link |
00:19:33.920
and we wanna understand where they work
link |
00:19:35.480
and where they don't work.
link |
00:19:37.040
And an explanation can be part of that, right?
link |
00:19:40.760
So I apply for a loan and I get denied,
link |
00:19:44.400
I want some explanation of why.
link |
00:19:46.120
And you have in Europe,
link |
00:19:49.200
we have the GDPR that says you're required
link |
00:19:51.640
to be able to get that.
link |
00:19:53.880
But on the other hand,
link |
00:19:54.840
an explanation alone is not enough, right?
link |
00:19:57.200
So we are used to dealing with people
link |
00:20:01.240
and with organizations and corporations and so on
link |
00:20:04.800
and they can give you an explanation
link |
00:20:06.200
and you have no guarantee
link |
00:20:07.320
that that explanation relates to reality, right?
link |
00:20:11.160
So the bank can tell me,
link |
00:20:12.560
well, you didn't get the loan
link |
00:20:13.960
because you didn't have enough collateral
link |
00:20:16.080
and that may be true or it may be true
link |
00:20:18.200
that they just didn't like my religion or something else.
link |
00:20:22.480
I can't tell from the explanation.
link |
00:20:24.600
And that's true whether the decision was made
link |
00:20:27.640
by a computer or by a person.
link |
00:20:30.880
So I want more,
link |
00:20:33.360
I do wanna have the explanations
link |
00:20:35.040
and I wanna be able to have a conversation
link |
00:20:37.280
to go back and forth and said,
link |
00:20:39.320
well, you gave this explanation, but what about this?
link |
00:20:41.920
And what would have happened if this had happened?
link |
00:20:44.200
And what would I need to change that?
link |
00:20:48.000
So I think a conversation is a better way to think about it
link |
00:20:50.880
than just an explanation as a single output.
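[As a tiny sketch of that back-and-forth "conversation" with a model, here is a toy loan rule plus a brute-force counterfactual probe ("what would I need to change?"). The rule, feature names, and thresholds are invented for illustration, not any real lending model.]

    def loan_decision(applicant):
        # Invented toy rule.
        return applicant["collateral"] >= 10_000 and applicant["income"] >= 40_000

    me = {"collateral": 8_000, "income": 45_000}
    print(loan_decision(me))                       # False: denied

    # "What would I need to change?" -- probe one feature at a time.
    for feature in ("collateral", "income"):
        for bump in (1_000, 2_000, 5_000):
            changed = {**me, feature: me[feature] + bump}
            if loan_decision(changed):
                print(f"approved if {feature} were {changed[feature]:,}")
                break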
link |
00:20:55.320
And I think we need testing of various kinds, right?
link |
00:20:58.040
So in order to know,
link |
00:21:00.720
was the decision really based on my collateral
link |
00:21:03.440
or was it based on my religion or skin color or whatever?
link |
00:21:08.440
I can't tell if I'm only looking at my case,
link |
00:21:10.920
but if I look across all the cases,
link |
00:21:12.920
then I can detect a pattern, right?
link |
00:21:15.640
So you wanna have that kind of capability.
link |
00:21:18.360
You wanna have these adversarial testing, right?
link |
00:21:21.200
So we thought we were doing pretty good
link |
00:21:23.080
at object recognition in images.
link |
00:21:25.840
We said, look, we're at sort of pretty close
link |
00:21:28.520
to human level performance on ImageNet and so on.
link |
00:21:32.320
And then you start seeing these adversarial images
link |
00:21:34.840
and you say, wait a minute,
link |
00:21:36.240
that part is nothing like human performance.
link |
00:21:39.320
Hey, you can mess with it really easily.
link |
00:21:40.920
You can mess with it really easily, right?
link |
00:21:42.680
And yeah, you can do that to humans too, right?
link |
00:21:46.040
In a different way, perhaps.
link |
00:21:47.160
Right, humans don't know what color the dress was.
link |
00:21:49.480
Right.
link |
00:21:50.520
And so they're vulnerable to certain attacks
link |
00:21:52.480
that are different than the attacks on the machines,
link |
00:21:55.680
but the attacks on the machines are so striking.
link |
00:21:59.400
They really change the way you think about what we've done.
link |
00:22:03.040
And the way I think about it is,
link |
00:22:05.640
I think part of the problem is we're seduced
link |
00:22:08.280
by our low dimensional metaphors, right?
link |
00:22:13.600
Yeah, I like that phrase.
link |
00:22:15.720
You look in a textbook and you say,
link |
00:22:18.560
okay, now we've mapped out the space
link |
00:22:20.360
and cat is here and dog is here
link |
00:22:24.960
and maybe there's a tiny little spot in the middle
link |
00:22:27.560
where you can't tell the difference,
link |
00:22:28.600
but mostly we've got it all covered.
link |
00:22:30.720
And if you believe that metaphor,
link |
00:22:33.320
then you say, well, we're nearly there.
link |
00:22:35.040
And there's only gonna be a couple adversarial images,
link |
00:22:39.200
but I think that's the wrong metaphor
link |
00:22:40.600
and what you should really say is,
link |
00:22:42.280
it's not a 2D flat space that we've got mostly covered.
link |
00:22:45.960
It's a million dimension space
link |
00:22:47.640
and cat is this string that goes out in this crazy path
link |
00:22:52.800
and if you step a little bit off the path in any direction,
link |
00:22:55.800
you're in nowhere's land
link |
00:22:57.800
and you don't know what's gonna happen.
link |
00:22:59.400
And so I think that's where we are
link |
00:23:01.160
and now we've got to deal with that.
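[A rough numerical sketch of why tiny steps off the "path" matter in a million-dimensional space: for a toy linear scorer, a small nudge aligned with the weight vector (the fast-gradient-sign idea) moves the score enormously, while a random nudge of the same size mostly cancels out. Everything here is a toy, not an attack on a real image model.]

    import random

    dim = 1_000_000                              # "a million dimension space"
    w = [random.choice((-1.0, 1.0)) for _ in range(dim)]       # toy weight vector
    x = [0.0] * dim                              # an input sitting on the decision boundary
    eps = 0.01                                   # a tiny per-coordinate change

    def score(v):
        return sum(wi * vi for wi, vi in zip(w, v))

    adversarial = [xi + eps * (1.0 if wi > 0 else -1.0) for xi, wi in zip(x, w)]
    random_step = [xi + eps * random.choice((-1.0, 1.0)) for xi in x]

    print(score(adversarial))    # ~ eps * dim = 10,000: a huge change in score
    print(score(random_step))    # ~ 0 on average: random nudges mostly cancel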
link |
00:23:03.400
So it wasn't so much an explanation,
link |
00:23:06.160
but it was an understanding of what the models are
link |
00:23:09.960
and what they're doing
link |
00:23:10.800
and now we can start exploring how do you fix that?
link |
00:23:12.800
Yeah, validating the robustness of the system and so on,
link |
00:23:15.280
but take it back to this word trust.
link |
00:23:20.000
Do you think we're a little too hard on our robots
link |
00:23:22.960
in terms of the standards we apply?
link |
00:23:25.680
So, you know, there's a dance.
link |
00:23:30.680
There's a dance and nonverbal
link |
00:23:34.080
and verbal communication between humans.
link |
00:23:36.480
You know, if we apply the same kind of standard
link |
00:23:38.880
in terms of humans, you know,
link |
00:23:40.720
we trust each other pretty quickly.
link |
00:23:43.320
You and I have met before
link |
00:23:45.600
and there's some degree of trust, right?
link |
00:23:48.320
That nothing's gonna go crazy wrong
link |
00:23:50.560
and yet to AI, when we look at AI systems
link |
00:23:53.560
we seem to approach them with skepticism always, always,
link |
00:23:58.560
and it's like they have to prove through a lot of hard work
link |
00:24:02.960
that they're worthy of even an inkling of our trust.
link |
00:24:06.640
What do you think about that?
link |
00:24:07.960
How do we break that barrier, close that gap?
link |
00:24:11.120
I think that's right.
link |
00:24:11.960
I think that's a big issue.
link |
00:24:13.720
Just listening, my friend Mark Moffett is a naturalist
link |
00:24:18.760
and he says, the most amazing thing about humans
link |
00:24:22.160
is that you can walk into a coffee shop
link |
00:24:25.080
or a busy street in a city
link |
00:24:28.440
and there's lots of people around you
link |
00:24:30.440
that you've never met before
link |
00:24:32.000
and you don't kill each other.
link |
00:24:33.440
Yeah.
link |
00:24:34.560
He says chimpanzees cannot do that.
link |
00:24:36.520
Yeah, right.
link |
00:24:37.360
Right?
link |
00:24:38.640
If a chimpanzee's in a situation where here's some
link |
00:24:42.080
that aren't from my tribe, bad things happen.
link |
00:24:46.640
Especially in a coffee shop, there's delicious food around,
link |
00:24:48.640
you know.
link |
00:24:49.480
Yeah, yeah, but we humans have figured that out, right?
link |
00:24:53.080
And, you know, for the most part, we still go to war,
link |
00:24:56.600
we still do terrible things,
link |
00:24:58.160
but for the most part,
link |
00:24:59.240
we've learned to trust each other and live together.
link |
00:25:02.760
So that's gonna be important for our AI systems as well.
link |
00:25:08.400
And also, I think, you know, a lot of the emphasis is on AI,
link |
00:25:13.640
but in many cases, AI is part of the technology
link |
00:25:18.000
but isn't really the main thing.
link |
00:25:19.280
So a lot of what we've seen is more due
link |
00:25:22.800
to communications technology than AI technology.
link |
00:25:27.360
Yeah, you wanna make these good decisions,
link |
00:25:30.120
but the reason we're able to have any kind of system at all
link |
00:25:33.920
is we've got the communication
link |
00:25:35.840
so that we're collecting the data
link |
00:25:37.560
and so that we can reach lots of people around the world.
link |
00:25:41.520
I think that's a bigger change that we're dealing with.
link |
00:25:45.080
Speaking of reaching a lot of people around the world,
link |
00:25:47.800
on the side of education,
link |
00:25:49.640
you've, one of the many things in terms of education
link |
00:25:53.320
you've done, you've taught the Intro to Artificial
link |
00:25:55.880
Intelligence course that signed up 160,000 students.
link |
00:26:00.640
It was one of the first successful examples
link |
00:26:02.360
of a MOOC, massive open online course.
link |
00:26:06.800
What did you learn from that experience?
link |
00:26:09.160
What do you think is the future of MOOCs, of education online?
link |
00:26:12.880
Yeah, it was great fun doing it,
link |
00:26:15.320
particularly being right at the start,
link |
00:26:19.960
just because it was exciting and new,
link |
00:26:21.680
but it also meant that we had less competition, right?
link |
00:26:24.920
So one of the things you hear about,
link |
00:26:27.840
well, the problem with MOOCs is the completion rates
link |
00:26:31.200
are so low, so there must be a failure.
link |
00:26:33.840
And I gotta admit, I'm a prime contributor, right?
link |
00:26:37.600
I've probably started 50 different courses
link |
00:26:40.800
that I haven't finished,
link |
00:26:42.400
but I got exactly what I wanted out of them
link |
00:26:44.240
because I had never intended to finish them.
link |
00:26:46.080
I just wanted to dabble in a little bit,
link |
00:26:48.680
either to see the topic matter
link |
00:26:50.320
or just to see the pedagogy of how are they doing this class.
link |
00:26:53.280
So I guess the main thing I learned is when I came in,
link |
00:26:58.080
I thought the challenge was information,
link |
00:27:03.160
saying, if I just take the stuff I want you to know,
link |
00:27:07.520
and I'm very clear and explain it well,
link |
00:27:10.560
then my job is done and good things are gonna happen.
link |
00:27:14.600
And then in doing the course,
link |
00:27:16.440
I learned, well, yeah, you gotta have the information,
link |
00:27:19.240
but really the motivation is the most important thing.
link |
00:27:23.040
That if students don't stick with it,
link |
00:27:26.200
then it doesn't matter how good the content is.
link |
00:27:29.560
And I think being one of the first classes,
link |
00:27:32.840
we were helped by sort of exterior motivation.
link |
00:27:36.800
So we tried to do a good job of making it enticing
link |
00:27:39.400
and setting up ways for the community
link |
00:27:44.520
to work with each other to make it more motivating.
link |
00:27:47.040
But really a lot of it was, hey, this is a new thing
link |
00:27:49.560
and I'm really excited to be part of a new thing.
link |
00:27:51.600
And so the students brought their own motivation.
link |
00:27:54.520
And so I think this is great
link |
00:27:56.800
because there's lots of people around the world
link |
00:27:58.640
who have never had this before,
link |
00:28:03.560
would never have the opportunity to go to Stanford
link |
00:28:07.000
and take a class or go to MIT
link |
00:28:08.520
or go to one of the other schools.
link |
00:28:10.440
But now we can bring that to them.
link |
00:28:12.800
And if they bring their own motivation,
link |
00:28:15.760
they can be successful in a way they couldn't before.
link |
00:28:18.920
But that's really just the top tier of people
link |
00:28:21.600
that are ready to do that.
link |
00:28:22.800
The rest of the people just don't see
link |
00:28:27.000
or don't have that motivation
link |
00:28:29.520
and don't see how if they push through
link |
00:28:31.600
and were able to do it,
link |
00:28:32.720
what advantage that would get them.
link |
00:28:34.680
So I think we got a long way to go
link |
00:28:36.240
before we're able to do that.
link |
00:28:37.920
And I think it'll be,
link |
00:28:38.960
some of it is based on technology,
link |
00:28:40.960
but more of it's based on the idea of community.
link |
00:28:44.000
You gotta actually get people together.
link |
00:28:46.160
Some of the getting together can be done online.
link |
00:28:49.360
I think some of it really has to be done in person
link |
00:28:51.800
in order to build that type of community and trust.
link |
00:28:56.440
You know, there's a mechanism by which
link |
00:28:59.520
we've developed a short attention span,
link |
00:29:02.680
especially younger people,
link |
00:29:04.480
because of sort of short form videos online,
link |
00:29:08.840
whatever the way the brain is developing now
link |
00:29:13.680
and with people that have grown up with the internet,
link |
00:29:16.720
they have quite a short attention span.
link |
00:29:18.480
So, and I would say I had the same
link |
00:29:21.120
when I was growing up too,
link |
00:29:22.320
probably for different reasons.
link |
00:29:23.960
So I probably wouldn't have learned as much as I have
link |
00:29:28.160
if I wasn't forced to sit in a physical classroom,
link |
00:29:31.400
sort of bored, sometimes falling asleep,
link |
00:29:34.040
but sort of forcing myself through that process
link |
00:29:36.680
to sometimes extremely difficult computer science courses.
link |
00:29:39.760
What's the difference in your view between in person
link |
00:29:44.960
education experience, which you first of all,
link |
00:29:48.200
you yourself had and you yourself taught
link |
00:29:50.000
and online education?
link |
00:29:52.120
And how do we close that gap if it's even possible?
link |
00:29:54.320
Yeah, so I think there's two issues.
link |
00:29:56.360
One is whether it's in person or online.
link |
00:30:00.760
So it's sort of the physical location.
link |
00:30:03.000
And then the other is kind of the affiliation, right?
link |
00:30:07.120
So you stuck with it in part
link |
00:30:10.920
because you were in the classroom
link |
00:30:12.560
and you saw everybody else was suffering
link |
00:30:14.640
the same way you were,
link |
00:30:17.440
but also because you were enrolled,
link |
00:30:20.160
you had paid tuition,
link |
00:30:22.200
sort of everybody was expecting you to stick with it.
link |
00:30:25.400
Society, parents, peers, yeah.
link |
00:30:29.440
And so those are two separate things.
link |
00:30:31.160
I mean, you could certainly imagine,
link |
00:30:33.000
I pay a huge amount of tuition
link |
00:30:35.240
and everybody signed up and says, yes, you're doing this,
link |
00:30:39.160
but then I'm in my room
link |
00:30:40.720
and my classmates are in different rooms, right?
link |
00:30:42.920
We could have things set up that way.
link |
00:30:45.960
So it's not just the online versus offline.
link |
00:30:48.840
I think what's more important is the commitment
link |
00:30:51.920
that you've made.
link |
00:30:53.920
And certainly it is important
link |
00:30:56.080
to have that kind of informal,
link |
00:30:59.960
I meet people outside of class,
link |
00:31:01.760
we talk together because we're all in it together.
link |
00:31:05.040
I think that's a really important,
link |
00:31:07.560
both in keeping your motivation
link |
00:31:10.120
and also that's where some
link |
00:31:11.440
of the most important learning goes on.
link |
00:31:13.440
So you wanna have that.
link |
00:31:15.360
Maybe, especially now,
link |
00:31:17.480
we start getting into higher bandwidths
link |
00:31:19.760
and augmented reality and virtual reality.
link |
00:31:22.560
You might be able to get that
link |
00:31:23.600
without being in the same physical place.
link |
00:31:25.920
Do you think it's possible we'll see a course at Stanford?
link |
00:31:30.720
For example, that for students,
link |
00:31:33.920
enrolled students is only online in the near future
link |
00:31:37.360
who are literally, it's part of the curriculum
link |
00:31:39.760
and there is no...
link |
00:31:41.200
Yeah, so you're starting to see that.
link |
00:31:42.640
I know Georgia Tech has a master's that's done that way.
link |
00:31:46.640
Oftentimes, it's sort of,
link |
00:31:48.400
they're creeping in in terms of master's program
link |
00:31:51.000
or sort of further education,
link |
00:31:54.320
considering the constraints of students and so on.
link |
00:31:56.640
But I mean, literally, is it possible
link |
00:31:58.640
that we just, you know, Stanford, MIT, Berkeley,
link |
00:32:02.760
all these places go online only in the next few decades?
link |
00:32:07.800
Yeah, probably not,
link |
00:32:08.760
because they've got a big commitment to a physical campus.
link |
00:32:13.280
Sure, so there's a momentum
link |
00:32:16.520
that's both financial and cultural.
link |
00:32:18.360
Right, and then there are certain things
link |
00:32:21.160
that's just hard to do virtually, right?
link |
00:32:25.080
So, you know, we're in a field where
link |
00:32:29.320
if you have your own computer and your own paper,
link |
00:32:32.680
and so on, you can do the work anywhere,
link |
00:32:36.800
but if you're in a biology lab or something,
link |
00:32:39.440
you know, you don't have all the right stuff at home.
link |
00:32:42.880
Right, so our field, programming,
link |
00:32:45.720
you've also done a lot of,
link |
00:32:47.440
you've done a lot of programming yourself.
link |
00:32:50.920
In 2001, you wrote a great article about programming
link |
00:32:54.320
called Teach Yourself Programming in 10 Years,
link |
00:32:57.320
sort of response to all the books
link |
00:32:59.360
that say Teach Yourself Programming in 21 Days.
link |
00:33:01.600
So if you were giving advice to someone
link |
00:33:03.000
getting into programming today,
link |
00:33:04.840
this is a few years since you've written that article,
link |
00:33:07.280
what's the best way to undertake that journey?
link |
00:33:10.880
I think there's lots of different ways,
link |
00:33:12.360
and I think programming means more things now.
link |
00:33:17.480
And I guess, you know, when I wrote that article,
link |
00:33:20.160
I was thinking more about becoming
link |
00:33:23.840
a professional software engineer.
link |
00:33:25.720
And I thought that's a, you know,
link |
00:33:27.680
sort of a career long field of study.
link |
00:33:30.460
But I think there's lots of things now
link |
00:33:33.340
that people can do where programming is a part
link |
00:33:37.620
of solving what they want to solve
link |
00:33:40.980
without achieving that professional level status, right?
link |
00:33:44.860
So I'm not going to be going
link |
00:33:45.820
and writing a million lines of code,
link |
00:33:47.660
but, you know, I'm a biologist or a physicist
link |
00:33:50.620
or something or even a historian,
link |
00:33:54.300
and I've got some data and I want to ask a question
link |
00:33:57.100
of that data.
link |
00:33:58.460
And I think for that, you don't need 10 years, right?
link |
00:34:02.140
So there are many shortcuts to being able
link |
00:34:05.500
to answer those kinds of questions.
link |
00:34:08.500
And, you know, you see today a lot of emphasis
link |
00:34:11.860
on learning to code, teaching kids how to code.
link |
00:34:16.740
I think that's great,
link |
00:34:18.780
but I wish they would change the message a little bit, right?
link |
00:34:22.100
So I think code isn't the main thing.
link |
00:34:24.740
I don't really care if you know the syntax
link |
00:34:27.140
of JavaScript or if you can connect these blocks together
link |
00:34:31.540
in this visual language.
link |
00:34:33.460
But what I do care about is that you can analyze a problem,
link |
00:34:38.260
you can think of a solution, you can carry out,
link |
00:34:43.740
you know, make a model, run that model, test the model,
link |
00:34:47.980
see the results, verify that they're reasonable,
link |
00:34:53.700
ask questions and answer them, all right?
link |
00:34:55.700
So it's more modeling and problem solving
link |
00:34:58.580
and you use coding in order to do that,
link |
00:35:01.900
but it's not just learning coding for its own sake.
link |
00:35:04.340
That's really interesting.
link |
00:35:05.220
So it's actually almost, in many cases,
link |
00:35:08.180
it's learning to work with data
link |
00:35:10.100
to extract something useful out of data.
link |
00:35:12.020
So when you say problem solving,
link |
00:35:13.700
you really mean taking some kind of,
link |
00:35:15.340
maybe collecting some kind of data set, cleaning it up
link |
00:35:18.780
and saying something interesting about it,
link |
00:35:20.340
which is useful in all kinds of domains.
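[A small Python sketch of the kind of "collect some data, clean it up, answer a question" workflow being described here, as opposed to professional software engineering. The records and numbers are invented.]

    records = [
        {"year": "1850", "town": "Springfield", "population": "2,300"},
        {"year": "1860", "town": "Springfield", "population": "3,100"},
        {"year": "1860", "town": "Shelbyville", "population": "N/A"},   # messy entry
        {"year": "1870", "town": "Springfield", "population": "4,850"},
    ]

    def clean(pop):
        try:
            return int(pop.replace(",", ""))
        except ValueError:
            return None                      # drop unparseable values

    # Question: how fast did Springfield grow per decade?
    springfield = sorted(
        (int(r["year"]), clean(r["population"]))
        for r in records
        if r["town"] == "Springfield" and clean(r["population"]) is not None
    )
    for (y1, p1), (y2, p2) in zip(springfield, springfield[1:]):
        print(f"{y1}-{y2}: +{p2 - p1} people")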
link |
00:35:22.420
And you know, and I see myself being stuck sometimes
link |
00:35:28.100
in kind of the old ways, right?
link |
00:35:30.500
So, you know, I'll be working on a project,
link |
00:35:34.220
maybe with a younger employee and we say,
link |
00:35:37.700
oh, well here's this new package
link |
00:35:39.300
that could help solve this problem.
link |
00:35:42.340
And I'll go and I'll start reading the manuals
link |
00:35:44.500
and you know, I'll be two hours into reading the manuals
link |
00:35:48.220
and then my colleague comes back and says, I'm done.
link |
00:35:51.140
You know, I downloaded the package, I installed it,
link |
00:35:53.820
I tried calling some things, the first one didn't work,
link |
00:35:56.500
the second one didn't work, now I'm done.
link |
00:35:58.380
And I say, but I have a hundred questions about
link |
00:36:00.820
how does this work and how does that work?
link |
00:36:02.100
And they say, who cares, right?
link |
00:36:04.140
I don't need to understand the whole thing.
link |
00:36:05.500
I answered my question, it's a big complicated package.
link |
00:36:09.180
I don't understand the rest of it,
link |
00:36:10.540
but I got the right answer.
link |
00:36:12.180
And I'm just, it's hard for me to get into that mindset.
link |
00:36:15.900
I want to understand the whole thing.
link |
00:36:17.620
And you know, if they wrote a manual,
link |
00:36:19.420
I should probably read it.
link |
00:36:21.380
And, but that's not necessarily the right way.
link |
00:36:23.380
And I think I have to get used to dealing with more,
link |
00:36:28.580
being more comfortable with uncertainty
link |
00:36:30.500
and not knowing everything.
link |
00:36:32.060
Yeah, so I struggle with the same. There's sort of
link |
00:36:34.660
the spectrum between Donald, Don Knuth,
link |
00:36:37.740
who's at kind of the very, you know,
link |
00:36:39.420
before you can say anything about a problem,
link |
00:36:42.460
he really has to get down to the machine code to assembly.
link |
00:36:45.940
And versus exactly what you said of several students
link |
00:36:50.180
in my group that, you know, like 20 years old,
link |
00:36:53.420
and they can solve almost any problem within a few hours
link |
00:36:56.780
that would take me probably weeks
link |
00:36:58.220
because I would try to, as you said, read the manual.
link |
00:37:00.940
So do you think the nature of mastery,
link |
00:37:04.340
you're mentioning biology, sort of outside disciplines,
link |
00:37:08.540
applying programming, but computer scientists.
link |
00:37:13.540
So over time, there's higher and higher levels
link |
00:37:16.460
of abstraction available now.
link |
00:37:18.380
So with this week, there's the TensorFlow Summit, right?
link |
00:37:23.740
So if you're not particularly into deep learning,
link |
00:37:27.540
but you're still a computer scientist,
link |
00:37:29.980
you can accomplish an incredible amount with TensorFlow
link |
00:37:33.220
without really knowing any fundamental internals
link |
00:37:35.980
of machine learning.
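[As an illustration of that level of abstraction, a few lines of the high-level tf.keras API train a small classifier without touching any internals. This is a generic sketch with arbitrary toy data and hyperparameters, not anything specific from the conversation.]

    import numpy as np
    import tensorflow as tf

    x = np.random.rand(256, 20).astype("float32")      # toy data
    y = (x.sum(axis=1) > 10).astype("float32")         # toy labels

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=5, verbose=0)
    print(model.predict(x[:3], verbose=0))             # predicted probabilities for 3 examples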
link |
00:37:37.500
Do you think the nature of mastery is changing,
link |
00:37:40.940
even for computer scientists, like what it means
link |
00:37:43.860
to be an expert programmer?
link |
00:37:45.700
Yeah, I think that's true.
link |
00:37:47.740
You know, we never really should have focused
link |
00:37:49.660
on programmer, right?
link |
00:37:51.500
Because it's still, it's a skill,
link |
00:37:53.660
and what we really want to focus on is the result.
link |
00:37:56.580
So we built this ecosystem where the way you can get stuff done
link |
00:38:01.300
is by programming it yourself.
link |
00:38:04.140
At least when I started it with it, you know,
link |
00:38:06.820
library functions meant you had square root,
link |
00:38:09.060
and that was about it, right?
link |
00:38:10.900
Everything else you built from scratch.
link |
00:38:13.020
And then we built up an ecosystem where a lot of times,
link |
00:38:16.100
well, you can download a lot of stuff
link |
00:38:17.420
that does a big part of what you need.
link |
00:38:20.180
And so now it's more a question of assembly
link |
00:38:23.700
rather than manufacturing.
link |
00:38:28.260
And that's a different way of looking at problems.
link |
00:38:32.180
From another perspective, in terms of mastery
link |
00:38:34.220
and looking at programmers or people that reason
link |
00:38:37.620
about problems in a computational way,
link |
00:38:39.740
so Google, you know, from the hiring perspective,
link |
00:38:44.100
from the perspective of hiring
link |
00:38:45.100
or building a team of programmers,
link |
00:38:47.420
how do you determine if someone's a good programmer?
link |
00:38:50.260
Or if somebody, again, so I want to deviate from,
link |
00:38:53.620
I want to move away from the word programmer,
link |
00:38:55.340
but somebody who can solve problems of large scale data
link |
00:38:58.740
and so on, what's, how do you build a team like that
link |
00:39:02.700
through the interviewing process?
link |
00:39:03.940
Yeah, and I think as a company grows,
link |
00:39:08.820
you get more expansive
link |
00:39:11.380
in the types of people you're looking for, right?
link |
00:39:14.380
So I think, you know, in the early days,
link |
00:39:16.540
we'd interview people and the question we were trying
link |
00:39:19.340
to ask is how close are they to Jeff Dean?
link |
00:39:24.980
And most people were pretty far away,
link |
00:39:26.740
but we take the ones that were, you know, not that far away.
link |
00:39:29.340
And so we got kind of a homogeneous group
link |
00:39:31.740
of people who are really great programmers.
link |
00:39:34.500
Then as a company grows, you say,
link |
00:39:36.940
well, we don't want everybody to be the same,
link |
00:39:39.060
to have the same skill set.
link |
00:39:40.620
And so now we're hiring biologists in our health areas
link |
00:39:47.340
and we're hiring physicists
link |
00:39:48.900
and we're hiring mechanical engineers
link |
00:39:51.140
and we're hiring, you know, social scientists
link |
00:39:54.740
and ethnographers and people with different backgrounds
link |
00:39:59.100
who bring different skills.
link |
00:40:01.700
So you have mentioned that you still take part
link |
00:40:06.020
in code reviews, given that you have a wealth
link |
00:40:09.980
of experience, as you've mentioned.
link |
00:40:13.860
What errors do you often see and tend to highlight
link |
00:40:16.620
in the code of junior developers of people coming up now,
link |
00:40:20.020
given your background from Lisp
link |
00:40:23.420
to a couple of decades of programming?
link |
00:40:25.980
Yeah, that's a great question.
link |
00:40:28.380
You know, sometimes I try to look at flexibility
link |
00:40:31.900
of the design of, yes, this API solves this problem
link |
00:40:37.540
but where's it gonna go in the future?
link |
00:40:39.900
Who else is gonna wanna call this?
link |
00:40:41.940
And are you making it easier for them to do that?
link |
00:40:46.940
Is that a matter of design, is it documentation,
link |
00:40:50.620
is it sort of an amorphous thing
link |
00:40:53.900
you can't really put into words, it's just how it feels.
link |
00:40:56.660
If you put yourself in the shoes of a developer,
link |
00:40:58.340
would you use this kind of thing?
link |
00:40:59.540
I think it is how you feel, right?
link |
00:41:01.500
And so yeah, documentation is good
link |
00:41:03.900
but it's more a design question, right?
link |
00:41:06.460
If you get the design right,
link |
00:41:07.620
then people will figure it out
link |
00:41:10.220
whether the documentation is good or not
link |
00:41:12.100
and if the design's wrong, then it'll be harder to use.
link |
00:41:16.180
How have you yourself changed as a programmer
link |
00:41:20.700
over the years in a way you already started to say,
link |
00:41:26.700
you want to read the manual,
link |
00:41:28.100
you want to understand the core of the syntax
link |
00:41:30.860
to how the language is supposed to be used and so on,
link |
00:41:33.780
but what's the evolution been like
link |
00:41:36.540
from the 80s, 90s to today?
link |
00:41:40.700
I guess one thing is you don't have to worry
link |
00:41:42.820
about the small details of efficiency
link |
00:41:46.380
as much as you used to, right?
link |
00:41:48.060
So like I remember I did my Lisp book in the 90s
link |
00:41:53.380
and one of the things I wanted to do was say,
link |
00:41:56.300
here's how you do an object system
link |
00:41:58.900
and basically we're going to make it so each object
link |
00:42:02.460
is a hash table and you look up the methods
link |
00:42:04.700
and here's how it works and then I said,
link |
00:42:06.340
of course, the real Common Lisp Object System
link |
00:42:10.940
is much more complicated,
link |
00:42:12.140
it's got all these efficiency type issues
link |
00:42:15.180
and this is just a toy and nobody would do this
link |
00:42:17.380
in real life and it turns out Python pretty much
link |
00:42:20.580
did exactly what I said and said objects are just dictionaries
link |
00:42:25.580
and yeah, they have a few little tricks as well,
link |
00:42:28.380
but mostly the thing that would have been 100 times
link |
00:42:33.380
too slow in the 80s is now plenty fast for most everything.
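[A minimal Python sketch of the "objects are just hash tables" idea described above: an object is a dictionary mapping names to values and functions, and a method call is a lookup followed by an application. Real Python objects add a lot on top of this; the account example is invented.]

    def make_account(balance):
        account = {}
        account["balance"] = balance
        # A "method" is just a function stored under a name in the table.
        account["deposit"] = lambda amount: account.__setitem__(
            "balance", account["balance"] + amount)
        return account

    def send(obj, message, *args):
        return obj[message](*args)        # method call = look up the name, apply it

    acct = make_account(100)
    send(acct, "deposit", 50)
    print(acct["balance"])                # 150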
link |
00:42:37.380
So you had to, as a programmer, let go of perhaps
link |
00:42:41.380
an obsession that I remember coming up with
link |
00:42:44.180
of trying to write efficient code.
link |
00:42:46.580
Yeah, to say what really matters is the total time
link |
00:42:51.580
it takes to get the project done
link |
00:42:54.580
and most of that's going to be the programmer time,
link |
00:42:57.580
so if you're a little bit less efficient
link |
00:42:59.580
but it makes it easier to understand and modify,
link |
00:43:02.580
then that's the right trade off.
link |
00:43:04.580
So you've written quite a bit about Lisp,
link |
00:43:06.580
your book on programming is in Lisp,
link |
00:43:08.580
you have a lot of code out there that's in Lisp,
link |
00:43:11.580
so myself and people who don't know what Lisp is
link |
00:43:15.580
should look it up, it's my favorite language
link |
00:43:17.580
for many AI researchers,
link |
00:43:19.580
it is a favorite language,
link |
00:43:21.580
the favorite language they never use these days,
link |
00:43:24.580
so what part of Lisp do you find most beautiful
link |
00:43:26.580
and powerful?
link |
00:43:27.580
So I think the beautiful part is the simplicity
link |
00:43:30.580
that in half a page you can define the whole language
link |
00:43:34.580
and other languages don't have that,
link |
00:43:37.580
so you feel like you can hold everything in your head
link |
00:43:41.580
and then a lot of people say well then that's too simple,
link |
00:43:47.580
here's all these things I want to do
link |
00:43:49.580
and my Java or Python or whatever
link |
00:43:53.580
has 100 or 200 or 300 different syntax rules
link |
00:43:57.580
and don't I need all those,
link |
00:43:59.580
and Lisp's answer was no,
link |
00:44:01.580
we're only going to give you eight or so syntax rules,
link |
00:44:04.580
but we're going to allow you to define your own
link |
00:44:07.580
and so that was a very powerful idea
link |
00:44:10.580
and I think this idea of saying
link |
00:44:14.580
I can start with my problem and with my data
link |
00:44:19.580
and then I can build the language I want
link |
00:44:22.580
for that problem and for that data
link |
00:44:25.580
and then I can make Lisp define that language,
link |
00:44:28.580
so you're sort of mixing levels
link |
00:44:31.580
and saying I'm simultaneously a programmer
link |
00:44:34.580
in a language and a language designer
link |
00:44:37.580
and that allows a better match
link |
00:44:40.580
between your problem and your eventual code
link |
00:44:43.580
and I think Lisp had done that better than other languages.
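As a rough illustration of "in half a page you can define the whole language," here is a compressed sketch in the spirit of Norvig's lis.py essay: a reader plus an evaluator with a handful of special forms. It covers only a toy subset and skips error handling; it is not the essay's actual code.

```python
import math, operator as op

def tokenize(src):
    return src.replace('(', ' ( ').replace(')', ' ) ').split()

def parse(tokens):
    token = tokens.pop(0)
    if token == '(':
        expr = []
        while tokens[0] != ')':
            expr.append(parse(tokens))
        tokens.pop(0)          # drop the closing ')'
        return expr
    try:
        return float(token)    # numbers
    except ValueError:
        return token           # everything else is a symbol

GLOBAL_ENV = {'+': op.add, '-': op.sub, '*': op.mul, '/': op.truediv,
              '<': op.lt, 'sqrt': math.sqrt}

def evaluate(x, env=GLOBAL_ENV):
    if isinstance(x, str):                      # symbol lookup
        return env[x]
    if not isinstance(x, list):                 # number: evaluates to itself
        return x
    head = x[0]
    if head == 'if':                            # (if test conseq alt)
        _, test, conseq, alt = x
        return evaluate(conseq if evaluate(test, env) else alt, env)
    if head == 'define':                        # (define name value)
        _, name, value = x
        env[name] = evaluate(value, env)
        return env[name]
    if head == 'lambda':                        # (lambda (params) body)
        _, params, body = x
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    proc = evaluate(head, env)                  # ordinary procedure call
    return proc(*[evaluate(arg, env) for arg in x[1:]])

evaluate(parse(tokenize("(define square (lambda (x) (* x x)))")))
print(evaluate(parse(tokenize("(square 7)"))))  # 49.0
```

With an evaluator this small, adding a new special form is just another clause, which is the "programmer and language designer at the same time" point.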
link |
00:44:47.580
Yeah, it's a very elegant implementation
link |
00:44:49.580
of functional programming,
link |
00:44:51.580
but why do you think Lisp has not had
link |
00:44:54.580
the mass adoption and success of languages like Python?
link |
00:44:57.580
Is it the parentheses?
link |
00:44:59.580
Is it all the parentheses?
link |
00:45:01.580
Yeah, so I think a couple of things.
link |
00:45:05.580
So one was, I think it was designed
link |
00:45:08.580
for a single programmer or a small team
link |
00:45:12.580
and a skilled programmer who had the good taste
link |
00:45:16.580
to say, well, I am doing language design
link |
00:45:19.580
and I have to make good choices
link |
00:45:21.580
and if you make good choices, that's great.
link |
00:45:23.580
If you make bad choices, you can hurt yourself
link |
00:45:27.580
and it can be hard for other people on the team to understand it.
link |
00:45:30.580
So I think there was a limit to the scale
link |
00:45:33.580
of the size of a project in terms of number of people
link |
00:45:36.580
that Lisp was good for
link |
00:45:38.580
and as an industry, we kind of grew beyond that.
link |
00:45:42.580
I think it is in part the parentheses.
link |
00:45:46.580
One of the jokes is the acronym for Lisp
link |
00:45:49.580
is Lots of Irritating Silly Parentheses.
link |
00:45:52.580
My acronym was Lisp Is Syntactically Pure,
link |
00:45:57.580
saying all you need is parentheses and atoms.
link |
00:46:00.580
But I remember as we had the AI textbook
link |
00:46:04.580
and because we did it in the 90s,
link |
00:46:08.580
we had pseudocode in the book
link |
00:46:10.580
but then we said, well, we'll have Lisp online
link |
00:46:12.580
because that's the language of AI at the time.
link |
00:46:15.580
And I remember some of the students complaining
link |
00:46:17.580
because they hadn't had Lisp before
link |
00:46:19.580
and they didn't quite understand what was going on
link |
00:46:21.580
and I remember one student complained,
link |
00:46:23.580
I don't understand how this pseudocode
link |
00:46:25.580
corresponds to this Lisp
link |
00:46:28.580
and there was a one to one correspondence
link |
00:46:30.580
between the symbols in the code and the pseudocode
link |
00:46:35.580
and the only difference was the parentheses.
link |
00:46:38.580
So I said it must be that for some people
link |
00:46:40.580
a certain number of left parentheses
link |
00:46:42.580
shuts off their brain.
link |
00:46:44.580
Yeah, it's very possible in that sense
link |
00:46:46.580
and Python just goes the other way.
link |
00:46:48.580
So that was the point at which I said,
link |
00:46:50.580
okay, can't have only Lisp as a language
link |
00:46:53.580
because I don't want to,
link |
00:46:55.580
you've only got 10 or 12 or 15 weeks
link |
00:46:58.580
or whatever it is to teach AI
link |
00:47:00.580
and I don't want to waste two weeks of that teaching Lisp.
link |
00:47:02.580
So I said, I got to have another language.
link |
00:47:04.580
Java was the most popular language at the time.
link |
00:47:06.580
I started doing that and then I said,
link |
00:47:08.580
it's really hard to have a one to one correspondence
link |
00:47:12.580
between the pseudocode and the Java
link |
00:47:14.580
because Java is so verbose.
link |
00:47:16.580
So then I said, I'm going to do a survey
link |
00:47:18.580
and find the language that's most like my pseudocode
link |
00:47:22.580
and it turned out Python basically was my pseudocode.
link |
00:47:25.580
Somehow I had channeled Guido
link |
00:47:29.580
and designed a pseudocode that was the same as Python,
link |
00:47:32.580
although I hadn't heard of Python at that point.
link |
00:47:35.580
And from then on, that's what I've been using
link |
00:47:38.580
because it's been a good match.
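As a made-up illustration of that pseudocode-to-Python match (not an excerpt from the book), here is a small search routine with the pseudocode shown as comments; each step maps almost one-to-one onto a line of Python, with no declarations or boilerplate in between.

```python
from collections import deque

def breadth_first_search(start, goal, neighbors):
    # frontier <- a FIFO queue containing start
    frontier = deque([start])
    # explored <- empty set
    explored = set()
    # loop while the frontier is not empty
    while frontier:
        # node <- pop the frontier
        node = frontier.popleft()
        # if node is the goal, return it
        if node == goal:
            return node
        explored.add(node)
        # add each unexplored neighbor to the frontier
        for n in neighbors(node):
            if n not in explored and n not in frontier:
                frontier.append(n)
    return None   # failure

# Example: search a tiny graph given as an adjacency dict.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(breadth_first_search('A', 'D', lambda n: graph[n]))   # D
```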
link |
00:47:40.580
So what's the story behind Pytudes?
link |
00:47:45.580
Your GitHub repository with puzzles and exercises
link |
00:47:48.580
in Python is pretty fun.
link |
00:47:50.580
Yeah, it seems like fun.
link |
00:47:52.580
You know, I like doing puzzles and I like being an educator.
link |
00:47:57.580
I did a class with Udacity, CS212.
link |
00:48:01.580
I think it was, it was basically problem solving,
link |
00:48:05.580
using Python and looking at different problems.
link |
00:48:08.580
Does Pytudes feed that class in terms of the exercises?
link |
00:48:11.580
I was wondering what the...
link |
00:48:12.580
Yeah, so the class came first.
link |
00:48:14.580
Some of the stuff that's in Pytudes
link |
00:48:16.580
was write ups of what was in the class
link |
00:48:18.580
and then some of it was just continuing to work on new problems.
link |
00:48:23.580
So what's the organizing madness of Pytudes?
link |
00:48:26.580
Is it just a collection of cool exercises?
link |
00:48:29.580
Just whatever I thought was fun.
link |
00:48:31.580
Okay, awesome.
link |
00:48:32.580
So you were the director of search quality at Google
link |
00:48:35.580
from 2001 to 2005 in the early days
link |
00:48:39.580
when there were just a few employees
link |
00:48:41.580
and when the company was growing like crazy.
link |
00:48:45.580
So, I mean, Google revolutionized the way we discover,
link |
00:48:51.580
share and aggregate knowledge.
link |
00:48:54.580
So this is one of the fundamental aspects of civilization,
link |
00:49:00.580
information being shared,
link |
00:49:02.580
and there's different mechanisms throughout history
link |
00:49:04.580
but Google has just 10x improved that.
link |
00:49:07.580
And you're a part of that, people discovering that information.
link |
00:49:11.580
So what were some of the challenges
link |
00:49:13.580
on the philosophical or the technical level in those early days?
link |
00:49:17.580
It definitely was an exciting time
link |
00:49:19.580
and as you say, we were doubling in size every year
link |
00:49:23.580
and the challenges were we wanted to get the right answers.
link |
00:49:28.580
And we had to figure out what that meant.
link |
00:49:32.580
We had to implement that and we had to make it all efficient
link |
00:49:36.580
and we had to keep on testing
link |
00:49:41.580
and seeing if we were delivering good answers.
link |
00:49:43.580
And now when you say good answers,
link |
00:49:45.580
it means whatever people are typing in in terms of keywords
link |
00:49:48.580
in terms of that kind of thing,
link |
00:49:50.580
that the results they get are ordered by the desirability
link |
00:49:54.580
for them of those results.
link |
00:49:56.580
The first thing they click on will likely be the thing
link |
00:49:59.580
that they were actually looking for.
link |
00:50:01.580
Right, one of the metrics we had was focused on the first thing.
link |
00:50:04.580
Some of it was focused on the whole page,
link |
00:50:07.580
some of it was focused on top three or so.
link |
00:50:11.580
So we looked at a lot of different metrics
link |
00:50:13.580
for how well we were doing
link |
00:50:15.580
and we broke it down into subclasses of, you know,
link |
00:50:19.580
maybe here's a type of query that we're not doing well on
link |
00:50:23.580
and we try to fix that.
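For illustration, here is a hedged sketch of the kinds of rank-focused metrics being described. These are standard information-retrieval measures, not Google's actual internal ones, and the query data is invented.

```python
# Each query has a ranked result list and the set of results the user
# turned out to want. One metric focuses on the first relevant hit,
# another on the quality of the top of the page.

def reciprocal_rank(ranked, relevant):
    """Focus on the first good result: 1/position of the first relevant hit."""
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            return 1.0 / i
    return 0.0

def precision_at_k(ranked, relevant, k):
    """Focus on the top of the page: fraction of the top k that is relevant."""
    return sum(doc in relevant for doc in ranked[:k]) / k

queries = [
    (["d3", "d1", "d7"], {"d1"}),          # relevant result in position 2
    (["d2", "d9", "d4"], {"d2", "d4"}),    # relevant results in positions 1 and 3
]

mrr = sum(reciprocal_rank(r, rel) for r, rel in queries) / len(queries)
p3 = sum(precision_at_k(r, rel, 3) for r, rel in queries) / len(queries)
print(f"mean reciprocal rank: {mrr:.2f}")   # 0.75
print(f"precision@3:          {p3:.2f}")    # 0.50
```

Breaking such scores down by query type is what lets you spot "here's a type of query we're not doing well on."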
link |
00:50:25.580
Early on, we started to realize
link |
00:50:27.580
that we were in an adversarial position.
link |
00:50:30.580
So we started thinking,
link |
00:50:32.580
well, we're kind of like the card catalog in the library.
link |
00:50:36.580
The books are here and we're off to the side
link |
00:50:39.580
and we're just reflecting what's there.
link |
00:50:42.580
And then we realized every time we make a change,
link |
00:50:45.580
the webmasters make a change.
link |
00:50:47.580
And it's game theoretic.
link |
00:50:49.580
And so we had to think not only is this the right move
link |
00:50:53.580
for us to make now,
link |
00:50:55.580
but also if we make this move,
link |
00:50:57.580
what's the counter move going to be?
link |
00:50:59.580
Is that going to get us into a worse place,
link |
00:51:01.580
in which case we won't make that move,
link |
00:51:03.580
we'll make a different move.
link |
00:51:05.580
And did you find, I mean, I assume,
link |
00:51:07.580
with the popularity and the growth of the internet
link |
00:51:09.580
that people were creating new content,
link |
00:51:11.580
so you're almost helping guide the creation of content?
link |
00:51:14.580
Yeah, so that's certainly true, right?
link |
00:51:16.580
So we definitely changed the structure of the network, right?
link |
00:51:21.580
So if you think back, you know, in the very early days,
link |
00:51:25.580
Larry and Sergey had the PageRank paper
link |
00:51:28.580
and Jon Kleinberg had this hubs and authorities model,
link |
00:51:33.580
which says the web is made out of these hubs,
link |
00:51:38.580
which will be my page of cool links about dogs or whatever,
link |
00:51:44.580
and people would just list links.
link |
00:51:46.580
And then there'd be authorities which were the ones,
link |
00:51:49.580
the page about dogs that most people linked to.
link |
00:51:52.580
That doesn't happen anymore.
link |
00:51:54.580
People don't bother to say my page of cool links
link |
00:51:57.580
because we took over that function, right?
link |
00:51:59.580
So we changed the way that worked.
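A minimal sketch of the hubs-and-authorities idea mentioned here (Kleinberg's HITS algorithm), run on a made-up link graph; good hubs point at good authorities, and good authorities are pointed at by good hubs. Real web-scale versions differ in many details.

```python
import math

links = {                     # page -> pages it links to (invented data)
    'hub1': ['dogs.org', 'puppies.com'],
    'hub2': ['dogs.org'],
    'dogs.org': [],
    'puppies.com': [],
}
pages = list(links)
hub = {p: 1.0 for p in pages}
auth = {p: 1.0 for p in pages}

for _ in range(20):
    # authority score: sum of hub scores of the pages linking to you
    auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
    # hub score: sum of authority scores of the pages you link to
    hub = {p: sum(auth[q] for q in links[p]) for p in pages}
    # normalize so the scores don't blow up
    for scores in (auth, hub):
        norm = math.sqrt(sum(v * v for v in scores.values())) or 1.0
        for p in scores:
            scores[p] /= norm

print(sorted(auth, key=auth.get, reverse=True)[0])   # dogs.org is the top authority
```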
link |
00:52:03.580
Did you imagine back then that the internet
link |
00:52:05.580
would be as massively vibrant as it is today?
link |
00:52:08.580
I mean, it was already growing quickly,
link |
00:52:10.580
but it's just another...
link |
00:52:12.580
I don't know if you've ever...
link |
00:52:14.580
Today, if you sit back and just look at the internet
link |
00:52:17.580
with wonder, the amount of content
link |
00:52:20.580
that's just constantly being created,
link |
00:52:22.580
constantly being shared and employed.
link |
00:52:24.580
Yeah, it's always been surprising to me.
link |
00:52:27.580
Because I'm not very good at predicting the future.
link |
00:52:30.580
And I remember being a graduate student in 1980 or so,
link |
00:52:35.580
and we had the ARPANET,
link |
00:52:39.580
and then there was this proposal to commercialize it
link |
00:52:44.580
and have this internet and this crazy Senator Gore
link |
00:52:49.580
thought that might be a good idea.
link |
00:52:51.580
And I remember thinking, oh, come on.
link |
00:52:53.580
You can't expect a commercial company
link |
00:52:55.580
to understand this technology.
link |
00:52:57.580
They'll never be able to do it.
link |
00:52:59.580
Yeah, okay, we can have this .com domain,
link |
00:53:01.580
but it won't go anywhere.
link |
00:53:03.580
So I was wrong, Al Gore was right.
link |
00:53:05.580
At the same time, the nature of what it means
link |
00:53:07.580
to be a commercial company has changed, too.
link |
00:53:09.580
So Google, in many ways, at its founding,
link |
00:53:12.580
is different than what companies were before, I think.
link |
00:53:16.580
Right, so there's all these business models
link |
00:53:19.580
that are so different than what was possible back then.
link |
00:53:22.580
So in terms of predicting the future,
link |
00:53:24.580
what do you think it takes to build a system
link |
00:53:26.580
that approaches human level intelligence?
link |
00:53:29.580
You've talked about, of course,
link |
00:53:31.580
that we shouldn't be so obsessed
link |
00:53:33.580
about creating human level intelligence.
link |
00:53:35.580
We just create systems that are very useful for humans.
link |
00:53:38.580
But what do you think it takes to approach that level?
link |
00:53:44.580
Right, so certainly I don't think human level intelligence
link |
00:53:47.580
is one thing, right?
link |
00:53:49.580
So I think there's lots of different tasks,
link |
00:53:51.580
lots of different capabilities.
link |
00:53:53.580
I also don't think that should be the goal, right?
link |
00:53:56.580
So I wouldn't want to create a calculator
link |
00:54:01.580
that could do multiplication at human level.
link |
00:54:03.580
That would be a step backwards.
link |
00:54:05.580
And so for many things,
link |
00:54:07.580
we should be aiming far beyond human level.
link |
00:54:09.580
For other things,
link |
00:54:11.580
maybe human level is a good level to aim at.
link |
00:54:14.580
And for others, we'd say,
link |
00:54:16.580
well, let's not bother doing this,
link |
00:54:18.580
because we already have humans who can take on those tasks.
link |
00:54:21.580
So as you say, I'd like to focus on what's a useful tool.
link |
00:54:25.580
And in some cases,
link |
00:54:28.580
being at human level is an important part
link |
00:54:30.580
of crossing that threshold to make the tool useful.
link |
00:54:33.580
So we see in things like these personal assistants now
link |
00:54:38.580
that you get either on your phone
link |
00:54:40.580
or on a speaker that sits on the table,
link |
00:54:43.580
you want to be able to have a conversation with those.
link |
00:54:46.580
And I think as an industry,
link |
00:54:49.580
we haven't quite figured out what the right model is
link |
00:54:51.580
for what these things can do.
link |
00:54:53.580
And we're aiming towards,
link |
00:54:55.580
well, you just have a conversation with them
link |
00:54:57.580
the way you can with a person.
link |
00:54:59.580
But we haven't delivered on that model yet, right?
link |
00:55:02.580
So you can ask it, what's the weather?
link |
00:55:04.580
You can ask it, play some nice songs
link |
00:55:08.580
and five or six other things,
link |
00:55:11.580
and then you run out of stuff that it can do.
link |
00:55:13.580
In terms of a deep, meaningful connection.
link |
00:55:15.580
So you've mentioned the movie Her
link |
00:55:17.580
as one of your favorite A.I. movies.
link |
00:55:19.580
Do you think it's possible for a human being
link |
00:55:21.580
to fall in love with an A.I. system,
link |
00:55:23.580
A.I. assistant, as you mentioned,
link |
00:55:25.580
to have taken this big leap from what's the weather
link |
00:55:28.580
to having a deep connection?
link |
00:55:30.580
Yeah, I think as people,
link |
00:55:33.580
that's what we love to do.
link |
00:55:35.580
And I was at a showing of Her
link |
00:55:38.580
where we had a panel discussion
link |
00:55:40.580
and somebody asked me,
link |
00:55:42.580
what other movie do you think Her is similar to?
link |
00:55:45.580
And my answer was Life of Brian,
link |
00:55:48.580
which is not a science fiction movie,
link |
00:55:51.580
but both movies are about wanting to believe
link |
00:55:55.580
in something that's not necessarily real.
link |
00:55:58.580
Yeah, by the way, for people who don't know,
link |
00:56:00.580
it's Monty Python.
link |
00:56:01.580
Yeah, it's brilliantly put.
link |
00:56:03.580
Right?
link |
00:56:04.580
So I mean, I think that's just the way we are.
link |
00:56:06.580
We want to trust, we want to believe,
link |
00:56:09.580
we want to fall in love,
link |
00:56:11.580
and it doesn't necessarily take that much, right?
link |
00:56:14.580
So, you know, my kids fell in love with their teddy bear,
link |
00:56:18.580
and the teddy bear was not very interactive, right?
link |
00:56:21.580
So that's all us pushing our feelings
link |
00:56:25.580
onto our devices and our things,
link |
00:56:28.580
and I think that that's what we like to do,
link |
00:56:30.580
so we'll continue to do that.
link |
00:56:32.580
So yeah, as human beings, we long for that connection,
link |
00:56:35.580
and just A.I. has to do a little bit of work
link |
00:56:38.580
to catch us on the other end.
link |
00:56:40.580
Yeah, and certainly, you know,
link |
00:56:42.580
if you can get to dog level,
link |
00:56:45.580
a lot of people have invested a lot of love in their pets.
link |
00:56:48.580
In their pets.
link |
00:56:49.580
Some people, as I've been told,
link |
00:56:52.580
in working with autonomous vehicles,
link |
00:56:54.580
have invested a lot of love into their inanimate cars,
link |
00:56:57.580
so it really doesn't take much.
link |
00:57:00.580
So, to linger on a topic
link |
00:57:04.580
that may be silly or a little bit philosophical,
link |
00:57:07.580
What is a good test of intelligence in your view?
link |
00:57:11.580
Is natural conversation, like in the Turing test,
link |
00:57:14.580
a good test? Put another way,
link |
00:57:17.580
what would impress you if you saw a computer do it these days?
link |
00:57:22.580
Yeah, I mean, I get impressed all the time.
link |
00:57:24.580
Right, so, you know, Go playing,
link |
00:57:29.580
StarCraft playing, those are all pretty cool.
link |
00:57:33.580
You know, and I think, sure,
link |
00:57:37.580
conversation is important.
link |
00:57:39.580
I think, you know,
link |
00:57:43.580
we sometimes have these tests where it's easy to fool the system,
link |
00:57:46.580
where you can have a chat bot that can have a conversation,
link |
00:57:51.580
but it never gets into a situation where it has to be deep enough
link |
00:57:55.580
that it really reveals itself as being intelligent or not.
link |
00:58:00.580
I think, you know, Turing suggested that,
link |
00:58:05.580
but I think if he were alive, he'd say,
link |
00:58:07.580
you know, I didn't really mean that seriously.
link |
00:58:10.580
And I think, you know, this is just my opinion,
link |
00:58:14.580
but I think Turing's point was not that this test of conversation
link |
00:58:19.580
is a good test.
link |
00:58:21.580
I think his point was having a test is the right thing.
link |
00:58:24.580
So rather than having the philosopher say,
link |
00:58:27.580
oh, no, AI is impossible,
link |
00:58:29.580
you should say, well, we'll just have a test.
link |
00:58:31.580
And then the result of that will tell us the answer
link |
00:58:34.580
and doesn't necessarily have to be a conversation test.
link |
00:58:36.580
That's right.
link |
00:58:37.580
And coming up with, you know,
link |
00:58:38.580
a new test as the technology evolves is probably the right way.
link |
00:58:41.580
Do you worry, as a lot of the general public does,
link |
00:58:46.580
not a lot, but some vocal part of the general public
link |
00:58:50.580
about the existential threat of artificial intelligence?
link |
00:58:53.580
So looking farther into the future, as you said,
link |
00:58:56.580
most of us are not able to predict much.
link |
00:58:58.580
So when shrouded in such mystery, there's a concern of,
link |
00:59:01.580
well, you start to think about worst case.
link |
00:59:04.580
Is that something that occupies your mind space much?
link |
00:59:08.580
So I certainly think about threats.
link |
00:59:10.580
I think about dangers.
link |
00:59:13.580
And I think any new technology has positives and negatives.
link |
00:59:19.580
And if it's a powerful technology,
link |
00:59:21.580
it can be used for bad as well as for good.
link |
00:59:24.580
So I'm certainly not worried about the robot apocalypse
link |
00:59:28.580
and the Terminator type scenarios.
link |
00:59:32.580
I am worried about change in employment.
link |
00:59:37.580
And are we going to be able to react fast enough to deal with that?
link |
00:59:41.580
I think we're already seeing it today where a lot of people
link |
00:59:44.580
are disgruntled about the way income inequality is working.
link |
00:59:50.580
And automation could help accelerate those kinds of problems.
link |
00:59:55.580
I see powerful technologies can always be used as weapons,
link |
00:59:59.580
whether they're robots or drones or whatever.
link |
01:00:03.580
Some of that we're seeing is due to AI.
link |
01:00:06.580
A lot of it, you don't need AI.
link |
01:00:09.580
And I don't know what's a worse threat,
link |
01:00:12.580
whether it's an autonomous drone or CRISPR technology
link |
01:00:17.580
becoming available. We have lots of threats to face
link |
01:00:21.580
and some of them involve AI and some of them don't.
link |
01:00:24.580
So, for the threats that technology presents,
link |
01:00:27.580
are you, for the most part, optimistic about technology
link |
01:00:31.580
also alleviating those threats or creating new opportunities
link |
01:00:34.580
or protecting us from the more detrimental effects of these?
link |
01:00:38.580
Yeah, I don't know.
link |
01:00:39.580
Again, it's hard to predict the future.
link |
01:00:41.580
And as a society so far,
link |
01:00:45.580
we've survived nuclear bombs and other things.
link |
01:00:50.580
Of course, only societies that have survived
link |
01:00:53.580
are having this conversation.
link |
01:00:55.580
Maybe that's survivorship bias there.
link |
01:00:58.580
What problem stands out to you as exciting, challenging,
link |
01:01:02.580
impactful to work on in the near future
link |
01:01:05.580
for yourself, for the community, and broadly?
link |
01:01:08.580
So we talked about these assistants and conversation.
link |
01:01:12.580
I think that's a great area.
link |
01:01:14.580
I think combining common sense reasoning
link |
01:01:20.580
with the power of data is a great area.
link |
01:01:25.580
In which application?
link |
01:01:27.580
In conversational systems, or just broadly speaking?
link |
01:01:29.580
Just in general, yeah.
link |
01:01:31.580
As a programmer, I'm interested in programming tools,
link |
01:01:35.580
both in terms of the current systems we have today
link |
01:01:39.580
with TensorFlow and so on.
link |
01:01:41.580
Can we make them much easier to use for a broader class of people?
link |
01:01:45.580
And also, can we apply machine learning to the more traditional
link |
01:01:50.580
type of programming?
link |
01:01:51.580
So when you go to Google and you type in a query
link |
01:01:56.580
and you spell something wrong, it says, did you mean?
link |
01:01:59.580
And the reason we're able to do that is because lots of other people
link |
01:02:02.580
made a similar error and then they corrected it.
link |
01:02:05.580
We should be able to go into our code bases
link |
01:02:08.580
and our bugfix spaces.
link |
01:02:10.580
And when I type a line of code, it should be able to say,
link |
01:02:13.580
did you mean such and such?
link |
01:02:15.580
If you type this today, you're probably going to type in this bugfix tomorrow.
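A hedged sketch of the "did you mean" idea as described: mine a table of corrections from what other people typed next, then suggest the most frequent fix. The logged pairs here are invented, and the same lookup works whether the strings are search queries or lines of code.

```python
from collections import Counter, defaultdict

# (what was typed, what it was later corrected to) pairs mined from logs
observed_fixes = [
    ("speling", "spelling"), ("speling", "spelling"), ("speling", "spieling"),
    ("recieve", "receive"),
    ("for i in range(len(xs))", "for i, x in enumerate(xs)"),
    ("for i in range(len(xs))", "for i, x in enumerate(xs)"),
]

corrections = defaultdict(Counter)
for typed, fixed in observed_fixes:
    corrections[typed][fixed] += 1

def did_you_mean(typed):
    """Return the most frequent correction seen for this input, if any."""
    if typed in corrections:
        return corrections[typed].most_common(1)[0][0]
    return None

print(did_you_mean("speling"))                     # spelling
print(did_you_mean("for i in range(len(xs))"))     # for i, x in enumerate(xs)
```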
link |
01:02:20.580
Yeah, that's a really exciting application of almost an assistant
link |
01:02:25.580
for the coding and programming experience at every level.
link |
01:02:29.580
So I think I could safely speak for the entire AI community.
link |
01:02:34.580
First of all, thank you for the amazing work you've done.
link |
01:02:38.580
Certainly for the amazing work you've done with the AI: A Modern Approach book.
link |
01:02:42.580
I think we're all looking forward very much for the fourth edition
link |
01:02:45.580
and then the fifth edition and so on.
link |
01:02:47.580
So Peter, thank you so much for talking today.
link |
01:02:50.580
Yeah, thank you.