
Peter Norvig: Artificial Intelligence: A Modern Approach | Lex Fridman Podcast #42



link |
00:00:00.000
The following is a conversation with Peter Norvig.
link |
00:00:02.800
He's the Director of Research at Google
link |
00:00:05.000
and the coauthor with Stuart Russell of the book
link |
00:00:07.880
Artificial Intelligence: A Modern Approach,
link |
00:00:10.640
that educated and inspired a whole generation
link |
00:00:13.680
of researchers, including myself,
link |
00:00:15.640
to get into the field of artificial intelligence.
link |
00:00:18.840
This is the Artificial Intelligence Podcast.
link |
00:00:21.720
If you enjoy it, subscribe on YouTube,
link |
00:00:24.120
give five stars on iTunes, support on Patreon,
link |
00:00:27.160
or simply connect with me on Twitter.
link |
00:00:29.040
I'm Lex Fridman, spelled F R I D M A N.
link |
00:00:32.800
And now, here's my conversation with Peter Norvig.
link |
00:00:37.680
Most researchers in the AI community, including myself,
link |
00:00:40.800
own all three editions, red, green, and blue,
link |
00:00:43.040
of the Artificial Intelligence: A Modern Approach.
link |
00:00:46.400
It's a field defining textbook, as many people are aware,
link |
00:00:49.360
that you wrote with Stuart Russell.
link |
00:00:52.120
How has the book changed and how have you changed
link |
00:00:55.320
in relation to it from the first edition
link |
00:00:57.200
to the second to the third and now fourth edition
link |
00:01:00.040
as you work on it?
link |
00:01:00.880
Yeah, so it's been a lot of years, a lot of changes.
link |
00:01:04.280
One of the things changing from the first
link |
00:01:05.960
to maybe the second or third
link |
00:01:09.480
was just the rise of computing power, right?
link |
00:01:12.920
So I think in the first edition, we said,
link |
00:01:17.720
here's propositional logic, but that only goes so far
link |
00:01:22.520
because pretty soon you have millions of short little
link |
00:01:27.520
propositional expressions and they can't possibly fit in memory.
link |
00:01:31.480
So we're gonna use first order logic that's more concise.
link |
00:01:35.720
And then we quickly realized,
link |
00:01:38.000
oh, propositional logic is pretty nice
link |
00:01:40.400
because there are really fast SAT solvers and other things.
link |
00:01:44.200
And look, there's only millions of expressions
link |
00:01:46.320
and that fits easily into memory,
link |
00:01:48.280
or maybe even billions fit into memory now.
link |
00:01:51.200
So that was a change of the type of technology we needed
link |
00:01:54.560
just because the hardware expanded.
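
As a rough illustration of the point about propositional expressions and fast SAT solvers, here is a minimal Python sketch (not from the book; the clauses and the solver are toys). A propositional knowledge base is just a pile of short clauses, small enough that millions of them fit in memory, and a DPLL-style search checks satisfiability:

    def dpll(clauses, assignment=None):
        """Return a satisfying assignment (dict: var -> bool) or None.
        Each clause is an iterable of nonzero ints; a negative int is a negated variable."""
        assignment = dict(assignment or {})
        simplified = []
        for clause in clauses:
            if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
                continue                                  # clause already satisfied
            rest = [lit for lit in clause if abs(lit) not in assignment]
            if not rest:
                return None                               # clause falsified: dead end
            simplified.append(rest)
        if not simplified:
            return assignment                             # every clause satisfied
        for clause in simplified:                         # unit propagation
            if len(clause) == 1:
                assignment[abs(clause[0])] = clause[0] > 0
                return dpll(simplified, assignment)
        var = abs(simplified[0][0])                       # branch on a variable
        for value in (True, False):
            result = dpll(simplified, {**assignment, var: value})
            if result is not None:
                return result
        return None

    # (A or not B), (B or C), (not C): each clause is tiny, and millions of them
    # are just millions of small tuples of integers.
    kb = [(1, -2), (2, 3), (-3,)]
    print(dpll(kb))      # {3: False, 2: True, 1: True}

Real solvers add much cleverer heuristics and data structures, but the core loop is about this small.
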
link |
00:01:56.720
Even to the second edition,
link |
00:01:58.200
resource constraints were loosened significantly
link |
00:02:00.720
for the second.
link |
00:02:01.880
And that was early 2000s second edition.
link |
00:02:04.880
Right, so 95 was the first and then 2000, 2001 or so.
link |
00:02:10.520
And then moving on from there,
link |
00:02:12.280
I think we're starting to see that again with the GPUs
link |
00:02:17.040
and then more specific type of machinery
link |
00:02:20.640
like the TPUs and you're seeing custom ASICs and so on
link |
00:02:25.440
for deep learning.
link |
00:02:26.280
So we're seeing another advance in terms of the hardware.
link |
00:02:30.520
Then I think another thing that we especially noticed
link |
00:02:33.640
this time around is in all three of the first editions,
link |
00:02:37.160
we kind of said, well, we're gonna define AI
link |
00:02:40.200
as maximizing expected utility
link |
00:02:43.000
and you tell me your utility function.
link |
00:02:45.520
And now we've got 27 chapters worth of cool techniques
link |
00:02:49.560
for how to optimize that.
link |
00:02:51.840
I think in this edition, we're saying more,
link |
00:02:54.080
you know what, maybe that optimization part
link |
00:02:56.880
is the easy part and the hard part is deciding
link |
00:02:59.920
what is my utility function?
link |
00:03:01.640
What do I want?
link |
00:03:03.040
And if I'm a collection of agents or a society,
link |
00:03:06.360
what do we want as a whole?
link |
00:03:08.400
So you touched that topic in this edition.
link |
00:03:10.120
You get a little bit more into utility.
link |
00:03:11.960
Yeah.
link |
00:03:12.800
That's really interesting.
link |
00:03:13.640
On a technical level,
link |
00:03:15.480
we're almost pushing the philosophical.
link |
00:03:17.560
I guess it is philosophical, right?
link |
00:03:19.320
So we've always had a philosophy chapter,
link |
00:03:21.640
which I was glad that we were supporting.
link |
00:03:27.360
And now it's less kind of the Chinese room type argument
link |
00:03:33.000
and more of these ethical and societal type issues.
link |
00:03:37.560
So we get into the issues of fairness and bias
link |
00:03:41.920
and just the issue of aggregating utilities.
link |
00:03:45.960
So how do you encode human values into a utility function?
link |
00:03:49.800
Is this something that you can do purely through data
link |
00:03:53.520
in a learned way or is there some systematic,
link |
00:03:56.840
obviously there's no good answers yet.
link |
00:03:58.560
There's just beginnings to this,
link |
00:04:01.560
to even opening the doors to these questions.
link |
00:04:02.880
So there is no one answer.
link |
00:04:04.320
Yes, there are techniques to try to learn that.
link |
00:04:07.520
So we talk about inverse reinforcement learning, right?
link |
00:04:10.800
So reinforcement learning, you take some actions,
link |
00:04:14.120
you get some rewards and you figure out
link |
00:04:16.200
what actions you should take.
link |
00:04:18.000
And inverse reinforcement learning,
link |
00:04:20.160
you observe somebody taking actions and you figure out,
link |
00:04:24.520
well, this must be what they were trying to do.
link |
00:04:27.240
If they did this action, it must be because they want it.
link |
00:04:30.360
Of course, there's restrictions to that, right?
link |
00:04:33.000
So lots of people take actions that are self destructive
link |
00:04:37.120
or they're suboptimal in certain ways.
link |
00:04:39.200
So you don't wanna learn that.
link |
00:04:40.640
You wanna somehow learn the perfect actions
link |
00:04:44.800
rather than the ones they actually take.
link |
00:04:46.480
So that's a challenge for that field.
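
A minimal sketch of the inverse reinforcement learning idea as described here, with everything invented for illustration: a tiny corridor world, a few observed actions, and candidate reward functions scored by how well the behavior they would produce matches what was observed.

    STATES = [0, 1, 2, 3, 4]          # a 1-D corridor; actions are -1 (left) or +1 (right)

    def greedy_policy(reward):
        """Move toward the strictly higher-reward neighbor; 0 means indifferent."""
        policy = {}
        for s in STATES[1:-1]:
            right, left = reward[s + 1], reward[s - 1]
            policy[s] = +1 if right > left else (-1 if left > right else 0)
        return policy

    def score(reward, demonstrations):
        """How many observed (state, action) pairs agree with the policy this reward implies?"""
        policy = greedy_policy(reward)
        return sum(policy.get(s) == a for s, a in demonstrations)

    demos = [(1, +1), (2, +1), (3, +1), (2, +1)]    # the demonstrator keeps heading right

    candidates = {
        "goal_on_right": [0, 1, 2, 3, 4],
        "goal_on_left":  [4, 3, 2, 1, 0],
        "indifferent":   [0, 0, 0, 0, 0],
    }
    best = max(candidates, key=lambda name: score(candidates[name], demos))
    print(best)      # goal_on_right: the reward that best explains the observed actions

The caveat above applies directly: if the demonstrations are suboptimal or self-destructive, a scheme like this will happily infer the wrong utility, which is exactly the stated challenge.
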
link |
00:04:51.360
Then another big part of it is just kind of theoretical
link |
00:04:55.800
of saying, what can we accomplish?
link |
00:04:58.720
And so you look at like this work on the programs
link |
00:05:04.480
to predict recidivism and decide who should get parole
link |
00:05:09.480
or who should get bail or whatever.
link |
00:05:12.240
And how are you gonna evaluate that?
link |
00:05:13.960
And one of the big issues is fairness
link |
00:05:16.880
across protected classes.
link |
00:05:18.960
Protected classes being things like sex and race and so on.
link |
00:05:23.960
And so two things you want is you wanna say,
link |
00:05:27.840
well, if I get a score of say six out of 10,
link |
00:05:32.000
then I want that to mean the same
link |
00:05:34.320
no matter what race I am, right?
link |
00:05:37.040
Yes, right, so I wanna have a 60% chance
link |
00:05:39.840
of reoffending regardless.
link |
00:05:44.360
And one of the makers of a commercial program to do that
link |
00:05:48.560
says that's what we're trying to optimize
link |
00:05:50.040
and look, we achieved that.
link |
00:05:51.280
We've reached that kind of balance.
link |
00:05:56.120
And then on the other side,
link |
00:05:57.520
you also wanna say, well, if it makes mistakes,
link |
00:06:01.840
I want that to affect both sides
link |
00:06:04.680
of the protected class equally.
link |
00:06:07.240
And it turns out they don't do that, right?
link |
00:06:09.000
So they're twice as likely to make a mistake
link |
00:06:12.160
that would harm a black person over a white person.
link |
00:06:14.800
So that seems unfair.
link |
00:06:16.480
So you'd like to say,
link |
00:06:17.320
well, I wanna achieve both those goals.
link |
00:06:19.600
And then it turns out you do the analysis
link |
00:06:21.360
and it's theoretically impossible
link |
00:06:22.960
to achieve both those goals.
link |
00:06:24.120
So you have to trade them off one against the other.
link |
00:06:27.080
So that analysis is really helpful
link |
00:06:29.040
to know what you can aim for and how much you can get.
link |
00:06:32.360
You can't have everything.
link |
00:06:33.920
But the analysis certainly can't tell you
link |
00:06:35.480
where should we make that trade off point.
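
The two goals can be made concrete with invented numbers (these are illustrative, not the real recidivism data): a high-risk label can mean the same thing in both groups while the mistakes still fall unevenly, and with different base rates no classifier short of a perfect one can equalize both.

    from collections import namedtuple

    Counts = namedtuple("Counts", "tp fp fn tn")         # confusion-matrix counts per group

    groups = {
        "group_A": Counts(tp=40, fp=20, fn=10, tn=30),   # 50 of 100 reoffend
        "group_B": Counts(tp=16, fp=8,  fn=4,  tn=72),   # 20 of 100 reoffend
    }

    for name, c in groups.items():
        precision = c.tp / (c.tp + c.fp)    # P(reoffend | labeled high risk): the score means the same thing
        fpr = c.fp / (c.fp + c.tn)          # P(labeled high risk | did not reoffend)
        print(f"{name}: precision={precision:.2f}  false_positive_rate={fpr:.2f}")

    # group_A: precision=0.67  false_positive_rate=0.40
    # group_B: precision=0.67  false_positive_rate=0.10
    # Equal precision, yet the harmful mistakes hit group_A four times as often;
    # with unequal base rates you cannot equalize both quantities at once.
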
link |
00:06:38.440
But nevertheless, then we can as humans deliberate
link |
00:06:41.960
where that trade off should be.
link |
00:06:43.120
Yeah, so at least we now we're arguing in an informed way.
link |
00:06:45.840
We're not asking for something impossible.
link |
00:06:48.240
We're saying, here's where we are
link |
00:06:50.040
and here's what we aim for.
link |
00:06:51.720
And this strategy is better than that strategy.
link |
00:06:55.840
So that's, I would argue is a really powerful
link |
00:06:58.880
and really important first step,
link |
00:07:00.560
but it's a doable one sort of removing
link |
00:07:02.800
undesirable degrees of bias in systems
link |
00:07:07.560
in terms of protected classes.
link |
00:07:08.920
And then there's something, I listened
link |
00:07:10.120
to your commencement speech,
link |
00:07:12.480
or there's some fuzzier things like,
link |
00:07:15.560
you mentioned Angry Birds.
link |
00:07:17.640
Do you wanna create systems that feed the dopamine enjoyment
link |
00:07:23.040
that feed, that optimize for you returning to the system,
link |
00:07:26.720
enjoying the moment of playing the game of getting likes
link |
00:07:30.480
or whatever, this kind of thing,
link |
00:07:32.000
or some kind of longterm improvement?
link |
00:07:34.800
Right.
link |
00:07:36.040
Are you even thinking about that?
link |
00:07:39.600
That's really going to the philosophical area.
link |
00:07:43.200
No, I think that's a really important issue too.
link |
00:07:45.720
Certainly thinking about that.
link |
00:07:46.760
I don't think about that as an AI issue as much.
link |
00:07:52.240
But as you say, the point is we've built this society
link |
00:07:57.240
and this infrastructure where we say we have a marketplace
link |
00:08:02.240
for attention and we've decided as a society
link |
00:08:07.240
that we like things that are free.
link |
00:08:09.360
And so we want all the apps on our phone to be free.
link |
00:08:13.160
And that means they're all competing for your attention.
link |
00:08:15.360
And then eventually they make some money some way
link |
00:08:17.880
through ads or in game sales or whatever.
link |
00:08:22.400
But they can only win by defeating all the other apps
link |
00:08:26.560
by stealing your attention.
link |
00:08:28.680
And we build a marketplace where it seems like
link |
00:08:34.320
they're working against you rather than working with you.
link |
00:08:38.320
And I'd like to find a way where we can change
link |
00:08:41.120
the playing field so you feel more like,
link |
00:08:43.200
well, these things are on my side.
link |
00:08:46.040
Yes, they're letting me have some fun in the short term,
link |
00:08:49.040
but they're also helping me in the long term
link |
00:08:52.520
rather than competing against me.
link |
00:08:54.280
And those aren't necessarily conflicting objectives.
link |
00:08:56.680
They're just the incentives, the direct current incentives
link |
00:09:00.760
as we try to figure out this whole new world
link |
00:09:02.720
seem to be on the easier part of that,
link |
00:09:06.120
which is feeding the dopamine, the rush.
link |
00:09:08.720
Right.
link |
00:09:09.560
But so maybe taking a quick step back to the beginning
link |
00:09:15.960
of the Artificial Intelligence,
link |
00:09:17.480
A Modern Approach book, of writing it.
link |
00:09:19.640
So here you are in the 90s.
link |
00:09:21.760
When you first sat down with Stuart to write the book
link |
00:09:25.720
to cover an entire field,
link |
00:09:27.840
which is one of the only books that's successfully done that
link |
00:09:30.600
for AI and actually in a lot of other computer science
link |
00:09:33.720
fields, it's a huge undertaking.
link |
00:09:37.400
So it must've been quite daunting.
link |
00:09:40.840
What was that process like?
link |
00:09:42.120
Did you envision that you would be trying to cover
link |
00:09:44.960
the entire field?
link |
00:09:47.280
Was there a systematic approach to it
link |
00:09:48.840
that was more step by step?
link |
00:09:50.360
How was, how did it feel?
link |
00:09:52.200
So I guess it came about,
link |
00:09:54.440
we'd go to lunch with the other AI faculty at Berkeley
link |
00:09:57.440
and we'd say, the field is changing.
link |
00:10:00.760
It seems like the current books are a little bit behind.
link |
00:10:03.680
Nobody's come out with a new book recently.
link |
00:10:05.280
We should do that.
link |
00:10:06.880
And everybody said, yeah, yeah, that's a great thing to do.
link |
00:10:09.120
And we never did anything.
link |
00:10:10.120
Right.
link |
00:10:11.120
And then I ended up heading off to industry.
link |
00:10:14.400
I went to Sun Labs.
link |
00:10:16.000
So I thought, well, that's the end of my possible
link |
00:10:19.000
academic publishing career.
link |
00:10:21.840
But I met Stuart again at a conference like a year later
link |
00:10:25.280
and said, you know that book we were always talking about,
link |
00:10:28.240
you guys must be half done with it by now, right?
link |
00:10:30.400
And he said, well, we keep talking, we never do anything.
link |
00:10:34.160
So I said, well, you know, we should do it.
link |
00:10:36.120
And I think the reason is that we all felt
link |
00:10:40.600
it was a time where the field was changing.
link |
00:10:44.640
And that was in two ways.
link |
00:10:46.640
So, you know, the good old fashioned AI
link |
00:10:49.080
was based primarily on Boolean logic.
link |
00:10:52.160
And you had a few tricks to deal with uncertainty.
link |
00:10:55.680
And it was based primarily on knowledge engineering.
link |
00:10:59.040
That the way you got something done is you went out,
link |
00:11:00.920
you interviewed an expert and you wrote down by hand
link |
00:11:03.600
everything they knew.
link |
00:11:05.520
And we saw in 95 that the field was changing in two ways.
link |
00:11:10.520
One, we're moving more towards probability
link |
00:11:13.760
rather than Boolean logic.
link |
00:11:15.240
And we're moving more towards machine learning
link |
00:11:17.640
rather than knowledge engineering.
link |
00:11:20.440
And the other books hadn't caught that wave yet;
link |
00:11:22.920
they were still more in the old school.
link |
00:11:26.680
Although, so certainly they had part of that on the way.
link |
00:11:29.920
But we said, if we start now completely taking
link |
00:11:33.600
that point of view, we can have a different kind of book.
link |
00:11:36.640
And we were able to put that together.
link |
00:11:39.920
And what was literally the process if you remember,
link |
00:11:44.200
did you start writing a chapter?
link |
00:11:46.800
Did you outline?
link |
00:11:48.680
Yeah, I guess we did an outline
link |
00:11:50.640
and then we sort of assigned chapters to each person.
link |
00:11:55.960
At the time I had moved to Boston
link |
00:11:58.200
and Stuart was in Berkeley.
link |
00:12:00.080
So basically we did it over the internet.
link |
00:12:04.440
And, you know, that wasn't the same as doing it today.
link |
00:12:08.000
It meant, you know, dial up lines and telnetting in.
link |
00:12:13.000
And, you know, you telnetted into one shell
link |
00:12:19.320
and you typed cat filename
link |
00:12:21.040
and you hoped it was captured at the other end.
link |
00:12:23.840
And certainly you're not sending images
link |
00:12:26.120
and figures back and forth.
link |
00:12:27.200
Right, right, that didn't work.
link |
00:12:29.640
But, you know, did you anticipate
link |
00:12:31.440
where the field would go from that day, from the 90s?
link |
00:12:37.680
Did you see the growth into learning based methods
link |
00:12:42.680
and to data driven methods
link |
00:12:44.640
that followed in the future decades?
link |
00:12:47.040
We certainly thought that learning was important.
link |
00:12:51.960
I guess we missed it as being as important as it is today.
link |
00:12:58.040
We missed this idea of big data.
link |
00:13:00.080
We missed the idea of deep learning, which
link |
00:13:02.760
hadn't been invented yet.
link |
00:13:04.440
We could have taken the book
link |
00:13:07.480
from a complete machine learning point of view
link |
00:13:11.160
right from the start.
link |
00:13:12.400
We chose to do it more from a point of view
link |
00:13:15.080
of we're gonna first develop
link |
00:13:16.920
different types of representations.
link |
00:13:19.120
And we're gonna talk about different types of environments.
link |
00:13:24.000
Is it fully observable or partially observable?
link |
00:13:26.600
And is it deterministic or stochastic and so on?
link |
00:13:29.720
And we made it more complex along those axes
link |
00:13:33.360
rather than focusing on the machine learning axis first.
link |
00:13:38.000
Do you think, you know, there's some sense
link |
00:13:40.000
in which the deep learning craze is extremely successful
link |
00:13:44.160
for a particular set of problems.
link |
00:13:46.320
And, you know, eventually it's going to,
link |
00:13:49.360
in the general case, hit challenges.
link |
00:13:52.520
So in terms of the difference between perception systems
link |
00:13:56.280
and robots that have to act in the world,
link |
00:13:59.000
do you think we're gonna return
link |
00:14:01.360
to AI modern approach type breadth
link |
00:14:06.200
in edition five and six?
link |
00:14:08.760
In future decades, do you think deep learning
link |
00:14:12.360
will take its place as a chapter
link |
00:14:14.080
in this bigger view of AI?
link |
00:14:17.920
Yeah, I think we don't know yet
link |
00:14:19.320
how it's all gonna play out.
link |
00:14:21.080
So in the new edition, we have a chapter on deep learning.
link |
00:14:26.240
We got Ian Goodfellow to be the guest author
link |
00:14:29.480
for that chapter.
link |
00:14:30.600
So he said he could condense his whole deep learning book
link |
00:14:34.800
into one chapter.
link |
00:14:35.960
I think he did a great job.
link |
00:14:38.240
We were also encouraged that he's, you know,
link |
00:14:40.560
we gave him the old neural net chapter
link |
00:14:43.600
and said, modernize that.
link |
00:14:47.280
And he said, you know, half of that was okay.
link |
00:14:50.280
That certainly there's lots of new things
link |
00:14:52.960
that have been developed,
link |
00:14:54.000
but some of the core was still the same.
link |
00:14:58.000
So I think we'll gain a better understanding
link |
00:15:02.320
of what you can do there.
link |
00:15:04.240
I think we'll need to incorporate
link |
00:15:07.040
all the things we can do with the other technologies, right?
link |
00:15:10.040
So deep learning started out with convolutional networks
link |
00:15:14.680
and very close to perception.
link |
00:15:18.880
And it's since moved to be able to do more
link |
00:15:23.280
with actions and some degree of longer term planning.
link |
00:15:28.680
But we need to do a better job
link |
00:15:30.160
with representation and reasoning
link |
00:15:32.640
and one shot learning and so on.
link |
00:15:36.280
And I think we don't know yet how that's gonna play out.
link |
00:15:41.120
So do you think looking at some success,
link |
00:15:45.840
but certainly eventual demise,
link |
00:15:49.840
a partial demise of expert systems,
link |
00:15:51.520
of symbolic systems in the 80s,
link |
00:15:54.160
do you think there are kernels of wisdom
link |
00:15:56.560
and the work that was done there
link |
00:15:59.040
with logic and reasoning and so on
link |
00:16:01.080
that will rise again in your view?
link |
00:16:05.700
So certainly I think the idea of representation
link |
00:16:08.640
and reasoning is crucial
link |
00:16:10.360
that sometimes you just don't have enough data
link |
00:16:13.980
about the world to learn de novo.
link |
00:16:17.360
So you've got to have some idea of representation,
link |
00:16:22.000
whether that was programmed in or told or whatever,
link |
00:16:24.920
and then be able to take steps of reasoning.
link |
00:16:28.600
I think the problem with the good old fashioned AI
link |
00:16:33.600
was one, we tried to base everything on these symbols
link |
00:16:39.940
that were atomic.
link |
00:16:42.540
And that's great if you're like trying to define
link |
00:16:45.500
the properties of a triangle, right?
link |
00:16:47.580
Because they have necessary and sufficient conditions.
link |
00:16:50.700
But things in the real world don't.
link |
00:16:52.020
The real world is messy and doesn't have sharp edges
link |
00:16:55.260
and atomic symbols do.
link |
00:16:57.380
So that was a poor match.
link |
00:16:59.300
And then the other aspect was that the reasoning
link |
00:17:05.740
was universal and applied anywhere,
link |
00:17:09.740
which in some sense is good,
link |
00:17:11.140
but it also means there's no guidance
link |
00:17:13.260
as to where to apply.
link |
00:17:15.140
And so you started getting these paradoxes
link |
00:17:17.780
like, well, if I have a mountain
link |
00:17:20.640
and I remove one grain of sand,
link |
00:17:22.980
then it's still a mountain.
link |
00:17:25.140
But if I do that repeatedly, at some point it's not, right?
link |
00:17:28.780
And with logic, there's nothing to stop you
link |
00:17:32.300
from applying things repeatedly.
link |
00:17:37.340
But maybe with something like deep learning,
link |
00:17:42.020
and I don't really know what the right name for it is,
link |
00:17:44.660
we could separate out those ideas.
link |
00:17:46.240
So one, we could say a mountain isn't just an atomic notion.
link |
00:17:52.860
It's some sort of something like a word embedding
link |
00:17:56.060
that has a more complex representation.
link |
00:18:02.300
And secondly, we could somehow learn,
link |
00:18:05.080
yeah, there's this rule that you can remove
link |
00:18:06.740
one grain of sand and you can do that a bunch of times,
link |
00:18:09.260
but you can't do it a near infinite amount of times.
link |
00:18:12.860
But on the other hand, when you're doing induction
link |
00:18:15.240
on the integers, sure, then it's fine to do it
link |
00:18:17.260
an infinite number of times.
link |
00:18:18.800
And if we could, somehow we have to learn
link |
00:18:22.180
when these strategies are applicable
link |
00:18:24.660
rather than having the strategies be completely neutral
link |
00:18:28.220
and available everywhere.
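
The point about unrestricted rule application can be seen in a toy forward chainer (illustrative only): a locally plausible rule, applied with no sense of when it stops being applicable, derives the absurd conclusion.

    def forward_chain(facts, rules):
        """Keep applying rules (fact -> new fact or None) until nothing new is derived."""
        agenda = list(facts)
        while agenda:
            fact = agenda.pop()
            for rule in rules:
                new = rule(fact)
                if new is not None and new not in facts:
                    facts.add(new)
                    agenda.append(new)
        return facts

    def remove_one_grain(fact):
        kind, grains = fact
        if kind == "mountain" and grains > 0:
            return ("mountain", grains - 1)        # "still a mountain," says the rule
        return None

    facts = forward_chain({("mountain", 10_000)}, [remove_one_grain])
    print(("mountain", 0) in facts)    # True: nothing told the logic when to stop applying the rule

Learning when a strategy applies, as suggested here, would amount to learning a context or a bound for rules like this one instead of leaving them available everywhere.
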
link |
00:18:31.220
Anytime you use neural networks,
link |
00:18:32.380
anytime you learn from data,
link |
00:18:34.340
form representation from data in an automated way,
link |
00:18:36.980
it's not very explainable as to,
link |
00:18:41.020
or it's not introspective to us humans
link |
00:18:45.100
in terms of how this neural network sees the world,
link |
00:18:48.180
where, why does it succeed so brilliantly in so many cases
link |
00:18:53.180
and fail so miserably in surprising and small ways.
link |
00:18:56.460
So what do you think is the future there?
link |
00:19:00.980
Can simply more data, better data,
link |
00:19:03.460
more organized data solve that problem?
link |
00:19:06.100
Or is there elements of symbolic systems
link |
00:19:09.280
that need to be brought in
link |
00:19:10.380
which are a little bit more explainable?
link |
00:19:12.140
Yeah, so I prefer to talk about trust
link |
00:19:16.820
and validation and verification
link |
00:19:20.340
rather than just about explainability.
link |
00:19:22.500
And then I think explanations are one tool
link |
00:19:25.300
that you use towards those goals.
link |
00:19:28.900
And I think it is an important issue
link |
00:19:30.660
that we don't wanna use these systems unless we trust them
link |
00:19:33.980
and we wanna understand where they work
link |
00:19:35.500
and where they don't work.
link |
00:19:37.060
And an explanation can be part of that, right?
link |
00:19:40.820
So I apply for a loan and I get denied,
link |
00:19:44.460
I want some explanation of why.
link |
00:19:46.140
And you have, in Europe, we have the GDPR
link |
00:19:50.220
that says you're required to be able to get that.
link |
00:19:53.940
But on the other hand,
link |
00:19:54.860
the explanation alone is not enough, right?
link |
00:19:57.220
So we are used to dealing with people
link |
00:20:01.300
and with organizations and corporations and so on,
link |
00:20:04.820
and they can give you an explanation
link |
00:20:06.260
and you have no guarantee
link |
00:20:07.360
that that explanation relates to reality, right?
link |
00:20:11.220
So the bank can tell me, well, you didn't get the loan
link |
00:20:13.980
because you didn't have enough collateral.
link |
00:20:16.100
And that may be true, or it may be true
link |
00:20:18.240
that they just didn't like my religion or something else.
link |
00:20:22.220
I can't tell from the explanation,
link |
00:20:24.620
and that's true whether the decision was made
link |
00:20:27.660
by a computer or by a person.
link |
00:20:30.940
So I want more.
link |
00:20:33.420
I do wanna have the explanations
link |
00:20:35.060
and I wanna be able to have a conversation
link |
00:20:37.300
to go back and forth and said,
link |
00:20:39.380
well, you gave this explanation, but what about this?
link |
00:20:41.940
And what would have happened if this had happened?
link |
00:20:44.180
And what would I need to change that?
link |
00:20:48.020
So I think a conversation is a better way to think about it
link |
00:20:50.860
than just an explanation as a single output.
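
The "what would I need to change?" part of that conversation can be phrased as a counterfactual query against the model. A toy sketch follows; the loan rule, feature names, and numbers are all invented.

    def approve(income, collateral, threshold=100.0):
        """Toy scoring rule: approve when a weighted sum clears a threshold."""
        return 0.6 * income + 0.4 * collateral >= threshold

    def collateral_needed(income, threshold=100.0, step=1.0, limit=1000.0):
        """Smallest collateral (searched in `step` increments) that flips the decision."""
        collateral = 0.0
        while collateral <= limit:
            if approve(income, collateral, threshold):
                return collateral
            collateral += step
        return None

    applicant_income = 120.0
    print(approve(applicant_income, 50.0))         # False: the loan is denied as things stand
    print(collateral_needed(applicant_income))     # 70.0: the counterfactual answer

Against a real model the search is fancier, but the shape of the question is the same: the smallest change to my situation that flips the decision.
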
link |
00:20:55.300
And I think we need testing of various kinds, right?
link |
00:20:58.040
So in order to know,
link |
00:21:00.740
was the decision really based on my collateral
link |
00:21:03.460
or was it based on my religion or skin color or whatever?
link |
00:21:08.420
I can't tell if I'm only looking at my case,
link |
00:21:10.900
but if I look across all the cases,
link |
00:21:12.940
then I can detect the pattern, right?
link |
00:21:15.620
So you wanna have that kind of capability.
link |
00:21:18.340
You wanna have these adversarial testing, right?
link |
00:21:21.180
So we thought we were doing pretty good
link |
00:21:23.060
at object recognition in images.
link |
00:21:25.860
We said, look, we're at sort of pretty close
link |
00:21:28.500
to human level performance on ImageNet and so on.
link |
00:21:32.300
And then you start seeing these adversarial images
link |
00:21:34.860
and you say, wait a minute,
link |
00:21:36.220
that part is nothing like human performance.
link |
00:21:39.500
You can mess with it really easily.
link |
00:21:40.900
You can mess with it really easily, right?
link |
00:21:42.700
And yeah, you can do that to humans too, right?
link |
00:21:45.500
So we.
link |
00:21:46.340
In a different way perhaps.
link |
00:21:47.180
Right, humans don't know what color the dress was.
link |
00:21:49.500
Right.
link |
00:21:50.540
And so they're vulnerable to certain attacks
link |
00:21:52.460
that are different than the attacks on the machines,
link |
00:21:55.680
but the attacks on the machines are so striking.
link |
00:21:59.420
They really change the way you think
link |
00:22:00.800
about what we've done, right?
link |
00:22:03.060
And the way I think about it is,
link |
00:22:05.660
I think part of the problem is we're seduced
link |
00:22:08.300
by our low dimensional metaphors, right?
link |
00:22:13.660
Yeah.
link |
00:22:14.500
I like that phrase.
link |
00:22:15.700
You look in a textbook and you say,
link |
00:22:18.580
okay, now we've mapped out the space
link |
00:22:20.340
and a cat is here and dog is here
link |
00:22:24.980
and maybe there's a tiny little spot in the middle
link |
00:22:27.540
where you can't tell the difference,
link |
00:22:28.600
but mostly we've got it all covered.
link |
00:22:30.740
And if you believe that metaphor,
link |
00:22:33.300
then you say, well, we're nearly there.
link |
00:22:35.060
And there's only gonna be a couple adversarial images.
link |
00:22:39.220
But I think that's the wrong metaphor
link |
00:22:40.620
and what you should really say is,
link |
00:22:42.300
it's not a 2D flat space that we've got mostly covered.
link |
00:22:45.940
It's a million dimension space
link |
00:22:47.620
and a cat is this string that goes out in this crazy path.
link |
00:22:52.800
And if you step a little bit off the path in any direction,
link |
00:22:55.820
you're in nowhere's land
link |
00:22:57.820
and you don't know what's gonna happen.
link |
00:22:59.420
And so I think that's where we are
link |
00:23:01.160
and now we've got to deal with that.
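
A self-contained sketch of why it is so easy to mess with a model in a million-dimension space, using a plain linear classifier rather than a deep network (weights and inputs are random placeholders): a tiny, coordinated nudge to every coordinate adds up to a large change in the score.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 1000                               # a high-dimensional input ("pixels")
    w = rng.normal(size=d)                 # fixed classifier weights

    def predict(x):
        return 1 if x @ w > 0 else 0       # simple linear decision rule

    x = rng.normal(size=d)
    if predict(x) == 0:
        x = -x                             # make sure we start from class 1

    # The gradient of the score with respect to the input is just w, so step each
    # coordinate a tiny amount against sign(w), barely enough to cross the boundary.
    score = x @ w
    epsilon = 1.001 * score / np.sum(np.abs(w))
    x_adv = x - epsilon * np.sign(w)

    print(predict(x), predict(x_adv))      # 1 0: the label flips
    print(round(float(epsilon), 4))        # each coordinate moved by only this much

Deep networks are not linear, but fast-gradient-sign attacks on them exploit much the same arithmetic: in a high-dimensional space, a little bit off the path in any direction is always close by.
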
link |
00:23:03.400
So it wasn't so much an explanation,
link |
00:23:06.180
but it was an understanding of what the models are
link |
00:23:09.980
and what they're doing
link |
00:23:10.800
and now we can start exploring, how do you fix that?
link |
00:23:12.860
Yeah, validating the robustness of the system and so on,
link |
00:23:15.340
but take it back to this word trust.
link |
00:23:20.060
Do you think we're a little too hard on our robots
link |
00:23:22.980
in terms of the standards we apply?
link |
00:23:25.740
So, you know,
link |
00:23:30.580
there's a dance in nonverbal
link |
00:23:34.100
and verbal communication between humans.
link |
00:23:37.100
If we apply the same kind of standard in terms of humans,
link |
00:23:40.740
we trust each other pretty quickly.
link |
00:23:43.940
You know, you and I haven't met before
link |
00:23:45.620
and there's some degree of trust, right?
link |
00:23:48.360
That nothing's gonna go crazy wrong
link |
00:23:50.580
and yet to AI, when we look at AI systems
link |
00:23:53.620
we seem to approach them with skepticism always, always.
link |
00:23:58.700
And it's like they have to prove through a lot of hard work
link |
00:24:03.060
that they're even worthy of an inkling of our trust.
link |
00:24:06.700
What do you think about that?
link |
00:24:08.380
How do we break that barrier, close that gap?
link |
00:24:11.180
I think that's right.
link |
00:24:12.020
I think that's a big issue.
link |
00:24:13.780
Just listening, my friend Mark Moffett is a naturalist
link |
00:24:18.780
and he says, the most amazing thing about humans
link |
00:24:22.220
is that you can walk into a coffee shop
link |
00:24:25.120
or a busy street in a city
link |
00:24:28.500
and there's lots of people around you
link |
00:24:30.460
that you've never met before and you don't kill each other.
link |
00:24:34.100
Yeah.
link |
00:24:34.920
He says, chimpanzees cannot do that.
link |
00:24:36.580
Yeah, right.
link |
00:24:37.420
Right?
link |
00:24:38.660
If a chimpanzee's in a situation where here's some
link |
00:24:42.140
that aren't from my tribe, bad things happen.
link |
00:24:46.660
Especially in a coffee shop,
link |
00:24:47.580
there's delicious food around, you know.
link |
00:24:48.940
Yeah, yeah.
link |
00:24:49.900
But we humans have figured that out, right?
link |
00:24:53.140
And you know.
link |
00:24:54.220
For the most part.
link |
00:24:55.040
For the most part.
link |
00:24:55.880
We still go to war, we still do terrible things
link |
00:24:58.180
but for the most part, we've learned to trust each other
link |
00:25:01.020
and live together.
link |
00:25:02.780
So that's gonna be important for our AI systems as well.
link |
00:25:08.420
And also I think a lot of the emphasis is on AI
link |
00:25:13.660
but in many cases, AI is part of the technology
link |
00:25:18.000
but isn't really the main thing.
link |
00:25:19.300
So a lot of what we've seen is more due
link |
00:25:22.820
to communications technology than AI technology.
link |
00:25:27.380
Yeah, you wanna make these good decisions
link |
00:25:30.120
but the reason we're able to have any kind of system at all
link |
00:25:33.900
is we've got the communication
link |
00:25:35.820
so that we're collecting the data
link |
00:25:37.560
and so that we can reach lots of people around the world.
link |
00:25:41.500
I think that's a bigger change that we're dealing with.
link |
00:25:45.060
Speaking of reaching a lot of people around the world,
link |
00:25:47.780
on the side of education,
link |
00:25:51.380
one of the many things in terms of education you've done,
link |
00:25:53.660
you've taught the Intro to Artificial Intelligence course
link |
00:25:56.980
that signed up 160,000 students.
link |
00:26:00.640
It's one of the first successful examples
link |
00:26:02.300
of a MOOC, Massive Open Online Course.
link |
00:26:06.780
What did you learn from that experience?
link |
00:26:09.180
What do you think is the future of MOOCs,
link |
00:26:11.620
of education online?
link |
00:26:12.860
Yeah, it was great fun doing it,
link |
00:26:15.340
particularly being right at the start
link |
00:26:19.940
just because it was exciting and new
link |
00:26:21.660
but it also meant that we had less competition, right?
link |
00:26:24.940
So one of the things you hear about,
link |
00:26:27.860
well, the problem with MOOCs is the completion rates
link |
00:26:31.180
are so low, so it must be a failure,
link |
00:26:33.820
and I gotta admit, I'm a prime contributor, right?
link |
00:26:37.580
I probably started 50 different courses
link |
00:26:40.780
that I haven't finished
link |
00:26:42.400
but I got exactly what I wanted out of them
link |
00:26:44.260
because I had never intended to finish them.
link |
00:26:46.100
I just wanted to dabble in a little bit
link |
00:26:48.680
either to see the topic matter
link |
00:26:50.300
or just to see the pedagogy of how they're doing this class.
link |
00:26:53.340
So I guess the main thing I learned is when I came in,
link |
00:26:58.060
I thought the challenge was information,
link |
00:27:03.140
saying if I just take the stuff I want you to know
link |
00:27:07.460
and I'm very clear and explain it well,
link |
00:27:10.540
then my job is done and good things are gonna happen.
link |
00:27:14.580
And then in doing the course, I learned,
link |
00:27:17.300
well, yeah, you gotta have the information
link |
00:27:19.220
but really the motivation is the most important thing
link |
00:27:23.020
that if students don't stick with it,
link |
00:27:26.140
it doesn't matter how good the content is.
link |
00:27:29.500
And I think being one of the first classes,
link |
00:27:32.780
we were helped by sort of exterior motivation.
link |
00:27:36.780
So we tried to do a good job of making it enticing
link |
00:27:39.340
and setting up ways for the community
link |
00:27:44.460
to work with each other to make it more motivating
link |
00:27:46.980
but really a lot of it was, hey, this is a new thing
link |
00:27:49.500
and I'm really excited to be part of a new thing.
link |
00:27:51.580
And so the students brought their own motivation.
link |
00:27:54.580
And so I think this is great
link |
00:27:56.860
because there's lots of people around the world
link |
00:27:58.660
who have never had this before,
link |
00:28:03.620
would never have the opportunity to go to Stanford
link |
00:28:07.060
and take a class or go to MIT
link |
00:28:08.540
or go to one of the other schools
link |
00:28:10.460
but now we can bring that to them
link |
00:28:12.860
and if they bring their own motivation,
link |
00:28:15.820
they can be successful in a way they couldn't before.
link |
00:28:18.940
But that's really just the top tier of people
link |
00:28:21.580
that are ready to do that.
link |
00:28:22.780
The rest of the people just don't see
link |
00:28:26.980
or don't have the motivation
link |
00:28:29.500
and don't see how if they push through
link |
00:28:31.620
and were able to do it, what advantage that would get them.
link |
00:28:34.660
So I think we got a long way to go
link |
00:28:36.220
before we were able to do that.
link |
00:28:37.900
And I think some of it is based on technology
link |
00:28:40.940
but more of it's based on the idea of community.
link |
00:28:43.980
You gotta actually get people together.
link |
00:28:46.140
Some of the getting together can be done online.
link |
00:28:49.340
I think some of it really has to be done in person
link |
00:28:52.300
in order to build that type of community and trust.
link |
00:28:56.460
You know, there's an intentional mechanism
link |
00:28:59.500
by which we've developed a short attention span,
link |
00:29:02.660
especially younger people
link |
00:29:04.500
because of sort of shorter and shorter videos online,
link |
00:29:08.820
there's, whatever the way the brain is developing now,
link |
00:29:13.700
and with people that have grown up with the internet,
link |
00:29:16.660
they have quite a short attention span.
link |
00:29:18.460
So, and I would say I had the same
link |
00:29:21.100
when I was growing up too, probably for different reasons.
link |
00:29:23.940
So I probably wouldn't have learned as much as I have
link |
00:29:28.100
if I wasn't forced to sit in a physical classroom,
link |
00:29:31.380
sort of bored, sometimes falling asleep,
link |
00:29:33.980
but sort of forcing myself through that process.
link |
00:29:36.660
So sometimes extremely difficult computer science courses.
link |
00:29:39.700
What's the difference in your view
link |
00:29:42.140
between in person education experience,
link |
00:29:46.340
which you, first of all, yourself had
link |
00:29:48.940
and you yourself taught and online education
link |
00:29:52.100
and how do we close that gap if it's even possible?
link |
00:29:54.340
Yeah, so I think there's two issues.
link |
00:29:56.380
One is whether it's in person or online.
link |
00:30:00.740
So it's sort of the physical location
link |
00:30:03.020
and then the other is kind of the affiliation, right?
link |
00:30:07.100
So you stuck with it in part
link |
00:30:10.900
because you were in the classroom
link |
00:30:12.540
and you saw everybody else was suffering
link |
00:30:15.380
the same way you were,
link |
00:30:17.420
but also because you were enrolled,
link |
00:30:20.140
you had paid tuition,
link |
00:30:22.180
sort of everybody was expecting you to stick with it.
link |
00:30:25.380
Society, parents, peers.
link |
00:30:29.420
And so those are two separate things.
link |
00:30:31.140
I mean, you could certainly imagine
link |
00:30:32.980
I pay a huge amount of tuition
link |
00:30:35.220
and everybody signed up and says, yes, you're doing this,
link |
00:30:38.180
but then I'm in my room
link |
00:30:40.740
and my classmates are in different rooms, right?
link |
00:30:43.220
We could have things set up that way.
link |
00:30:45.980
So it's not just the online versus offline.
link |
00:30:48.900
I think what's more important
link |
00:30:50.020
is the commitment that you've made.
link |
00:30:53.940
And certainly it is important
link |
00:30:56.100
to have that kind of informal,
link |
00:30:59.660
you know, I meet people outside of class,
link |
00:31:01.780
we talk together because we're all in it together.
link |
00:31:05.020
I think that's really important,
link |
00:31:07.580
both in keeping your motivation
link |
00:31:10.140
and also that's where
link |
00:31:11.260
some of the most important learning goes on.
link |
00:31:13.460
So you wanna have that.
link |
00:31:15.380
Maybe, you know, especially now
link |
00:31:17.460
we start getting into higher bandwidths
link |
00:31:19.780
and augmented reality and virtual reality,
link |
00:31:22.580
you might be able to get that
link |
00:31:23.620
without being in the same physical place.
link |
00:31:25.900
Do you think it's possible we'll see a course at Stanford,
link |
00:31:30.740
for example, that for students,
link |
00:31:33.940
enrolled students is only online in the near future
link |
00:31:37.380
or literally sort of it's part of the curriculum
link |
00:31:39.740
and there is no...
link |
00:31:41.180
Yeah, so you're starting to see that.
link |
00:31:42.700
I know Georgia Tech has a master's that's done that way.
link |
00:31:46.660
Oftentimes it's sort of,
link |
00:31:48.380
they're creeping in in terms of a master's program
link |
00:31:50.980
or sort of further education,
link |
00:31:54.300
considering the constraints of students and so on.
link |
00:31:56.620
But I mean, literally, is it possible that we,
link |
00:32:00.780
you know, Stanford, MIT, Berkeley,
link |
00:32:02.740
all these places go online only in the next few decades?
link |
00:32:07.820
Yeah, probably not,
link |
00:32:08.780
because, you know, they've got a big commitment
link |
00:32:11.300
to a physical campus.
link |
00:32:13.300
Sure, so there's a momentum
link |
00:32:16.500
that's both financial and cultural.
link |
00:32:18.300
Right, and then there are certain things
link |
00:32:21.180
that's just hard to do virtually, right?
link |
00:32:25.060
So, you know, we're in a field where,
link |
00:32:29.300
if you have your own computer and your own paper,
link |
00:32:32.660
and so on, you can do the work anywhere.
link |
00:32:36.740
But if you're in a biology lab or something,
link |
00:32:39.380
you know, you don't have all the right stuff at home.
link |
00:32:42.820
Right, so our field, programming,
link |
00:32:45.700
you've also done a lot of programming yourself.
link |
00:32:50.860
In 2001, you wrote a great article about programming
link |
00:32:54.260
called Teach Yourself Programming in 10 Years,
link |
00:32:57.260
sort of response to all the books
link |
00:32:59.300
that say teach yourself programming in 21 days.
link |
00:33:01.500
So if you were giving advice to someone
link |
00:33:02.940
getting into programming today,
link |
00:33:04.780
this is a few years since you've written that article,
link |
00:33:07.220
what's the best way to undertake that journey?
link |
00:33:10.820
I think there's lots of different ways,
link |
00:33:12.300
and I think programming means more things now.
link |
00:33:17.420
And I guess, you know, when I wrote that article,
link |
00:33:20.060
I was thinking more about
link |
00:33:23.180
becoming a professional software engineer,
link |
00:33:25.620
and I thought that's a, you know,
link |
00:33:27.660
sort of a career long field of study.
link |
00:33:31.500
But I think there's lots of things now
link |
00:33:33.340
that people can do where programming is a part
link |
00:33:37.580
of solving what they wanna solve
link |
00:33:40.980
without achieving that professional level status, right?
link |
00:33:44.860
So I'm not gonna be going
link |
00:33:45.780
and writing a million lines of code,
link |
00:33:47.620
but, you know, I'm a biologist or a physicist or something,
link |
00:33:51.620
or even a historian, and I've got some data,
link |
00:33:55.620
and I wanna ask a question of that data.
link |
00:33:58.420
And I think for that, you don't need 10 years, right?
link |
00:34:02.100
So there are many shortcuts
link |
00:34:04.220
to being able to answer those kinds of questions.
link |
00:34:08.460
And, you know, you see today a lot of emphasis
link |
00:34:11.860
on learning to code, teaching kids how to code.
link |
00:34:16.700
I think that's great,
link |
00:34:18.740
but I wish they would change the message a little bit,
link |
00:34:21.700
right, so I think code isn't the main thing.
link |
00:34:24.700
I don't really care if you know the syntax of JavaScript
link |
00:34:28.260
or if you can connect these blocks together
link |
00:34:31.500
in this visual language.
link |
00:34:33.420
But what I do care about is that you can analyze a problem,
link |
00:34:38.220
you can think of a solution, you can carry out,
link |
00:34:43.700
you know, make a model, run that model,
link |
00:34:46.620
test the model, see the results,
link |
00:34:50.980
verify that they're reasonable,
link |
00:34:53.660
ask questions and answer them, right?
link |
00:34:55.660
So it's more modeling and problem solving,
link |
00:34:58.540
and you use coding in order to do that,
link |
00:35:01.860
but it's not just learning coding for its own sake.
link |
00:35:04.300
That's really interesting.
link |
00:35:05.140
So it's actually almost, in many cases,
link |
00:35:08.140
it's learning to work with data,
link |
00:35:10.060
to extract something useful out of data.
link |
00:35:11.980
So when you say problem solving,
link |
00:35:13.660
you really mean taking some kind of,
link |
00:35:15.300
maybe collecting some kind of data set,
link |
00:35:17.700
cleaning it up, and saying something interesting about it,
link |
00:35:20.300
which is useful in all kinds of domains.
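
A minimal sketch of that workflow, using only the Python standard library: read some data, clean it a little, and answer one question of it. The file name and column names here are hypothetical.

    import csv
    from statistics import mean

    def average_by_year(path):
        """Average temperature per year, skipping rows with missing or malformed values."""
        by_year = {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                try:
                    year = int(row["year"])
                    temp = float(row["temperature_c"])
                except (KeyError, ValueError):
                    continue                      # the cleaning step: drop bad rows
                by_year.setdefault(year, []).append(temp)
        return {year: mean(temps) for year, temps in sorted(by_year.items())}

    # print(average_by_year("readings.csv"))      # e.g. {2019: 14.8, 2020: 15.1, ...}
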
link |
00:35:23.020
And, you know, and I see myself being stuck sometimes
link |
00:35:28.100
in kind of the old ways, right?
link |
00:35:30.460
So, you know, I'll be working on a project,
link |
00:35:34.180
maybe with a younger employee, and we say,
link |
00:35:37.740
oh, well, here's this new package
link |
00:35:39.260
that could help solve this problem.
link |
00:35:42.300
And I'll go and I'll start reading the manuals,
link |
00:35:44.500
and, you know, I'll be two hours into reading the manuals,
link |
00:35:48.180
and then my colleague comes back and says, I'm done.
link |
00:35:51.100
You know, I downloaded the package, I installed it,
link |
00:35:53.820
I tried calling some things, the first one didn't work,
link |
00:35:56.500
the second one worked, now I'm done.
link |
00:35:58.740
And I say, but I have a hundred questions
link |
00:36:00.620
about how does this work and how does that work?
link |
00:36:02.100
And they say, who cares, right?
link |
00:36:04.140
I don't need to understand the whole thing.
link |
00:36:05.540
I answered my question, it's a big, complicated package,
link |
00:36:09.180
I don't understand the rest of it,
link |
00:36:10.540
but I got the right answer.
link |
00:36:12.180
And I'm just, it's hard for me to get into that mindset.
link |
00:36:15.900
I want to understand the whole thing.
link |
00:36:17.620
And, you know, if they wrote a manual,
link |
00:36:19.420
I should probably read it.
link |
00:36:21.380
And, but that's not necessarily the right way.
link |
00:36:23.660
I think I have to get used to dealing with more,
link |
00:36:28.580
being more comfortable with uncertainty
link |
00:36:30.500
and not knowing everything.
link |
00:36:32.060
Yeah, so I struggle with the same,
link |
00:36:33.620
I'm sort of at the Don Knuth end of the spectrum.
link |
00:36:37.300
Yeah.
link |
00:36:38.140
It's kind of the very, you know,
link |
00:36:39.620
before he can say anything about a problem,
link |
00:36:42.460
he really has to get down to the machine code assembly.
link |
00:36:45.420
Yeah.
link |
00:36:46.260
And that's versus exactly what you said, there's several students
link |
00:36:50.220
in my group that, you know, are 20 years old,
link |
00:36:53.460
and they can solve almost any problem within a few hours.
link |
00:36:56.820
That would take me probably weeks
link |
00:36:58.260
because I would try to, as you said, read the manual.
link |
00:37:00.980
So do you think the nature of mastery,
link |
00:37:04.380
you're mentioning biology,
link |
00:37:06.820
sort of outside disciplines, applying programming,
link |
00:37:11.300
but computer scientists.
link |
00:37:13.860
So over time, there's higher and higher levels
link |
00:37:16.420
of abstraction available now.
link |
00:37:18.340
So with this week, there's the TensorFlow Summit, right?
link |
00:37:23.700
So if you're not particularly into deep learning,
link |
00:37:27.500
but you're still a computer scientist,
link |
00:37:29.940
you can accomplish an incredible amount with TensorFlow
link |
00:37:33.180
without really knowing any fundamental internals
link |
00:37:35.940
of machine learning.
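
For instance, the high-level Keras API in TensorFlow lets a handful of lines assemble and train a small network on placeholder data without touching any internals (a generic sketch, not tied to anything announced at the summit):

    import numpy as np
    import tensorflow as tf

    x = np.random.rand(256, 20).astype("float32")     # 256 fake examples, 20 features
    y = (x.sum(axis=1) > 10).astype("float32")        # a made-up binary label

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=5, verbose=0)              # backprop, optimizers, etc. handled for you
    print(model.evaluate(x, y, verbose=0))            # [loss, accuracy]
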
link |
00:37:37.460
Do you think the nature of mastery is changing,
link |
00:37:40.860
even for computer scientists,
link |
00:37:42.340
like what it means to be an expert programmer?
link |
00:37:45.660
Yeah, I think that's true.
link |
00:37:47.700
You know, we never really should have focused on programmer,
link |
00:37:51.180
right, because it's still, it's the skill,
link |
00:37:53.660
and what we really want to focus on is the result.
link |
00:37:56.540
So we built this ecosystem
link |
00:37:59.140
where the way you can get stuff done
link |
00:38:01.260
is by programming it yourself.
link |
00:38:04.100
At least when I started, you know,
link |
00:38:06.780
library functions meant you had square root,
link |
00:38:09.020
and that was about it, right?
link |
00:38:10.860
Everything else you built from scratch.
link |
00:38:13.060
And then we built up an ecosystem where a lot of times,
link |
00:38:16.140
well, you can download a lot of stuff
link |
00:38:17.460
that does a big part of what you need.
link |
00:38:20.220
And so now it's more a question of assembly
link |
00:38:23.740
rather than manufacturing.
link |
00:38:28.300
And that's a different way of looking at problems.
link |
00:38:32.220
From another perspective in terms of mastery
link |
00:38:34.260
and looking at programmers or people that reason
link |
00:38:37.660
about problems in a computational way.
link |
00:38:39.780
So Google, you know, from the hiring perspective,
link |
00:38:44.120
from the perspective of hiring
link |
00:38:45.140
or building a team of programmers,
link |
00:38:47.420
how do you determine if someone's a good programmer?
link |
00:38:50.280
Or if somebody, again, so I want to deviate from,
link |
00:38:53.500
I want to move away from the word programmer,
link |
00:38:55.400
but somebody who could solve problems
link |
00:38:57.720
of large scale data and so on.
link |
00:38:59.720
What's, how do you build a team like that
link |
00:39:02.740
through the interviewing process?
link |
00:39:03.980
Yeah, and I think as a company grows,
link |
00:39:08.860
you get more expansive in the types
link |
00:39:12.260
of people you're looking for, right?
link |
00:39:14.460
So I think, you know, in the early days,
link |
00:39:16.580
we'd interview people and the question we were trying
link |
00:39:19.380
to ask is how close are they to Jeff Dean?
link |
00:39:22.820
And most people were pretty far away,
link |
00:39:26.780
but we take the ones that were not that far away.
link |
00:39:29.380
And so we got kind of a homogeneous group
link |
00:39:31.760
of people who were really great programmers.
link |
00:39:34.560
Then as a company grows, you say,
link |
00:39:37.000
well, we don't want everybody to be the same,
link |
00:39:39.100
to have the same skill set.
link |
00:39:40.660
And so now we're hiring biologists in our health areas
link |
00:39:47.380
and we're hiring physicists,
link |
00:39:48.940
we're hiring mechanical engineers,
link |
00:39:51.180
we're hiring, you know, social scientists and ethnographers
link |
00:39:56.080
and people with different backgrounds
link |
00:39:59.140
who bring different skills.
link |
00:40:01.740
So you have mentioned that you still may partake
link |
00:40:06.260
in code reviews, given that you have a wealth of experience,
link |
00:40:10.720
as you've also mentioned.
link |
00:40:13.900
What errors do you often see and tend to highlight
link |
00:40:16.660
in the code of junior developers of people coming up now,
link |
00:40:20.020
given your background from Lisp
link |
00:40:23.460
to a couple of decades of programming?
link |
00:40:26.020
Yeah, that's a great question.
link |
00:40:28.420
You know, sometimes I try to look at the flexibility
link |
00:40:31.920
of the design of, yes, you know, this API solves this problem,
link |
00:40:37.560
but where is it gonna go in the future?
link |
00:40:39.900
Who else is gonna wanna call this?
link |
00:40:41.940
And, you know, are you making it easier for them to do that?
link |
00:40:46.940
That's a matter of design, is it documentation,
link |
00:40:50.640
is it sort of an amorphous thing
link |
00:40:53.880
you can't really put into words?
link |
00:40:55.140
It's just how it feels.
link |
00:40:56.660
If you put yourself in the shoes of a developer,
link |
00:40:58.340
would you use this kind of thing?
link |
00:40:59.540
I think it is how you feel, right?
link |
00:41:01.500
And so yeah, documentation is good,
link |
00:41:03.900
but it's more a design question, right?
link |
00:41:06.460
If you get the design right,
link |
00:41:07.620
then people will figure it out,
link |
00:41:10.220
whether the documentation is good or not.
link |
00:41:12.140
And if the design's wrong, then it'd be harder to use.
link |
00:41:16.180
How have you yourself changed as a programmer over the years?
link |
00:41:22.900
In a way, you already started to say sort of,
link |
00:41:26.660
you want to read the manual,
link |
00:41:28.100
you want to understand the core of the syntax
link |
00:41:30.860
to how the language is supposed to be used and so on.
link |
00:41:33.780
But what's the evolution been like
link |
00:41:36.540
from the 80s, 90s to today?
link |
00:41:40.700
I guess one thing is you don't have to worry
link |
00:41:42.820
about the small details of efficiency
link |
00:41:46.340
as much as you used to, right?
link |
00:41:48.060
So like I remember I did my Lisp book in the 90s,
link |
00:41:53.380
and one of the things I wanted to do was say,
link |
00:41:56.300
here's how you do an object system.
link |
00:41:58.900
And basically, we're going to make it
link |
00:42:01.540
so each object is a hash table,
link |
00:42:03.620
and you look up the methods, and here's how it works.
link |
00:42:05.580
And then I said, of course,
link |
00:42:07.380
the real Common Lisp object system is much more complicated.
link |
00:42:12.220
It's got all these efficiency type issues,
link |
00:42:15.200
and this is just a toy,
link |
00:42:16.620
and nobody would do this in real life.
link |
00:42:18.980
And it turns out Python pretty much did exactly
link |
00:42:22.740
what I said and said objects are just dictionaries.
link |
00:42:27.500
And yeah, they have a few little tricks as well.
link |
00:42:30.140
But mostly, the thing that would have been
link |
00:42:34.260
100 times too slow in the 80s
link |
00:42:36.660
is now plenty fast for most everything.
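
A minimal sketch of the idea being described (not the code from the book): an object is just a dictionary, a class is a dictionary of methods with a parent link, and sending a message is a lookup followed by a call.

    def make_class(parent=None, **methods):
        return {"parent": parent, "methods": methods}

    def make_instance(cls, **fields):
        return {"class": cls, **fields}

    def send(obj, message, *args):
        """Look the method up in the object's class chain, then call it."""
        cls = obj["class"]
        while cls is not None:
            if message in cls["methods"]:
                return cls["methods"][message](obj, *args)
            cls = cls["parent"]
        raise AttributeError(message)

    def deposit(self, amount):
        self["balance"] += amount
        return self["balance"]

    Account = make_class(deposit=deposit, balance=lambda self: self["balance"])
    SavingsAccount = make_class(parent=Account,
                                add_interest=lambda self, rate: send(self, "deposit", self["balance"] * rate))

    acct = make_instance(SavingsAccount, balance=100.0)
    send(acct, "add_interest", 0.05)
    print(send(acct, "balance"))     # 105.0: hash-table lookups all the way down

Python's own objects are richer than this, but as noted, underneath they really are dictionary lookups, and on modern hardware that is plenty fast.
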
link |
00:42:39.200
So you had to, as a programmer,
link |
00:42:40.700
let go of perhaps an obsession
link |
00:42:44.520
that I remember coming up with
link |
00:42:45.920
of trying to write efficient code.
link |
00:42:48.380
Yeah, to say what really matters
link |
00:42:51.340
is the total time it takes to get the project done.
link |
00:42:56.140
And most of that's gonna be the programmer time.
link |
00:42:59.100
So if you're a little bit less efficient,
link |
00:43:00.700
but it makes it easier to understand and modify,
link |
00:43:04.260
then that's the right trade off.
link |
00:43:05.920
So you've written quite a bit about Lisp.
link |
00:43:07.700
Your book on programming is in Lisp.
link |
00:43:10.180
You have a lot of code out there that's in Lisp.
link |
00:43:12.920
So myself and people who don't know what Lisp is
link |
00:43:16.980
should look it up.
link |
00:43:18.060
It's my favorite language for many AI researchers.
link |
00:43:20.820
It is a favorite language.
link |
00:43:22.460
The favorite language they never use these days.
link |
00:43:25.540
So what part of Lisp do you find most beautiful and powerful?
link |
00:43:28.980
So I think the beautiful part is the simplicity
link |
00:43:31.700
that in half a page, you can define the whole language.
link |
00:43:36.340
And other languages don't have that.
link |
00:43:38.460
So you feel like you can hold everything in your head.
link |
00:43:42.780
And then a lot of people say,
link |
00:43:46.980
well, then that's too simple.
link |
00:43:48.740
Here's all these things I wanna do.
link |
00:43:50.420
And my Java or Python or whatever
link |
00:43:54.500
has 100 or 200 or 300 different syntax rules
link |
00:43:58.740
and don't I need all those?
link |
00:44:00.360
And Lisp's answer was, no, we're only gonna give you
link |
00:44:03.860
eight or so syntax rules,
link |
00:44:06.020
but we're gonna allow you to define your own.
link |
00:44:09.020
And so that was a very powerful idea.
link |
00:44:11.340
And I think this idea of saying,
link |
00:44:15.880
I can start with my problem and with my data,
link |
00:44:20.300
and then I can build the language I want for that problem
link |
00:44:24.420
and for that data.
link |
00:44:25.940
And then I can make Lisp define that language.
link |
00:44:28.440
So you're sort of mixing levels and saying,
link |
00:44:32.660
I'm simultaneously a programmer in a language
link |
00:44:36.120
and a language designer.
link |
00:44:38.620
And that allows a better match between your problem
link |
00:44:41.900
and your eventual code.
link |
00:44:43.700
And I think Lisp had done that better than other languages.
link |
00:44:47.500
Yeah, it's a very elegant implementation
link |
00:44:49.460
of functional programming.
link |
00:44:51.300
But why do you think Lisp has not had the mass adoption
link |
00:44:55.220
and success of languages like Python?
link |
00:44:57.260
Is it the parentheses?
link |
00:44:59.300
Is it all the parentheses?
link |
00:45:02.020
Yeah, so I think a couple things.
link |
00:45:05.340
So one was, I think it was designed for a single programmer
link |
00:45:10.220
or a small team and a skilled programmer
link |
00:45:14.940
who had the good taste to say,
link |
00:45:17.140
well, I am doing language design
link |
00:45:19.600
and I have to make good choices.
link |
00:45:21.780
And if you make good choices, that's great.
link |
00:45:23.840
If you make bad choices, you can hurt yourself
link |
00:45:28.100
and it can be hard for other people on the team
link |
00:45:30.300
to understand it.
link |
00:45:31.140
So I think there was a limit to the scale
link |
00:45:34.300
of the size of a project in terms of number of people
link |
00:45:37.020
that Lisp was good for.
link |
00:45:38.580
And as an industry, we kind of grew beyond that.
link |
00:45:43.180
I think it is in part the parentheses.
link |
00:45:46.000
You know, one of the jokes is the acronym for Lisp
link |
00:45:49.640
is Lots of Irritating Silly Parentheses.
link |
00:45:53.960
My acronym was Lisp Is Syntactically Pure,
link |
00:45:58.360
saying all you need is parentheses and atoms.
link |
00:46:01.440
But I remember, you know, as we had the AI textbook
link |
00:46:05.200
and because we did it in the nineties,
link |
00:46:08.660
we had pseudocode in the book,
link |
00:46:11.380
but then we said, well, we'll have Lisp online
link |
00:46:13.360
because that's the language of AI at the time.
link |
00:46:16.200
And I remember some of the students complaining
link |
00:46:18.280
because they hadn't had Lisp before
link |
00:46:20.020
and they didn't quite understand what was going on.
link |
00:46:22.080
And I remember one student complained,
link |
00:46:24.820
I don't understand how this pseudocode
link |
00:46:26.600
corresponds to this Lisp.
link |
00:46:29.160
And there was a one to one correspondence
link |
00:46:31.480
between the symbols in the code and the pseudocode.
link |
00:46:35.760
And the only difference was the parentheses.
link |
00:46:39.160
So I said, it must be that for some people,
link |
00:46:41.240
a certain number of left parentheses shuts off their brain.
link |
00:46:45.040
Yeah, it's very possible in that sense
link |
00:46:47.160
and Python just goes the other way.
link |
00:46:49.520
So that was the point at which I said,
link |
00:46:51.100
okay, can't have only Lisp as a language.
link |
00:46:54.300
Because, you know,
link |
00:46:56.640
you only got 10 or 12 or 15 weeks or whatever it is
link |
00:46:59.160
to teach AI and I don't want to waste two weeks
link |
00:47:01.400
of that teaching Lisp.
link |
00:47:03.000
So I say, I gotta have another language.
link |
00:47:04.440
Java was the most popular language at the time.
link |
00:47:06.920
I started doing that.
link |
00:47:08.240
And then I said, it's really hard to have a one to one
link |
00:47:12.080
correspondence between the pseudocode and the Java
link |
00:47:14.480
because Java is so verbose.
link |
00:47:16.980
So then I said, I'm gonna do a survey
link |
00:47:18.920
and find the language that's most like my pseudocode.
link |
00:47:22.920
And it turned out Python basically was my pseudocode.
link |
00:47:26.240
Somehow I had channeled Guido and
link |
00:47:30.360
designed a pseudocode that was the same as Python,
link |
00:47:32.680
although I hadn't heard of Python at that point.
link |
00:47:36.160
And from then on, that's what I've been using
link |
00:47:38.320
cause it's been a good match.
link |
00:47:41.220
So what's the story in Python behind PyTudes?
link |
00:47:45.680
Your GitHub repository with puzzles and exercises
link |
00:47:48.360
in Python is pretty fun.
link |
00:47:49.760
Yeah, it just seems like fun, you know,
link |
00:47:53.160
I like doing puzzles and I like being an educator.
link |
00:47:57.480
I did a class with Udacity, Udacity 212, I think it was.
link |
00:48:02.200
It was basically problem solving using Python
link |
00:48:07.320
and looking at different problems.
link |
00:48:08.960
Does PyTudes feed that class in terms of the exercises?
link |
00:48:11.920
I was wondering what the...
link |
00:48:12.760
Yeah, so the class came first.
link |
00:48:15.040
Some of the stuff that's in PyTudes was write-ups
link |
00:48:17.640
of what was in the class and then some of it
link |
00:48:19.240
was just continuing to work on new problems.
link |
00:48:24.240
So what's the organizing madness of PyTudes?
link |
00:48:26.840
Is it just a collection of cool exercises?
link |
00:48:30.080
Just whatever I thought was fun.
link |
00:48:31.320
Okay, awesome.
link |
00:48:32.800
So you were the director of search quality at Google
link |
00:48:35.880
from 2001 to 2005 in the early days
link |
00:48:40.560
when there were just a few employees
link |
00:48:41.840
and when the company was growing like crazy, right?
link |
00:48:46.400
So, I mean, Google revolutionized the way we discover,
link |
00:48:52.040
share and aggregate knowledge.
link |
00:48:55.360
So this is one of the fundamental aspects
link |
00:49:00.280
of civilization, right? Information being shared.
link |
00:49:03.160
There have been different mechanisms throughout history,
link |
00:49:04.920
but Google has just 10x improved that, right?
link |
00:49:08.360
And you're a part of that, right?
link |
00:49:10.240
People discovering that information.
link |
00:49:11.880
So what were some of the challenges on a philosophical
link |
00:49:15.240
or the technical level in those early days?
link |
00:49:18.360
It definitely was an exciting time
link |
00:49:20.080
and as you say, we were doubling in size every year
link |
00:49:24.560
and the challenges were we wanted
link |
00:49:26.920
to get the right answers, right?
link |
00:49:29.040
And we had to figure out what that meant.
link |
00:49:32.520
We had to implement that and we had to make it all efficient
link |
00:49:36.360
and we had to keep on testing
link |
00:49:41.600
and seeing if we were delivering good answers.
link |
00:49:44.120
And now when you say good answers,
link |
00:49:45.640
it means whatever people are typing in
link |
00:49:47.760
in terms of keywords and that kind of thing,
link |
00:49:50.320
the results they get are ordered
link |
00:49:53.640
by how desirable those results are for them.
link |
00:49:56.520
Like, the first thing they click on
link |
00:49:58.560
will likely be the thing that they were actually looking for.
link |
00:50:01.520
Right, some of the metrics we had
link |
00:50:03.160
were focused on the first result.
link |
00:50:05.040
Some were focused on the whole page.
link |
00:50:07.560
Some were focused on the top three or so.
link |
00:50:11.800
So we looked at a lot of different metrics
link |
00:50:13.440
for how well we were doing
link |
00:50:15.720
and we broke it down into subclasses of,
link |
00:50:19.280
maybe here's a type of query that we're not doing well on
link |
00:50:23.520
and we try to fix that.
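As a rough illustration of the kind of metrics being described, and using hypothetical relevance judgments rather than anything Google actually used, reciprocal rank of the first relevant result and precision over the top three could be computed like this in Python:

    # Sketch of two common ranking metrics, assuming we have human
    # relevance judgments for each query's ranked results.

    def reciprocal_rank(relevant, ranked_results):
        """1/position of the first relevant result (0 if none found)."""
        for position, doc in enumerate(ranked_results, start=1):
            if doc in relevant:
                return 1.0 / position
        return 0.0

    def precision_at_k(relevant, ranked_results, k=3):
        """Fraction of the top-k results that are relevant."""
        top_k = ranked_results[:k]
        return sum(doc in relevant for doc in top_k) / k

    # Hypothetical judged query: docs "a" and "d" are the good answers.
    relevant = {"a", "d"}
    ranked = ["b", "a", "c", "d"]
    print(reciprocal_rank(relevant, ranked))    # 0.5
    print(precision_at_k(relevant, ranked, 3))  # 0.333...

Averaging numbers like these over many judged queries, and then slicing by query type, is one way to see which classes of queries are doing poorly.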
link |
00:50:25.520
Early on we started to realize that we were in an adversarial
link |
00:50:29.400
position, right, so we started thinking,
link |
00:50:32.760
well, we're kind of like the card catalog in the library,
link |
00:50:35.960
right, so the books are here and we're off to the side
link |
00:50:39.480
and we're just reflecting what's there.
link |
00:50:42.640
And then we realized every time we make a change,
link |
00:50:45.600
the webmasters make a change and it's game theoretic.
link |
00:50:50.040
And so we had to think not only of is this the right move
link |
00:50:54.440
for us to make now, but also if we make this move,
link |
00:50:57.760
what's the counter move gonna be?
link |
00:50:59.800
Is that gonna get us into a worse place,
link |
00:51:02.240
in which case we won't make that move,
link |
00:51:03.720
we'll make a different move.
link |
00:51:05.520
And did you find, I mean, I assume with the popularity
link |
00:51:08.160
and the growth of the internet
link |
00:51:09.440
that people were creating new content,
link |
00:51:11.520
so you're almost helping guide the creation of new content.
link |
00:51:14.240
Yeah, so that's certainly true, right,
link |
00:51:15.800
so we definitely changed the structure of the network.
link |
00:51:20.800
So if you think back in the very early days,
link |
00:51:24.520
Larry and Sergey had the PageRank paper
link |
00:51:28.320
and Jon Kleinberg had this hubs and authorities model,
link |
00:51:33.240
which says the web is made out of these hubs,
link |
00:51:38.480
which would be my page of cool links about dogs or whatever,
link |
00:51:44.480
and people would just list links.
link |
00:51:46.880
And then there'd be authorities,
link |
00:51:47.960
which were the page about dogs that most people linked to.
link |
00:51:53.080
That doesn't happen anymore.
link |
00:51:54.240
People don't bother to say my page of cool links,
link |
00:51:57.800
because we took over that function, right,
link |
00:52:00.080
so we changed the way that worked.
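For readers who want a feel for the hubs-and-authorities idea, here is a simplified sketch of the core HITS-style iteration on a made-up link graph; the page names are hypothetical, and the algorithm as Kleinberg published it has more detail than this:

    # Good hubs point to good authorities; good authorities are
    # pointed to by good hubs. Iterate the two scores until they settle.

    import math

    links = {                          # hypothetical pages and their outlinks
        "my_cool_dog_links": ["dogs_101", "all_about_dogs"],
        "another_hub":       ["dogs_101"],
        "all_about_dogs":    ["dogs_101"],
        "dogs_101":          [],
    }

    pages = list(links)
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}

    for _ in range(20):
        auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
        hub = {p: sum(auth[q] for q in links[p]) for p in pages}
        for scores in (auth, hub):     # normalize so values stay bounded
            norm = math.sqrt(sum(v * v for v in scores.values())) or 1.0
            for p in scores:
                scores[p] /= norm

    print(max(auth, key=auth.get))     # dogs_101: the authority about dogs
    print(max(hub, key=hub.get))       # my_cool_dog_links: the link hub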
link |
00:52:03.360
Did you imagine back then that the internet
link |
00:52:05.680
would be as massively vibrant as it is today?
link |
00:52:08.840
I mean, it was already growing quickly,
link |
00:52:10.320
but it's just, I don't know if you've ever,
link |
00:52:14.800
today, if you sit back and just look at the internet
link |
00:52:18.000
with wonder at the amount of content
link |
00:52:20.520
that's just constantly being created,
link |
00:52:22.000
constantly being shared and deployed.
link |
00:52:24.200
Yeah, it's always been surprising to me.
link |
00:52:27.400
I guess I'm not very good at predicting the future.
link |
00:52:31.200
And I remember being a graduate student in 1980 or so,
link |
00:52:35.720
and we had the ARPANET,
link |
00:52:39.480
and then there was this proposal to commercialize it,
link |
00:52:44.480
and have this internet, and this crazy Senator Gore
link |
00:52:49.520
thought that might be a good idea.
link |
00:52:51.280
And I remember thinking, oh, come on,
link |
00:52:53.040
you can't expect a commercial company
link |
00:52:55.840
to understand this technology.
link |
00:52:58.360
They'll never be able to do it.
link |
00:52:59.360
Yeah, okay, we can have this .com domain,
link |
00:53:01.560
but it won't go anywhere.
link |
00:53:03.360
So I was wrong, Al Gore was right.
link |
00:53:05.560
At the same time, the nature of what it means
link |
00:53:07.920
to be a commercial company has changed, too.
link |
00:53:09.880
So Google, in many ways, at its founding
link |
00:53:12.720
is different than what companies were before, I think.
link |
00:53:16.840
Right, so there's all these business models
link |
00:53:19.760
that are so different than what was possible back then.
link |
00:53:23.080
So in terms of predicting the future,
link |
00:53:25.000
what do you think it takes to build a system
link |
00:53:27.280
that approaches human level intelligence?
link |
00:53:29.960
You've talked about, of course,
link |
00:53:31.780
that we shouldn't be so obsessed
link |
00:53:34.160
about creating human level intelligence.
link |
00:53:36.360
We just create systems that are very useful for humans.
link |
00:53:39.320
But what do you think it takes
link |
00:53:40.800
to approach that level?
link |
00:53:44.960
Right, so certainly I don't think
link |
00:53:47.400
human level intelligence is one thing, right?
link |
00:53:49.880
So I think there's lots of different tasks,
link |
00:53:51.680
lots of different capabilities.
link |
00:53:54.080
I also don't think that should be the goal, right?
link |
00:53:56.760
So I wouldn't wanna create a calculator
link |
00:54:01.640
that could do multiplication at human level, right?
link |
00:54:04.320
That would be a step backwards.
link |
00:54:06.020
And so for many things,
link |
00:54:07.520
we should be aiming far beyond human level.
link |
00:54:09.600
For other things,
link |
00:54:12.280
maybe human level is a good level to aim at.
link |
00:54:15.320
And for others, we'd say,
link |
00:54:16.900
well, let's not bother doing this
link |
00:54:18.080
because we already have humans who can take on those tasks.
link |
00:54:21.980
So as you say, I like to focus on what's a useful tool.
link |
00:54:26.380
And in some cases, being at human level
link |
00:54:30.480
is an important part of crossing that threshold
link |
00:54:32.880
to make the tool useful.
link |
00:54:34.560
So we see in things like these personal assistants now
link |
00:54:39.400
that you get either on your phone
link |
00:54:41.080
or on a speaker that sits on the table,
link |
00:54:44.600
you wanna be able to have a conversation with those.
link |
00:54:47.440
And I think as an industry,
link |
00:54:49.880
we haven't quite figured out what the right model is
link |
00:54:51.880
for what these things can do.
link |
00:54:55.040
And we're aiming towards,
link |
00:54:56.280
well, you just have a conversation with them
link |
00:54:57.960
the way you can with a person.
link |
00:55:00.280
But we haven't delivered on that model yet, right?
link |
00:55:02.960
So you can ask it, what's the weather?
link |
00:55:04.960
You can ask it, play some nice songs.
link |
00:55:08.380
And five or six other things,
link |
00:55:11.660
and then you run out of stuff that it can do.
link |
00:55:14.020
In terms of a deep, meaningful connection.
link |
00:55:16.380
So you've mentioned the movie Her
link |
00:55:18.020
as one of your favorite AI movies.
link |
00:55:20.260
Do you think it's possible for a human being
link |
00:55:22.020
to fall in love with an AI assistant, as you mentioned?
link |
00:55:25.760
So taking this big leap from what's the weather
link |
00:55:28.900
to having a deep connection.
link |
00:55:31.300
Yeah, I think as people, that's what we love to do.
link |
00:55:35.900
And I was at a showing of Her
link |
00:55:39.420
where we had a panel discussion and somebody asked me,
link |
00:55:43.580
what other movie do you think Her is similar to?
link |
00:55:46.940
And my answer was Life of Brian,
link |
00:55:50.340
which is not a science fiction movie,
link |
00:55:53.580
but both movies are about wanting to believe
link |
00:55:57.260
in something that's not necessarily real.
link |
00:56:00.660
Yeah, by the way, for people that don't know,
link |
00:56:01.860
it's Monty Python.
link |
00:56:03.000
Yeah, that's brilliantly put.
link |
00:56:05.100
Right, so I think that's just the way we are.
link |
00:56:07.580
We want to trust, we want to believe,
link |
00:56:11.060
we want to fall in love,
link |
00:56:12.500
and it doesn't necessarily take that much, right?
link |
00:56:15.980
So my kids fell in love with their teddy bear,
link |
00:56:20.760
and the teddy bear was not very interactive.
link |
00:56:23.400
So that's all us pushing our feelings
link |
00:56:26.820
onto our devices and our things,
link |
00:56:29.700
and I think that that's what we like to do,
link |
00:56:31.900
so we'll continue to do that.
link |
00:56:33.340
So yeah, as human beings, we long for that connection,
link |
00:56:36.260
and AI just has to do a little bit of work
link |
00:56:39.620
to catch us at the other end.
link |
00:56:41.900
Yeah, and certainly, if you can get to dog level,
link |
00:56:46.180
a lot of people have invested a lot of love in their pets.
link |
00:56:49.500
In their pets.
link |
00:56:50.340
Some people, as I've been told,
link |
00:56:52.980
in working with autonomous vehicles,
link |
00:56:54.460
have invested a lot of love into their inanimate cars,
link |
00:56:58.300
so it really doesn't take much.
link |
00:57:00.920
So, to linger on a topic
link |
00:57:05.260
that may be silly or a little bit philosophical:
link |
00:57:07.900
What is a good test of intelligence in your view?
link |
00:57:12.220
Is natural conversation like in the Turing test
link |
00:57:14.460
a good test?
link |
00:57:16.500
Put another way, what would impress you
link |
00:57:20.000
if you saw a computer do it these days?
link |
00:57:22.740
Yeah, I mean, I get impressed all the time.
link |
00:57:24.460
Go playing, StarCraft playing, those are all pretty cool.
link |
00:57:35.220
And I think, sure, conversation is important.
link |
00:57:39.820
I think we sometimes have these tests
link |
00:57:44.780
where it's easy to fool the system, where
link |
00:57:46.980
you can have a chat bot that can have a conversation,
link |
00:57:51.340
but it never gets into a situation
link |
00:57:54.500
where it has to be deep enough that it really reveals itself
link |
00:57:58.660
as being intelligent or not.
link |
00:58:00.940
I think Turing suggested that, but I think if he were alive,
link |
00:58:07.620
he'd say, you know, I didn't really mean that seriously.
link |
00:58:11.580
And I think, this is just my opinion,
link |
00:58:15.100
but I think Turing's point was not
link |
00:58:17.820
that this test of conversation is a good test.
link |
00:58:21.460
I think his point was having a test is the right thing.
link |
00:58:25.340
So rather than having the philosophers say, oh, no,
link |
00:58:28.620
AI is impossible, you should say, well,
link |
00:58:31.180
we'll just have a test, and then the result of that
link |
00:58:33.420
will tell us the answer.
link |
00:58:34.620
And it doesn't necessarily have to be a conversation test.
link |
00:58:37.220
That's right.
link |
00:58:37.740
And coming up with a new, better test as the technology evolves
link |
00:58:40.220
is probably the right way.
link |
00:58:42.140
Do you worry, as a lot of the general public does,
link |
00:58:46.580
well, not a lot, but some vocal part of it does,
link |
00:58:51.020
about the existential threat of artificial intelligence?
link |
00:58:53.580
So looking farther into the future, as you said,
link |
00:58:56.940
most of us are not able to predict much.
link |
00:58:59.020
So when shrouded in such mystery, there's a concern of,
link |
00:59:02.460
well, you start thinking about worst case.
link |
00:59:05.020
Is that something that occupies your mind space much?
link |
00:59:09.060
So I certainly think about threats.
link |
00:59:11.420
I think about dangers.
link |
00:59:13.860
And I think any new technology has positives and negatives.
link |
00:59:19.820
And if it's a powerful technology,
link |
00:59:21.460
it can be used for bad as well as for good.
link |
00:59:24.700
So I'm certainly not worried about the robot
link |
00:59:27.820
apocalypse and the Terminator type scenarios.
link |
00:59:32.540
I am worried about change in employment.
link |
00:59:37.620
And are we going to be able to react fast enough
link |
00:59:41.020
to deal with that?
link |
00:59:41.900
I think we're already seeing it today, where
link |
00:59:44.380
a lot of people are disgruntled about the way
link |
00:59:48.420
income inequality is working.
link |
00:59:50.180
And automation could help accelerate
link |
00:59:53.300
those kinds of problems.
link |
00:59:55.500
I see powerful technologies can always be used as weapons,
link |
00:59:59.980
whether they're robots or drones or whatever.
link |
01:00:03.380
Some of that we're seeing due to AI.
link |
01:00:06.180
A lot of it, you don't need AI.
link |
01:00:09.420
And I don't know what's a worse threat,
link |
01:00:12.500
if it's an autonomous drone or it's CRISPR technology
link |
01:00:17.660
becoming available.
link |
01:00:18.860
We have lots of threats to face.
link |
01:00:21.340
And some of them involve AI, and some of them don't.
link |
01:00:24.660
So the threats that technology presents,
link |
01:00:27.220
are you, for the most part, optimistic about technology
link |
01:00:31.020
also alleviating those threats or creating new opportunities
link |
01:00:34.340
or protecting us from the more detrimental effects
link |
01:00:38.300
of these new technologies?
link |
01:00:38.820
I don't know.
link |
01:00:39.780
Again, it's hard to predict the future.
link |
01:00:41.420
And as a society so far, we've survived
link |
01:00:47.580
nuclear bombs and other things.
link |
01:00:50.780
Of course, only societies that have survived
link |
01:00:53.660
are having this conversation.
link |
01:00:54.780
So maybe that's survivorship bias there.
link |
01:00:59.260
What problem stands out to you as exciting, challenging,
link |
01:01:02.780
impactful to work on in the near future for yourself,
link |
01:01:06.540
for the community, and broadly?
link |
01:01:09.340
So we talked about these assistants and conversation.
link |
01:01:13.060
I think that's a great area.
link |
01:01:14.980
I think combining common sense reasoning
link |
01:01:20.980
with the power of data is a great area.
link |
01:01:26.420
In which application?
link |
01:01:27.300
In conversation, or just broadly speaking?
link |
01:01:29.340
Just in general, yeah.
link |
01:01:31.300
As a programmer, I'm interested in programming tools,
link |
01:01:35.500
both in terms of the current systems
link |
01:01:38.980
we have today with TensorFlow and so on.
link |
01:01:41.660
Can we make them much easier to use
link |
01:01:43.460
for a broader class of people?
link |
01:01:45.980
And also, can we apply machine learning
link |
01:01:49.340
to the more traditional type of programming?
link |
01:01:52.380
So when you go to Google and you type in a query
link |
01:01:57.460
and you spell something wrong, it says, did you mean?
link |
01:02:00.300
And the reason we're able to do that
link |
01:02:01.900
is because lots of other people made a similar error,
link |
01:02:04.460
and then they corrected it.
link |
01:02:06.540
We should be able to go into our code bases and our bug fix
link |
01:02:10.140
bases.
link |
01:02:10.820
And when I type a line of code, it should be able to say,
link |
01:02:13.940
did you mean such and such?
link |
01:02:15.180
If you type this today, you're probably going to type
link |
01:02:17.780
in this bug fix tomorrow.
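As a toy illustration of that "did you mean" idea, and not Google's spelling correction or any real code-assistance system, one simple approach is to suggest the most common previously seen line that is close enough to what was just typed:

    # Toy "did you mean?" sketch: suggest a frequently seen line that is
    # very similar to the line the programmer just typed.

    from collections import Counter
    from difflib import SequenceMatcher

    # Hypothetical history of lines other programmers ended up writing.
    history = Counter([
        "for i in range(len(items)):",
        "for i in range(len(items)):",
        "if x is None:",
        "return result",
    ])

    def did_you_mean(line, history, threshold=0.8):
        """Suggest the closest previously seen line, or None."""
        best, best_score = None, threshold
        # Scan from most to least common, so frequent fixes win near-ties.
        for candidate, _ in history.most_common():
            score = SequenceMatcher(None, line, candidate).ratio()
            if score > best_score and candidate != line:
                best, best_score = candidate, score
        return best

    print(did_you_mean("for i in rnage(len(items)):", history))
    # -> "for i in range(len(items)):"

A real system would mine a large corpus of code and bug-fix histories rather than a hand-written list, but the basic move, matching what you typed against what others typed and then corrected, is the same.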
link |
01:02:20.540
Yeah, that's a really exciting application
link |
01:02:22.620
of almost an assistant for the coding and programming experience
link |
01:02:27.660
at every level.
link |
01:02:29.420
So I think I could safely speak for the entire AI community,
link |
01:02:35.260
first of all, in thanking you for the amazing work you've
link |
01:02:37.900
done, certainly for the amazing work you've done
link |
01:02:40.620
with the AI: A Modern Approach book.
link |
01:02:43.380
I think we're all looking forward very much
link |
01:02:45.260
for the fourth edition, and then the fifth edition, and so on.
link |
01:02:48.500
So Peter, thank you so much for talking today.
link |
01:02:51.380
Yeah, thank you.
link |
01:02:51.980
My pleasure.