Michael I. Jordan: Machine Learning, Recommender Systems, and Future of AI | Lex Fridman Podcast #74


00:00:00.000
The following is a conversation with Michael I. Jordan, a professor at Berkeley and one of the most influential people in the history of machine learning, statistics, and artificial intelligence. He has been cited over 170,000 times, and he has mentored many of the world-class researchers defining the field of AI today, including Andrew Ng, Zoubin Ghahramani, Ben Taskar, and Yoshua Bengio. All this, to me, is as impressive as the over 32,000 points and the six NBA championships of the Michael J. Jordan of basketball fame. There's a nonzero probability that I talked to the other Michael Jordan, given my connection to and love of the Chicago Bulls of the 90s, but if I had to pick one, I'm going with the Michael Jordan of statistics and computer science, or, as Yann LeCun calls him, the Miles Davis of machine learning.

00:00:56.080
In his blog post titled "Artificial Intelligence: The Revolution Hasn't Happened Yet," Michael argues for broadening the scope of the artificial intelligence field. In many ways, the underlying spirit of this podcast is the same: to see artificial intelligence as a deeply human endeavor, to not only engineer algorithms and robots, but to understand and empower human beings at all levels of abstraction, from the individual to our civilization as a whole.

00:01:26.800
This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience.

00:01:54.000
This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of fractional orders is, to me, an algorithmic marvel. Big props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up to the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier. So once again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, one of my favorite organizations that is helping to advance robotics and STEM education for young people around the world.

00:02:57.120
And now, here's my conversation with Michael I. Jordan.

00:03:02.760
Given that you're one of the greats in the field of AI, machine learning, computer science, and so on, you're trivially called the Michael Jordan of machine learning, although, as you know, you were born first, so technically MJ is the Michael I. Jordan of basketball. But anyway, my favorite is Yann LeCun calling you the Miles Davis of machine learning, because, as he says, you reinvent yourself periodically and sometimes leave fans scratching their heads after you change direction. So can you first put your historian hat on and give a history of computer science and AI as you saw it, as you experienced it, including the four generations of AI successes that I've seen you talk about?

00:03:47.800
Sure. Yeah, first of all, I much prefer Yann's metaphor. Miles Davis was a real explorer in jazz, and he had a coherent story. So I think I have one, but it's not just the one you lived; it's the one you think about later. What the historian does is look back and revisit. I think what's happening right now is not AI. That was an intellectual aspiration that's still alive today as an aspiration. But I think this is akin to the development of chemical engineering from chemistry, or electrical engineering from electromagnetism. So if you go back to the 30s or 40s, there wasn't yet chemical engineering. There was chemistry, there was fluid flow, there was mechanics, and so on. But people pretty clearly viewed it as an interesting goal to try to build factories that make chemical products, and to do it viably, safely, make good ones, do it at scale. So people started to try to do that, of course, and some factories worked, some didn't, some were not viable, some exploded; but in parallel there developed a whole field called chemical engineering. Chemical engineering is a field, no bones about it: it has theoretical aspects to it, and it has practical aspects. It's not just engineering, quote unquote; it's the real thing, and real concepts are needed. Same thing with electrical engineering. There were Maxwell's equations, which in some sense were everything you know about electromagnetism, but you needed to figure out how to build circuits, how to build modules, how to put them together, how to bring electricity from one point to another safely, and so on and so forth. So a whole field developed called electrical engineering. I think that's what's happening right now: we have a proto-field, which is statistics, more the theoretical side of it, and the algorithmic side of computer science. That was enough to start to build things, but what things? Systems that bring value to human beings, and use human data, and mix in human decisions. The engineering side of that is all ad hoc. That's what's emerging. In fact, if you want to call machine learning a field, I think that's what it is: a proto-form of engineering based on statistical and computational ideas of previous generations.

00:05:56.600
But do you think there's something deeper about AI, in its dreams and aspirations, as compared to chemical engineering and electrical engineering?

00:06:03.840
Well, the dreams and aspirations, maybe, but those are 500 years from now. I think that's like the Greeks sitting there and saying it would be neat to get to the moon someday. I think we have no clue how the brain does computation. We're just clueless. We're even worse than the Greeks on most anything interesting scientifically of our era.

00:06:23.600
Can you linger on that just for a moment? Because you stand, not completely unique, but a little bit unique, in the clarity of that. Can you elaborate your intuition of where we stand in our understanding of the human brain? A lot of scientists say we're not very far in understanding the human brain, but you're saying we're really in the dark here.

00:06:44.560
Well, I know I'm not unique, and not even in the clarity. But if you talk to real neuroscientists that really study real synapses or real neurons, they agree. It's a hundreds-of-years task, and they're building it up slowly and surely. What the signal is there is not clear. We have all of our metaphors: we think it's electrical, maybe it's chemical, it's a whole soup. It's ions and proteins, and it's a cell. And that's even around a single synapse. If you look at an electron micrograph of a single synapse, it's a city of its own. And that's one little thing on a dendritic tree, which is an extremely complicated electrochemical thing. And it's doing these spikes, and voltages are flying around, and then proteins are taking that and taking it down into the DNA, and who knows what. So it is the problem of the next few centuries. It is fantastic. But we have our metaphors about it. Is it an economic device? Is it like the immune system? Or is it like a layered set of arithmetic computations? We have all these metaphors, and they're fun. But that's not real science per se. There is neuroscience, but that's not neuroscience. That's like the Greeks speculating about how to get to the moon. Fun, right? And I like to say this fairly strongly, because I think a lot of young people think we're on the verge, because a lot of people who don't talk about it clearly let it be understood that, yes, this is brain-inspired, we're kind of close, breakthroughs are on the horizon. And that's sometimes people who need money for their labs. I'm not saying they're unscrupulous, but people will oversell: I need money for my lab, I'm studying computational neuroscience, I'm going to oversell it. And so there's been too much of that.

00:08:25.200
So I'll step into the gray area between metaphor and engineering with, I'm not sure if you're familiar with, brain-computer interfaces. A company like Elon Musk's Neuralink is working on putting electrodes into the brain and trying to be able to both read and send electrical signals. Just as you said, even the basic mechanism of communication in the brain is not something we understand. But do you hope that, without understanding the fundamental principles of how the brain works, we'll be able to do something interesting in that gray area of metaphor?

00:09:06.600
It's not my area, so I hope, in the sense that anybody else hopes, for some interesting things to happen from research. I would expect more that something like Alzheimer's will get figured out from modern neuroscience. There's a lot of human suffering based on brain disease, and we throw things like lithium at the brain; it kind of works, and no one has a clue why. That's not quite true, but mostly we don't know. And that's even just about the biochemistry of the brain and how it leads to mood swings and so on. How thought emerges from that, we're really, really completely dim on. So you might want to hook up electrodes and try to do some signal processing on that and try to find patterns; fine, by all means, go for it. It's just not scientific at this point. It's kind of like sitting in a satellite and watching the emissions from a city and trying to infer things about the microeconomy, even though you don't have microeconomic concepts. It's really that kind of thing. And so, yes, can you find some signals that do something interesting or useful? Can you control a cursor or a mouse with your brain? Yeah, absolutely. And then I can imagine business models based on that, and even medical applications of that. But from there to understanding algorithms that allow us to really tie in deeply from the brain to a computer: no, I don't agree with Elon Musk. I don't think that's even for our generation; it's not even for the century.

00:10:26.580
So, just in hopes of getting you to dream, and since you've mentioned Kolmogorov, and Turing might pop up: do you think there might be breakthroughs that will get you to sit back in five, ten years and say, wow?

00:10:43.160
Oh, I'm sure there will be, but I don't think there'll be demos that impress me. I don't think that having a computer call a restaurant and pretend to be a human is a breakthrough. And some people present it as such: it's imitating human intelligence. It's even putting coughs in the thing; it's a bit of a PR stunt. And so, fine, the world runs on those things too, and I don't want to diminish all the hard work and engineering that goes behind things like that, and the ultimate value to the human race. But that's not scientific understanding. And I know the people that work on these things: they are after scientific understanding. In the meantime, the trains have got to run, and they've got mouths to feed, and they've got things to do, and there's nothing wrong with all that. I would call that, though, just engineering. And I want to distinguish that from an engineering field, like electrical engineering and chemical engineering as they originally emerged, that had real principles, where you really knew what you were doing, and you had at least some scientific understanding, maybe not even complete. So it became more predictable, and it really gave value to human life because it was understood. And so we don't want to muddle too much these waters of what we're able to do versus what we really can't do, in a way that's going to impress the next generation. So I don't need to be wowed, but I think that for someone who comes along in 20 years, a younger person who's absorbed all the technology, to be wowed, I think they have to be more deeply impressed. A young Kolmogorov would not be wowed by some of the stunts that you see right now coming from the big companies.

00:12:14.020
The demos. But, and give this question a chance: do you think the breakthroughs that would wow a Kolmogorov will be in the arena of scientific, fundamental principles, or do you think it's possible to have fundamental breakthroughs in engineering? Meaning, I would say, some of the things that Elon Musk is working on with SpaceX, and then others, sort of trying to revolutionize the fundamentals of engineering, of manufacturing, of saying: here's a problem we know how to do a demo of, and actually taking it to scale.

00:12:44.480
Yeah. So there are going to be all kinds of breakthroughs. I just don't like that terminology. I'm a scientist, and I work on things day in and day out, and things move along, and eventually you say, wow, something happened; but I don't like that language very much. Also, I don't like to prize theoretical breakthroughs over practical ones. I tend to be more of a theoretician, and I think there's lots to do in that arena right now. And so I wouldn't point to the Kolmogorovs; I might point to the Edisons of the era, and maybe Musk is a bit more like that. But, you know, Musk, God bless him, also will say things about AI that he knows very little about, and he leads people astray when he talks about things he doesn't know anything about. Trying to program a computer to understand natural language, to be involved in a dialogue like the one we're having right now: that ain't going to happen in our lifetime. You could fake it, you can mimic, sort of take old sentences that humans use and retread them, but the deep understanding of language, no, it's not going to happen. And so from that, I hope you can perceive that the deeper kinds of aspects of intelligence are not going to happen either. Now, will there be breakthroughs? I think Google was a breakthrough, I think Amazon is a breakthrough, I think Uber is a breakthrough: things that bring value to human beings at scale in brand-new ways based on data flows and so on. A lot of these things are slightly broken, because there's not an engineering field that takes economic value, in the context of data at planetary scale, and worries about all the externalities, the privacy. We don't have that field, so we don't think these things through very well. I see that as emerging, and, looking back from 100 years out, that will be seen as the breakthrough of this era, just like electrical engineering was a breakthrough in the early part of the last century, and chemical engineering was a breakthrough.

00:14:24.560
So the scale, the markets that you talk about, and we'll get to, will be seen as the breakthrough, and we're in the very early days of really doing interesting stuff there. We'll get to that, but just taking a quick step back: can you, kind of, throw off the historian hat? I mean, you briefly said that the history of AI kind of mimics the history of chemical engineering, but...

00:14:49.240
I keep saying machine learning. You keep wanting to say AI; just to let you know, I resist that. I don't think this is about AI. AI really was John McCarthy, as almost a philosopher, saying: wouldn't it be cool if we could put thought in a computer, if we could mimic the human capability to think, or put intelligence, in some sense, into a computer? That's an interesting philosophical question, and he wanted to make it more than philosophy. He wanted to actually write down logical formulas and algorithms that would do that. And that is a perfectly valid, reasonable thing to do. That's not what's happening in this era.

00:15:23.080
So the reason I keep saying AI, actually, and I'd love to hear what you think about it: machine learning has a very particular set of methods and tools. Maybe your version of it doesn't; it's very, very open. It does optimization, it does sampling, it does...

00:15:40.160
So, systems that learn is what machine learning is.

00:15:42.920
Systems that learn and make decisions.

00:15:44.560
And make decisions. So it's not just pattern recognition and finding patterns; it's all about making decisions in real worlds and having closed feedback loops.

00:15:52.560
So something like symbolic AI, expert systems, reasoning systems, knowledge-based representation, all of those kinds of things, search: does that fit into what you think of as machine learning?

00:16:04.760
So I don't even like the phrase machine learning. I think that the field you're talking about is all about making large collections of decisions under uncertainty by large collections of entities, right? And there are principles for that, at that scale. You don't have to say the principles are for a single entity that's making decisions, a single agent or a single human. It really immediately goes to the network of decisions.

00:16:22.600
Is there a good word for that, or no?

00:16:24.080
No, there's no good word for any of this. That's kind of part of the problem. So we can continue the conversation using AI for all of that. I just want to kind of raise the flag here that this is not about intelligence; we don't know what intelligence is, real intelligence. We don't know much about abstraction and reasoning at the level of humans; we don't have a clue. We're not trying to build that, because we don't have a clue. Eventually it may emerge. I don't know if there'll be breakthroughs, but eventually we'll start to get glimmers of that. It's not what's happening right now. We're taking data, we're trying to make good decisions based on that, we're trying to do it at scale, we're trying to do it economically viably, we're trying to build markets, we're trying to create value at that scale. And aspects of this will look intelligent. Computers were so dumb before that they will seem more intelligent. We will use that buzzword of intelligence, so we can use it in that sense. So machine learning: you can scope it narrowly as just learning from data and pattern recognition. But when I talk about these topics, and maybe data science is another word you could throw in the mix, it really is important that decisions are part of it: consequential decisions in the real world. Am I going to have a medical operation? Am I going to drive down this street? Things where there's scarcity, things that impact other human beings or other environments, and so on. How do I do that based on data? How do I do that adaptively? How do I use computers to help those kinds of things go forward? Whatever you want to call that.

00:17:45.640
So let's call it AI.

00:17:46.640
Let's agree to call it AI. But let's not say that the goal of that is intelligence; the goal of that is really good working systems, at planetary scale, that we've never seen before.

00:17:56.640
So, reclaim the word AI from the Dartmouth conference of many decades ago, from the dream of humans?

00:18:01.800
I don't want to reclaim it. I want a new word. I think it was a bad choice. I mean, if you read one of my little things, the history was basically that McCarthy needed a new name because cybernetics already existed, and he didn't like it; no one really liked Norbert Wiener. Norbert Wiener was kind of an island unto himself, and he felt that he had encompassed all of this, and in some sense he did. If you look at the language of cybernetics, it was everything we're talking about: it was control theory and signal processing and some notions of intelligence and closed feedback loops and data. It was all there. It's just not a word that lived on, partly because of, maybe, the personalities. But McCarthy needed a new word to say: I'm different from you, I'm not part of your show, I've got my own. He invented this word, and, thinking forward about the movies that would be made about it, it was a great choice. But thinking forward about creating a sober academic and real-world discipline, it was a terrible choice, because it led to promises that are not true: that we understand. We understand artificial, perhaps, but we don't understand intelligence.

00:18:58.880
It's a small tangent, because you're one of the great personalities of machine learning, whatever the heck you call the field. Do you think science progresses by personalities, or by the fundamental principles and theories and research that's outside of personalities?

00:19:15.080
Both, and I wouldn't say there should be one kind of personality. I have mine, and I have my preferences, and I have a kind of network around me that feeds me, and some of them agree with me and some of them disagree; but all kinds of personalities are needed. Right now, I think the personality that's a little too exuberant, a little bit too ready to promise the moon, is a little bit too much in the ascendance. And I do think that there's some good to that. It certainly attracts lots of young people to our field, but a lot of those people come in with strong misconceptions, and they have to then unlearn those and then find something to do. And so I think there's just got to be multiple voices, and I wasn't hearing enough of the more sober voice.

00:19:54.840
So, as a continuation of a fun tangent, and speaking of vibrant personalities: what would you say is the most interesting disagreement you have with Yann LeCun?

00:20:07.400
So Yann's an old friend, and I would just say that I don't think we disagree about very much, really. He and I both kind of have a let's-build-it mentality, a does-it-work mentality, kind of concrete. We both speak French, and we speak French more together, and we have a lot in common. And so if one wanted to highlight a disagreement, it's not really a fundamental one; I think it's just kind of what we're emphasizing. Yann has emphasized pattern recognition, and has emphasized prediction. And it's interesting to try to take that as far as you can: if you could do perfect prediction, what would that give you, kind of as a thought experiment? And I think that's way too limited. We cannot do perfect prediction. We will never have the datasets that allow me to figure out what you're about ready to do, what question you're going to ask next; I have no clue. I will never know such things. Moreover, most of us find ourselves during the day in all kinds of situations we had no anticipation of, that are very, very novel in various ways. And in that moment, we want to think through what we want. And also there are going to be market forces acting on us. I'd like to go down that street, but now it's full because there's a crane in the street; I've got to think about that. I've got to think about what I might really want here, and I've got to sort of think about how much it costs me to do this action versus that action, and I've got to think about the risks involved. A lot of our current pattern recognition and prediction systems don't do any risk evaluation. They have no error bars, right?

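(For a concrete picture of the kind of error bar being described here, the sketch below uses a bootstrap ensemble over synthetic risk scores; the data, the 0.7 threshold, and the decision rule are all hypothetical illustrations, not anything proposed in the conversation.)

```python
# A minimal sketch, with synthetic data, of attaching an error bar to a
# prediction via the bootstrap: refit a trivial "model" on resampled data
# and report the spread of its outputs, so a decision rule can act on the
# interval rather than trust a single point score like "0.7".
import random
import statistics

def fit_mean(sample):
    """Stand-in for a model fit: the mean risk score of a bootstrap sample."""
    return sum(sample) / len(sample)

random.seed(0)
observed_risk_scores = [0.62, 0.71, 0.55, 0.68, 0.74, 0.59, 0.66, 0.70]  # hypothetical

ensemble = []
for _ in range(1000):
    resample = [random.choice(observed_risk_scores) for _ in observed_risk_scores]
    ensemble.append(fit_mean(resample))

point = statistics.mean(ensemble)
spread = statistics.stdev(ensemble)
print(f"risk ~ {point:.2f} +/- {spread:.2f}")

# Hedged decision: intervene only if even the pessimistic end of the
# interval clears the (hypothetical) threshold; otherwise gather more data.
THRESHOLD = 0.7
print("intervene" if point - 2 * spread > THRESHOLD else "gather more data")
```
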
00:21:39.000
I've got to think about other people's decisions around me, and I've got to think about a collection of my decisions. Even just thinking about a medical treatment: I'm not going to take the prediction of a neural net about my health, about something consequential. I'm not about to accept that I'm ready to have a heart attack because some number is over 0.7. Even if you had all the data in the world that's ever been collected about heart attacks, better than any doctor ever had, I'm not going to trust the output of that neural net to predict my heart attack. I'm going to want to ask what-if questions around that. I'm going to want to look at other possible data I didn't have, causal things. I'm going to want to have a dialogue with a doctor about things we didn't think about when the data was gathered. I could go on and on; I hope you can see it. If you say prediction is everything, you're missing all of this stuff. Prediction plus decision making is everything, and both of them are equally important. The field has emphasized prediction, and Yann, rightly so, has seen how powerful that is. But at the cost of people not being aware that decision making is where the rubber really hits the road: where human lives are at stake, where risks are being taken, where you've got to gather more data, you've got to think about the error bars, you've got to think about the consequences of your decisions on others, you've got to think about the economy around your decisions, and so on. I'm not the only one working on those things, but we're a smaller tribe, and right now we're not the one that people talk about the most. But if you go out into the real world, into industry, at Amazon I'd say half the people there are working on decision making and the other half are doing the pattern recognition. It's important.

00:23:05.720
And in the words "pattern recognition" and "prediction," I think the distinction there, not to linger on words, is that pattern recognition is more constrained, sort of an in-the-lab, dataset kind of thing, whereas decision making is about consequential decisions in the real world, under the messiness and the uncertainty of the real world, the whole mess of it that actually touches human beings at scale, and the forces around it. That's the distinction. It helps to add that broader perspective.

00:23:33.840
You're right; I totally agree. On the other hand, if you're a real prediction person, of course you want it to be in the real world; you want to predict real-world events. I'm just saying that's not possible with just datasets. It has to be in the context of the strategic things that someone's doing, data they might gather, things they could have gathered, the reasoning process around data. It's not just taking data and making predictions based on the data.

00:23:53.580
So one of the things that you're working on, and I'm sure others are working on it too, but I don't often hear it talked about, especially with the clarity that you talk about it, and I think it's both the most exciting and the most concerning area of AI in terms of decision making: you've talked about AI systems that help make decisions that scale in a distributed way, millions, billions of decisions, sort of markets of decisions. Can you, as a starting point, give an example of a system that you think about when you're thinking about these kinds of systems?

00:24:27.240
Yeah. So, first of all, you're absolutely getting into some territory which will be beyond my expertise, and there are lots of things that are going to be very non-obvious to think about. Again, I like to think about history a little bit: put yourself back in the sixties. There was kind of a banking system that wasn't really computerized. There was database theory emerging, and database people had to think about how to actually move around not just data but actual money, and have it be valid, and have transactions at ATMs happen that are actually all valid, and so on and so forth. So those are the kinds of issues you get into when you start to get serious about these sorts of things. I like to think about, as kind of a thought experiment that helps me think about something simpler, the music market. Because, to first order, there is no music market in the world right now; in our country, for sure. There are things called record companies, and they make money, and they prop up a few really good musicians and make them superstars, and those all make huge amounts of money. But there's a long tail of huge numbers of people who make lots and lots of really good music that is actually listened to by more people than the famous people's music. They are not in a market. They cannot have a career. They do not make money.

00:25:43.880
The creators, the so-called influencers or whatever; that word diminishes who they are. So there are people who make extremely good music, especially in the hip-hop or Latin world these days. They do it on their laptop; that's what they do on the weekend, and they have another job during the week, and they put it up on SoundCloud or other sites. Eventually it gets streamed; it gets turned into bits; it's not economically valuable; the information is lost. It gets put up there, people stream it. You walk around in a big city, you see people with headphones, especially young kids, listening to music all the time. If you look at the data, very little of the music they are listening to is the famous people's music, and none of it is old music; it's all the latest stuff. But the people who made that latest stuff are like some 16-year-old somewhere who will never make a career out of this, who will never make money. Of course there will be a few counterexamples; the record companies are incentivized to pick out a few and highlight them. Long story short, there's a missing market there. There is not a consumer-producer relationship at the level of the actual creative acts. The pipelines, the Spotifys of the world that take this stuff and stream it along, make money off of subscriptions or advertising and those things. They're making the money. And then they will offer bits and pieces of it to a few people, again, to highlight them; they simulate a market. Anyway, a real market would be: if you're a creator of music, and you actually are somebody who's good enough that people want to listen to you, you should have the data available to you. There should be a dashboard showing a map of the United States: here are all the places your songs were listened to last week. It should be transparent and vettable, so that if someone down in Providence sees that you're being listened to 10,000 times in Providence, they know that's real data; you know it's real data. They will have you come give a show down there; they will broadcast to the people who've been listening to you that you're coming. If you do this right, you could go down there and make $20,000. You do that three times a year, you start to have a career.

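(A toy sketch of that dashboard idea, with made-up stream events, city names, and thresholds; the show-viability cutoff, the $20,000 figure, and the 5% platform cut echo the numbers in the conversation but are purely illustrative.)

```python
# Aggregate an artist's stream events by city and flag cities with enough
# listeners to make a live show plausible. All data here is hypothetical.
from collections import Counter

stream_events = [
    {"city": "Providence", "listener": "u1"},
    {"city": "Providence", "listener": "u2"},
    {"city": "Providence", "listener": "u4"},
    {"city": "Austin", "listener": "u3"},
    {"city": "Austin", "listener": "u5"},
    {"city": "Boston", "listener": "u6"},
]

plays_by_city = Counter(event["city"] for event in stream_events)
SHOW_THRESHOLD = 3  # hypothetical: weekly plays that justify booking a venue

for city, plays in plays_by_city.most_common():
    note = "book a show?" if plays >= SHOW_THRESHOLD else ""
    print(f"{city:12s} {plays:4d}  {note}")

# If a promoter books a $20,000 show off this signal, the platform sustains
# itself on a small transaction cut rather than on advertising.
gross, platform_cut = 20_000, 0.05
print(f"artist: ${gross * (1 - platform_cut):,.0f}, platform: ${gross * platform_cut:,.0f}")
```
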
00:27:37.100
So in this sense, AI creates jobs. It's not about taking away human jobs; it's creating new jobs, because it creates a new market. Once you've created a market, you've now connected up producers and consumers. The person who's making the music can say to someone who comes to their shows a lot: hey, I'll play at your daughter's wedding for $10,000. You'll say $8,000; they'll say $9,000. And again, you can now get an income up to $100,000. You're not going to be a millionaire. And now even think about it: really, the value of music is in these personal connections, even so much so that a young kid wants to wear a t-shirt with their favorite musician's signature on it. So if they listen to the music on the internet, the internet should be able to provide them with a button that they push, and the merchandise arrives the next day. We can do that. And now, why should we do that? Well, because the kid who bought the shirt will be happy, but moreover, the person who made the music will get the money. There's no advertising needed. So you can create markets between producers and consumers, take a 5% cut, and your company will be perfectly sound. It'll go forward into the future, and it will create new markets, and that raises human happiness. Now, this seems like, well, this is easy: just create this dashboard, kind of create some connections, and all that. But if you think about Uber or whatever, you think about the challenges in the real world of doing things like this, and there are actually new principles going to be needed. You're trying to create a new kind of two-way market, at a different scale than has ever been done before. There are going to be unwanted aspects of the market; there will be bad people; the data will get used in the wrong ways; it will fail in some ways; it won't deliver on everything. You have to think that through, just like anyone who ran a big auction, or ran a big matching service in economics, would think these things through. And so that maybe doesn't get at all the huge issues that can arise when you start to create markets, but it starts to, at least for me, solidify my thoughts and allow me to move forward in my own thinking.

00:29:28.080
Yeah. So I talked to the head of research at Spotify, actually, and I think their long-term goal, they've said, is to have at least one million creators make a comfortable living putting their work on Spotify. So I think you articulate a really nice vision of the world, of the digital, the cyberspace of markets. What do you think companies like Spotify or YouTube or Netflix can do to create such markets? Is it an AI problem? Is it an interface problem, interface design? Is it an economics problem? Who should they hire to solve these problems?

00:30:15.600
Well, part of it is that it's not just top-down. Silicon Valley has this attitude that they know how to do it: they will create the system, just like Google did with the search box, and it will be so good that everyone will just adopt it. It's everything you said, but really, I think, what's missing there is the culture. It's literally that 16-year-old who's able to create the songs. You don't create that as a Silicon Valley entity. You don't hire them, per se. You have to create an ecosystem in which they are wanted and in which they belong. And so you have to have some cultural credibility to do things like this. Netflix, to their credit, wanted some of that credibility, and they created shows; content, they call it content, it's such a terrible word, but it's culture. And so with movies, you can kind of go give a large sum of money to somebody graduating from the USC film school. That's a whole thing of its own, but it's kind of a rich white people's thing to do, and American culture has not been so much about rich white people. It's been about all the immigrants, all the Africans who came and brought that culture and those rhythms to this world and created this whole new thing, American culture. And so companies can't artificially create that. They can't just say: hey, we're here, we're going to buy it up. You've got to partner. And so, anyway, not to denigrate: these companies are all trying, and they should, and I'm sure they're asking these questions, and some of them are even making an effort. But it is partly about respecting the culture. As a technology person, you've got to blend your technology with cultural meaning.

00:31:49.880
How much of a role do you think the algorithm, machine learning, has in connecting the consumer to the creator, sort of the recommender system aspect of this?

00:31:59.600
Yeah, it's a great question. I think pretty high. There's no magic in the algorithms, but a good recommender system is way better than a bad recommender system. And recommender systems were a billion-dollar industry even 10, 20 years ago, and they continue to be extremely important going forward.

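(For a concrete picture of what such a system computes, here is a minimal item-based collaborative-filtering sketch in the spirit of "people who bought this also bought that"; the purchase matrix and book names are invented for illustration.)

```python
# Item-item collaborative filtering over a tiny hypothetical purchase matrix:
# score each unseen book by its cosine similarity, via shared buyers, to the
# books a user already owns, then recommend the top-scoring ones.
import math

purchases = {
    "alice": {"stats101", "prob_theory", "ml_basics"},
    "bob":   {"stats101", "prob_theory"},
    "carol": {"ml_basics", "deep_learning"},
    "dave":  {"stats101", "ml_basics", "deep_learning"},
}

def buyers(book):
    return {user for user, items in purchases.items() if book in items}

def cosine(a, b):
    """Similarity between two books based on how much their buyer sets overlap."""
    shared = len(buyers(a) & buyers(b))
    return shared / math.sqrt(len(buyers(a)) * len(buyers(b)))

def recommend(user, k=2):
    owned = purchases[user]
    candidates = {b for items in purchases.values() for b in items} - owned
    scores = {c: sum(cosine(c, o) for o in owned) for c in candidates}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("bob"))  # books bob's peers bought that he hasn't
```
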
00:32:17.540
What's your favorite recommender system, just so we can put something concrete on it?

00:32:20.680
Well, just historically, when I first went to Amazon: I didn't like Amazon at first, because they put the book people out of business; the local booksellers went out of business. I've come to accept that there are probably more books being sold now, and more people reading them, than ever before, and local bookstores are coming back. So that's how economics sometimes works: you go up and you go down. But anyway, when I finally started going there and I bought a few books, I was really pleased to see another few books being recommended to me that I never would have thought of, and I bought a bunch of them. So they obviously had a good business model, but I learned things, and I still to this day kind of browse using that service. And I think lots of people get a lot out of it; that is a good aspect of a recommendation system. I'm learning from my peers in an indirect way. And those algorithms are not meant to impose what we learn; they really are trying to find out what's in the data. It doesn't work so well for other kinds of entities, but that's just the complexity of human life. Like shirts: I'm not going to get recommendations on shirts. And that's interesting: if you try to recommend restaurants, it's hard. It's hard to do it at scale. But a blend of recommender systems with other economic ideas, matchings and so on, is really, really still very open, research-wise, and there are new companies that are going to emerge that do that well.

00:33:48.680
What do you think about going into the messy, difficult land of, say, politics and things like that, which YouTube and Twitter have to deal with in terms of recommendation systems? Being able to suggest, I think Facebook just launched Facebook News, the kind of news that is most likely to be interesting to you. Do you think this is AI-solvable, again, whatever term we want to use? Do you think it's a solvable problem for machines, or is it a deeply human problem that's unsolvable?

00:34:18.760
So I don't even think about it at that level. I think that what's broken with some of these companies is that it's all monetization by advertising. At least Facebook, and I don't want to single them out to critique, but they didn't really try to connect a producer and a consumer in an economic way, right? No one wants to pay for anything. And so, starting with Google and Facebook, they went back to the playbook of the television companies back in the day: no one wanted to pay for the signal. They would pay for the TV box, but not for the signal, at least back in the day. And so advertising kind of filled that gap, and advertising was new and interesting, and it somehow didn't quite take over our lives, right? Fast forward: Google provides a service that people don't want to pay for, and so, somewhat surprisingly, in the nineties they ended up making huge amounts of money; they cornered the advertising market. It didn't seem like that was going to happen, at least to me. Those little things on the right-hand side of the screen just did not seem all that economically interesting, but the companies had maybe no other choice: the TV market was going away, and billboards and so on. So they got it. And I think that, sadly, Google was doing so well with that, making so much money, that they didn't think much more about: wait a minute, is there a producer-consumer relationship to be set up here, not just a market between us and the advertisers? Is there an actual market between the producer and the consumer: the producer being the person who created that video clip, the person who made that website, the person who could make more such things, the person who could adjust it as a function of demand; and on the other side, the person who's asking for different kinds of things? You see glimmers of that now: there are influencers, and there's kind of a little glimmering of a market. But it should have been done 20 years ago. It should have been thought about; it should have been created in parallel with the advertising ecosystem. And then Facebook inherited that, and I think they also didn't think very much about it. So, fast forward, and now they are making huge amounts of money off of advertising, and the news thing and all these clicks are just feeding the advertising; it's all connected up to the advertiser. You want more people to click on certain things, because that money flows to you, Facebook. You're very much incentivized to do that, and when you start to find it's breaking, and people are telling you, well, we're getting into some troubles, you try to adjust it with your smart AI algorithms, right, and figure out what are bad clicks: maybe it shouldn't be click-through rate, it should be something else. I find that pretty much hopeless. It does get into all the complexity of human life, and you can try to fix it, and you should. But you could also fix the whole business model, and the business model is really: are there some human producers and consumers out there? Is there some economic value to be liberated by connecting them directly? Is it such that it's so valuable that people will be able to pay for it?

00:36:52.640
All right. And micropayments, like small payments?

00:36:54.640
Micro, but it doesn't even have to be micro. So I like this example: suppose next week I'm going to India. I've never been to India before, and I have a couple of days in Mumbai; I have no idea what to do there. I could go on the web right now and search, and it's going to be kind of hopeless; I'm not going to find much, and I'll have lots of advertisers in my face. What I really want to do is broadcast to the world that I am going to Mumbai and have someone on the other side of a market look at me. And there's a recommendation system there, so I'm not looking at all possible people coming to Mumbai; they're looking at the people who are relevant to them. So someone in my age group, someone who kind of knows me at some level. I give up a little privacy by that, but I'm happy, because what I'm going to get back is that this person can make a little video for me, or they're going to write a little two-page paper on: here are the cool things that you want to do in Mumbai this week, especially. I'm going to look at that, and I'm not going to pay a micropayment for it; I'm going to pay, you know, a hundred dollars or whatever. It's real value; it's like journalism. And instead of a subscription, I'm going to pay that person in that moment. The company is going to take 5% of that. And that person has now got it.

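(A toy sketch of that broadcast-and-match flow; the names, interests, crude relevance score, and $100 price are all invented, and the 5% cut echoes the figure in the conversation.)

```python
# A traveler broadcasts a request; locals are ranked by interest overlap and
# age proximity (the "recommendation system in the middle"); payment is a
# direct transaction with a small platform cut, not an ad impression.
traveler = {"name": "lex", "age": 36, "interests": {"music", "food", "history"}}

locals_in_mumbai = [
    {"name": "asha", "age": 34, "interests": {"food", "history", "art"}},
    {"name": "ravi", "age": 22, "interests": {"nightlife", "music"}},
    {"name": "meera", "age": 40, "interests": {"history", "music", "food"}},
]

def match_score(t, local):
    """Interest overlap, lightly discounted by age gap (one crude signal)."""
    overlap = len(t["interests"] & local["interests"])
    return overlap - abs(t["age"] - local["age"]) / 100

best = max(locals_in_mumbai, key=lambda local: match_score(traveler, local))
print("matched with:", best["name"])

price, fee = 100.00, 0.05  # hypothetical pricing
print(f"guide receives ${price * (1 - fee):.2f}, platform ${price * fee:.2f}")
```
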
link |
00:37:57.760
It's a gig economy, if you will, but you know, done for it, you know, thinking about a little
link |
00:38:01.240
bit behind YouTube, there was actually people who could make more of those things.
link |
00:38:05.000
If they were connected to a market, they would make more of those things independently.
link |
00:38:07.960
You don't have to tell them what to do.
link |
00:38:08.960
You don't have to incentivize them any other way.
link |
00:38:11.680
Um, and so, yeah, these companies, I don't think have thought long and hard about that.
link |
00:38:15.760
So I do distinguish on Facebook on the one side, who just not thought about these things
link |
00:38:20.160
at all.
link |
00:38:21.160
I think, uh, thinking that AI will fix everything, uh, and Amazon thinks about them all the time
link |
00:38:25.200
because they were already out in the real world.
link |
00:38:26.520
They were delivering packages, people's doors.
link |
00:38:28.080
They were, they were worried about a market.
link |
00:38:29.400
They were worried about sellers and, you know, they worry and some things they do are great.
link |
00:38:32.600
Some things maybe not so great, but you know, they're in that business model.
link |
00:38:36.440
And then I'd say Google sort of hovers somewhere in between.
link |
00:38:38.360
I don't, I don't think for a long, long time they got it.
link |
00:38:41.440
I think they probably see that YouTube is more pregnant with possibility than, than,
link |
00:38:45.720
than they might've thought and that they're probably heading that direction.
link |
00:38:49.120
Um, but uh, you know, Silicon Valley has been dominated by the Google Facebook kind of mentality
link |
00:38:54.000
and the subscription and advertising and that is, that's the core problem, right?
link |
00:38:58.800
The fake news actually rides on top of that because it means that you're monetizing with
link |
00:39:03.640
click-through rate, and that is the core problem.
link |
00:39:05.600
You got to remove that.
link |
00:39:06.880
So advertisement, if we're going to linger on that, I mean, that's an interesting thesis.
link |
00:39:11.200
I don't know if everyone really deeply thinks about that.
link |
00:39:15.060
So you're right.
link |
00:39:16.720
The thought is the advertising model is the only thing we have, the only thing we'll ever
link |
00:39:20.960
have.
link |
00:39:21.960
We have to fix, we have to build algorithms that despite that business model, you know,
link |
00:39:30.240
find the better angels of our nature and do good by society and by the individual.
link |
00:39:34.680
But you think we can slowly, you think, first of all, there's a difference between should
link |
00:39:40.000
and could.
link |
00:39:42.040
So you're saying we should slowly move away from the advertising model and have a direct
link |
00:39:46.600
connection between the consumer and the creator.
link |
00:39:49.920
The question I also have is, can we, because the advertising model is so successful now
link |
00:39:55.240
in terms of just making a huge amount of money and therefore being able to build a big company
link |
00:40:00.400
that provides, has really smart people working that create a good service.
link |
00:40:03.920
Do you think it's possible?
link |
00:40:05.680
And just to clarify, you think we should move away?
link |
00:40:07.880
Well, I think we should.
link |
00:40:08.880
Yeah.
link |
00:40:09.880
But "we" is the, you know, me.
link |
00:40:10.880
So society.
link |
00:40:11.880
Yeah.
link |
00:40:12.880
Well, the companies, I mean, so first of all, full disclosure, I'm doing a day a week at
link |
00:40:16.360
Amazon because I kind of want to learn more about how they do things.
link |
00:40:18.840
So, you know, I'm not speaking for Amazon in any way, but, you know, I did go there
link |
00:40:22.760
because I actually believe they get a little bit of this or trying to create these markets.
link |
00:40:26.240
And they don't really use, advertising is not a crucial part of it.
link |
00:40:29.520
Well, that's a good question.
link |
00:40:30.520
So it has become, not crucial, but more and more present if you go to the Amazon
link |
00:40:34.840
website.
link |
00:40:35.840
And, you know, without revealing too many deep secrets about Amazon, I can tell you
link |
00:40:38.840
that, you know, a lot of people in the company question this and there's a huge questioning
link |
00:40:42.480
going on.
link |
00:40:43.620
You do not want a world where there's zero advertising.
link |
00:40:45.660
That actually is a bad world.
link |
00:40:47.160
Okay.
link |
00:40:48.160
So here's a way to think about it.
link |
00:40:49.280
You're a company that like Amazon is trying to bring products to customers, right?
link |
00:40:55.000
And the customer, at any given moment, you want to buy a vacuum cleaner, say, you want
link |
00:40:58.360
to know what's available for me.
link |
00:40:59.360
And, you know, it's not going to be that obvious.
link |
00:41:00.840
You have to do a little bit of work at it.
link |
00:41:02.160
The recommendation system will sort of help, right?
link |
00:41:04.600
But now suppose this other person over here has just made the world's best vacuum cleaner, you know, they spent
link |
00:41:08.080
a huge amount of energy.
link |
00:41:09.080
They had a great idea.
link |
00:41:10.080
They made a great vacuum cleaner.
link |
00:41:11.080
They know they really did it.
link |
00:41:12.400
They nailed it.
link |
00:41:13.400
It's an MIT, you know, whiz kid that made a great new vacuum cleaner, right?
link |
00:41:16.680
It's not going to be in the recommendation system.
link |
00:41:18.240
No one will know about it.
link |
00:41:19.280
The algorithms will not find it and AI will not fix that.
link |
00:41:22.440
Okay.
link |
00:41:23.440
At all.
link |
00:41:24.440
Right.
link |
00:41:25.440
How do you allow that vacuum cleaner to start to get in front of people, to be sold? Well, advertising.
link |
00:41:30.660
And here, what advertising is, it's a signal that you're, you believe in your product enough
link |
00:41:35.360
that you're willing to pay some real money for it.
link |
00:41:37.480
And to me as a consumer, I look at that signal.
link |
00:41:39.480
I say, well, first of all, I know these are not just the cheap little ads we have
link |
00:41:43.240
right now.
link |
00:41:44.240
I know that, you know, those are super cheap, you know, pennies.
link |
00:41:47.740
If I see an ad where, actually, I know the company is only doing a few of these and
link |
00:41:51.120
they're spending, you know, real money, real money is kind of flowing, and I see an ad, I may pay more
link |
00:41:54.520
attention to it.
link |
00:41:55.520
And I actually might want that because I see, Hey, that guy spent money on his vacuum cleaner.
link |
00:42:01.600
Maybe there's something good there.
link |
00:42:02.600
So I will look at it.
link |
00:42:03.600
And so that's part of the overall information flow in a good market.
link |
00:42:06.620
So advertising has a role, but the problem is of course that that signal is now completely
link |
00:42:11.720
gone, because it's just, you know, dominated by these tiny little things that add up to big
link |
00:42:15.800
money for the company, you know?
link |
00:42:17.740
So I think it will just, I think it will change, because societies just don't, you know,
link |
00:42:22.600
stick with things that annoy a lot of people and advertising currently annoys people more
link |
00:42:26.480
than it provides information.
link |
00:42:28.480
And I think that Google probably is smart enough to figure out that this is a dead end,
link |
00:42:32.200
this is a bad model, even though it's a huge amount of money, and they'll have to figure
link |
00:42:35.760
out how to pull away from it slowly.
link |
00:42:38.080
And I'm sure the CEO there will figure it out, but they need to do it.
link |
00:42:42.280
And they need to do it too. So if you reduce advertising, not to zero, but you reduce it, at the same
link |
00:42:47.120
time you bring up producer, consumer, actual real value being delivered.
link |
00:42:51.640
So real money is being paid and they take a 5% cut, and that 5% could start to get big enough
link |
00:42:56.260
to cancel out the lost revenue from the poor kind of advertising.
link |
00:43:00.080
And I think that a good company will do that, will realize that.
link |
00:43:04.740
And Facebook, you know, again, God bless them.
link |
00:43:08.440
They, you know, bring children's pictures into grandmothers' lives.
link |
00:43:14.680
It's fantastic.
link |
00:43:17.340
But they need to think of a new business model and that's the core problem there.
link |
00:43:22.440
Until they start to connect producer consumer, I think they will just continue to make money
link |
00:43:26.440
and then buy the next social network company and then buy the next one and the innovation
link |
00:43:30.560
level will not be high and the health issues will not go away.
link |
00:43:34.880
So I apologize that we kind of returned to words, I don't think the exact terms matter,
link |
00:43:41.120
but in sort of defense of advertisement, don't you think the kind of direct connection between
link |
00:43:49.440
consumer and creator producer is what advertisement strives to do, right?
link |
00:44:00.960
So the best advertisement is, literally, now Facebook is listening to our conversation
link |
00:44:06.680
and heard that you're going to India, and will be able to actually start automatically
link |
00:44:11.400
making these connections for you and start giving you this offer.
link |
00:44:14.500
So like, I apologize if it's just a matter of terms, but just to draw a distinction,
link |
00:44:19.800
is it possible to make advertisements just better and better and better algorithmically
link |
00:44:23.000
to where it actually becomes a connection, almost a direct connection?
link |
00:44:26.040
That's a good question.
link |
00:44:27.040
So let me comment on that.
link |
00:44:28.040
First of all, what we just talked about, I was defending advertising.
link |
00:44:32.000
Okay.
link |
00:44:33.000
So I was defending it as a way to get signals into a market that don't come any other way,
link |
00:44:36.400
especially algorithmically.
link |
00:44:37.720
It's a sign that someone spent money on it, it's a sign they think it's valuable.
link |
00:44:41.640
And I think that if someone else thinks it's valuable, and if I trust
link |
00:44:45.020
other people, I might be willing to listen.
link |
00:44:47.360
I don't trust Facebook, though, as the intermediary in between.
link |
00:44:51.840
I don't think they care about me.
link |
00:44:54.600
Okay.
link |
00:44:55.600
I don't think they do.
link |
00:44:56.720
And I find it creepy that they know I'm going to India next week because of our conversation.
link |
00:45:00.880
Why do you think that is?
link |
00:45:02.360
So what, could you just put your PR hat on?
link |
00:45:07.120
Why do you think you find Facebook creepy and not trust them as do majority of the population?
link |
00:45:14.180
So, out of the Silicon Valley companies, I saw, like, not approval rating, but there's a
link |
00:45:19.360
ranking of how much people trust companies, and Facebook is in the gutter.
link |
00:45:23.080
In the gutter, including people inside of Facebook.
link |
00:45:25.600
So what do you attribute that to?
link |
00:45:28.000
Because when I...
link |
00:45:29.000
Come on, you don't find it creepy that, right now as we're talking, I might walk out on
link |
00:45:31.840
the street right now and some unknown person who I don't know kind of comes up to me and
link |
00:45:35.880
says, I hear you're going to India.
link |
00:45:37.500
I mean, that's not even Facebook.
link |
00:45:38.900
That's just, I want transparency in human society.
link |
00:45:42.560
I want to have, if you know something about me, there's actually some reason you know
link |
00:45:45.680
something about me.
link |
00:45:47.080
It's something that, if I look at it later and kind of audit it, I approve of.
link |
00:45:51.560
You know something about me because you care in some way.
link |
00:45:54.580
There's a caring relationship even, or an economic one or something.
link |
00:45:58.240
Not just that you're someone who could exploit it in ways I don't know about or care about
link |
00:46:02.000
or I'm troubled by or whatever.
link |
00:46:05.240
We're in a world right now where that happens way too much and that Facebook knows things
link |
00:46:09.880
about a lot of people and could exploit it and does exploit it at times.
link |
00:46:14.720
I think most people do find that creepy.
link |
00:46:16.880
It's not for them.
link |
00:46:17.880
Facebook is not doing it because they care about them in any real sense.
link |
00:46:23.440
And they shouldn't.
link |
00:46:24.440
They should not be a big brother caring about us.
link |
00:46:26.740
That is not the role of a company like that.
link |
00:46:28.560
Why not?
link |
00:46:29.560
Wait, not the big brother part, but the caring, the trusting.
link |
00:46:32.160
I mean, don't those companies, just to linger on it, because a lot of companies have a lot
link |
00:46:37.120
of information about us.
link |
00:46:38.320
I would argue that there's companies like Microsoft that has more information about
link |
00:46:42.560
us than Facebook does and yet we trust Microsoft more.
link |
00:46:46.000
Well, Microsoft is pivoting.
link |
00:46:47.480
Microsoft, you know, under Satya Nadella has decided this is really important.
link |
00:46:51.360
We don't want to do creepy things.
link |
00:46:53.320
Really want people to trust us to actually only use information in ways that they really
link |
00:46:56.720
would approve of, that we don't decide, right?
link |
00:47:00.360
And I'm just kind of adding that the health of a market is that when I connect to someone
link |
00:47:06.640
who produces a consumer, it's not just a random producer or consumer, it's people who see
link |
00:47:10.160
each other.
link |
00:47:11.160
They don't know each other, but they sense that if they transact, some happiness will
link |
00:47:14.360
go up on both sides.
link |
00:47:15.940
If a company helps me to do that in moments of my choosing, then fine.
link |
00:47:22.800
So, and also think about the difference between, you know, browsing versus buying, right?
link |
00:47:28.560
There are moments in my life I just want to buy, you know, a gadget or something.
link |
00:47:31.760
I need something for that moment.
link |
00:47:33.080
I need some ammonia for my house or something because I got a problem with a spill.
link |
00:47:37.400
I want to just go in.
link |
00:47:38.400
I don't want to be advertised at that moment.
link |
00:47:40.080
I don't want to be led down various, you know, that's annoying.
link |
00:47:43.040
I want to just go and have it be extremely easy to do what I want.
link |
00:47:49.020
Other moments I might say, no, it's like today I'm going to the shopping mall.
link |
00:47:52.440
I want to walk around and see things and see people and be exposed to stuff.
link |
00:47:55.560
So I want control over that though.
link |
00:47:56.800
I don't want the company's algorithms to decide for me, right?
link |
00:48:00.200
I think that's the thing.
link |
00:48:01.200
There's a total loss of control if Facebook thinks they should take the control from us
link |
00:48:04.880
of deciding when we want to have certain kinds of information, when we don't, what information
link |
00:48:08.200
that is, how much it relates to what they know about us that we didn't really want them
link |
00:48:11.880
to know about us.
link |
00:48:13.680
I don't want them to be helping me in that way.
link |
00:48:15.840
I don't want them to be helping them by they decide they have control over what I want
link |
00:48:21.640
and when.
link |
00:48:22.640
I totally agree.
link |
00:48:23.640
Facebook, by the way, I have this optimistic thing where I think Facebook has the kind
link |
00:48:28.560
of personal information about us that could create a beautiful thing.
link |
00:48:32.480
So I'm really optimistic of what Facebook could do.
link |
00:48:36.200
It's not what it's doing, but what it could do.
link |
00:48:38.680
So I don't see that.
link |
00:48:39.840
I think that optimism is misplaced because there's not a bit, you have to have a business
link |
00:48:43.400
model behind these things.
link |
00:48:44.400
"Create a beautiful thing" is really, let's be, let's be clear.
link |
00:48:48.480
It's about something that people would value.
link |
00:48:51.400
And I don't think they have that business model and I don't think they will suddenly
link |
00:48:55.080
discover it by what, you know, a long hot shower.
link |
00:48:58.920
I disagree.
link |
00:48:59.920
I disagree in terms of, you can discover a lot of amazing things in a shower.
link |
00:49:04.840
So I didn't say that.
link |
00:49:05.840
I said they won't, they won't do it. But in the shower, I think a lot of other
link |
00:49:10.240
people will discover it.
link |
00:49:11.300
I think that this guy, so I should also, full disclosure, there's a company called United
link |
00:49:15.240
Masters, whose board I'm on, and they've created this music market, and they have a hundred
link |
00:49:18.760
thousand artists now signed on and they've done things like gone to the NBA and the NBA,
link |
00:49:23.220
the music you find behind NBA clips right now is their music, right?
link |
00:49:26.960
That's a company that had the right business model in mind from the get go, right?
link |
00:49:31.920
Executed on that.
link |
00:49:32.920
And from day one, there was value brought to, so here you have a kid who made some songs
link |
00:49:37.220
who suddenly their songs are on the NBA website, right?
link |
00:49:41.260
That's real economic value to people.
link |
00:49:43.440
And so, you know, so you and I differ on the optimism of being able to sort of change the
link |
00:49:51.800
direction of the Titanic, right?
link |
00:49:54.440
So I, yeah, I'm older than you, so I've seen some Titanics crash. Got it.
link |
00:50:01.120
But and just to elaborate, cause I totally agree with you and I just want to know how
link |
00:50:05.560
difficult you think this problem is of, so for example, I want to read some news and
link |
00:50:11.880
I would, there's a lot of times in the day where something makes me either smile or think
link |
00:50:16.940
in a way where I like consciously think this really gave me value.
link |
00:50:20.800
Like, I sometimes listen to The Daily podcast from The New York Times, way better than the
link |
00:50:26.480
New York Times themselves, by the way, for people listening.
link |
00:50:29.320
That's like real journalism is happening for some reason in the podcast space.
link |
00:50:32.560
It doesn't make sense to me, but often I listen to it 20 minutes and I would be willing to
link |
00:50:37.600
pay for that, like $5, $10 for that experience.
link |
00:50:41.860
And how difficult, that's kind of what you're getting at is that little transaction.
link |
00:50:48.200
How difficult is it to create a frictionless system like Uber has, for example, for other
link |
00:50:52.640
things?
link |
00:50:53.640
What's your intuition there?
link |
00:50:55.280
So I, first of all, I pay little bits of money to, you know, to sites. There's something
link |
00:50:58.500
called Quartz that does financial things.
link |
00:51:00.300
I like Medium as a site. I don't pay there, but I would.
link |
00:51:04.480
You had a great post on Medium.
link |
00:51:06.280
I would have loved to pay you a dollar and not others.
link |
00:51:10.280
I wouldn't have wanted it per se because there should be also sites where that's not actually
link |
00:51:15.560
the goal.
link |
00:51:16.560
The goal is to actually have a broadcast channel that I monetize in some other way if I chose
link |
00:51:20.240
to.
link |
00:51:21.240
I mean, I could now people know about it.
link |
00:51:23.080
I could, I'm not doing it, but that's fine with me.
link |
00:51:26.360
Also the musicians who are making all this music, I don't think the right model is that
link |
00:51:29.840
you pay a little subscription fee to them, right?
link |
00:51:32.880
Because people can copy the bits too easily, and that's just not where the value
link |
00:51:35.860
is.
link |
00:51:36.860
The value is that a connection was made between real human beings, then you can follow up
link |
00:51:39.800
on that.
link |
00:51:40.800
All right.
link |
00:51:41.800
And create yet more value.
link |
00:51:42.960
So no, I think there's a lot of open questions here, hot open questions, but also, yeah,
link |
00:51:47.920
I do want good recommendation systems that recommend cool stuff to me.
link |
00:51:51.360
But it's pretty hard, right?
link |
00:51:52.360
I don't like them to recommend stuff just based on my browsing history.
link |
00:51:55.880
I don't like them based on stuff they, quote unquote, know about me.
link |
00:51:59.000
What's unknown about me is the most interesting.
link |
00:52:00.860
So this is the, this is the really interesting question.
link |
00:52:03.640
We may disagree, maybe not.
link |
00:52:05.860
I think that I love recommender systems and I want to give them everything about me in
link |
00:52:12.160
a way that I trust.
link |
00:52:13.160
Yeah.
link |
00:52:14.160
But you, but you don't, because, so for example, this morning I clicked on a, you know, I was
link |
00:52:17.880
pretty sleepy this morning.
link |
00:52:19.960
I clicked on a story about the queen of England.
link |
00:52:23.280
Yes.
link |
00:52:24.280
Right.
link |
00:52:25.280
I do not give a damn about the queen of England.
link |
00:52:26.440
I really do not.
link |
00:52:27.560
But it was clickbait.
link |
00:52:28.560
It kind of looked funny and I had to say, what the heck are they talking about?
link |
00:52:31.520
I don't want to have my life, you know, heading that direction.
link |
00:52:34.040
Now that's in my browsing history.
link |
00:52:36.180
The system, any reasonable system, will think that I care about the Queen of England.
link |
00:52:39.880
That's browsing history.
link |
00:52:40.880
Right.
link |
00:52:41.880
But, but you're saying all the trace, all the digital exhaust or whatever, that's been
link |
00:52:44.640
kind of the model.
link |
00:52:45.640
If you collect all this stuff, you're going to figure all of us out.
link |
00:52:48.560
Well, if you're trying to figure out like kind of one person like Trump or something,
link |
00:52:51.280
maybe you could figure him out.
link |
00:52:52.280
But if you're trying to figure out, you know, 500 million people, you know, no way, no way.
link |
00:52:58.040
You think so?
link |
00:52:59.040
No, I do.
link |
00:53:00.040
I think so.
link |
00:53:01.040
I think we are, humans are just amazingly rich and complicated.
link |
00:53:02.560
Every one of us has our little quirks, every one of us has our little things that could
link |
00:53:05.220
intrigue us, that we don't even know will intrigue us.
link |
00:53:08.020
And there's no sign of it in our past, but by God, there it comes and you know, you fall
link |
00:53:12.240
in love with it.
link |
00:53:13.240
And I don't want a company trying to figure that out for me and anticipating that. I want
link |
00:53:16.520
them to provide a forum, a market, a place that I kind of go and by hook or by crook,
link |
00:53:22.160
this happens, you know, I'm walking down the street and I hear some Chilean music being
link |
00:53:26.120
played and I never knew I liked Chilean music, but wow.
link |
00:53:28.580
So there is that side and I want them to provide a limited, but you know, interesting place
link |
00:53:33.680
to go.
link |
00:53:34.680
Right.
link |
00:53:35.680
And so don't try to use your AI to kind of, you know, figure me out and then put me in
link |
00:53:39.740
a world where you figured me out, you know, no, create huge spaces for human beings where
link |
00:53:45.140
our creativity and our style will be enriched and come forward, and there will be a lot more
link |
00:53:50.360
transparency.
link |
00:53:51.360
I won't have people randomly, anonymously putting comments up, especially based
link |
00:53:55.400
on stuff they know about me. In fact, you know, we are so broken right now.
link |
00:54:00.080
If you're, you know, especially if you're a celebrity, but, you know, it's really anybody:
link |
00:54:02.920
anonymous people are hurting lots and lots of people right now.
link |
00:54:06.720
That's part of this thing that Silicon Valley is thinking that, you know, just collect all
link |
00:54:10.200
this information and use it in a great way.
link |
00:54:12.480
So no, I'm not, I'm not a pessimist, I'm very much an optimist by nature, but I think that's
link |
00:54:16.420
just been the wrong path for the whole technology to take.
link |
00:54:19.920
Be more limited, create, let humans rise up.
link |
00:54:24.040
Don't try to replace them.
link |
00:54:25.740
That's the AI mantra.
link |
00:54:26.760
Don't try to anticipate them.
link |
00:54:28.660
Don't try to predict them, because you're not going
link |
00:54:32.320
to be able to do those things.
link |
00:54:33.320
You're going to make things worse.
link |
00:54:34.320
Okay.
link |
00:54:35.320
So right now, just give this a chance.
link |
00:54:38.760
Right now, the recommender systems are the creepy people in the shadow watching your
link |
00:54:43.840
every move.
link |
00:54:45.500
So they're looking at traces of you.
link |
00:54:47.800
They're not directly interacting with you. Sort of, your close friends and family,
link |
00:54:53.000
the way they know you is by having conversation, by actually having interactions back and forth.
link |
00:54:57.120
Do you think there's a place for recommender systems sort of to step in, because you just
link |
00:55:02.360
emphasized the value of human-to-human connection, but, yeah, just give it a chance, AI-human
link |
00:55:06.740
connection.
link |
00:55:07.840
Is there a role for an AI system to have conversations with you in terms of, to try to figure out
link |
00:55:13.560
what kind of music you like, not by just watching what you're listening to, but actually having
link |
00:55:17.360
a conversation, natural language or otherwise.
link |
00:55:19.560
Yeah, no, I'm, I'm, so I'm not against it.
link |
00:55:21.760
I just wanted to push back against the, maybe you're saying you have optimism for Facebook.
link |
00:55:25.120
So there I think it's misplaced, but, but I think that distributing, yeah, no, so good
link |
00:55:31.760
for you.
link |
00:55:33.520
Go for it.
link |
00:55:34.520
That's a hard spot to be in.
link |
00:55:35.520
Yeah, no, good.
link |
00:55:36.520
Human interaction, like, on a daily basis, the context around me in my own home is something
link |
00:55:39.520
that I don't want some big company to know about at all, but I would be more than happy
link |
00:55:42.280
to have technology help me with it.
link |
00:55:44.200
Which kind of technology?
link |
00:55:45.200
Well, you know, just, Alexa, Amazon's Alexa, well, Alexa done right.
link |
00:55:49.200
And I think Alexa is a research platform right now more than anything else.
link |
00:55:52.160
But Alexa done right, you know, could do things like I, I leave the water running in my garden
link |
00:55:56.480
and I say, Hey, Alexa, the water's running in my garden.
link |
00:55:59.200
And even have Alexa figure out that that means when my wife comes home, that she should be
link |
00:56:02.040
told about that.
link |
00:56:03.600
That's a little bit of a reasoning.
link |
00:56:04.600
I would call that AI and by any kind of stretch, it's a little bit of reasoning and it actually
link |
00:56:08.860
kind of would make my life a little easier and better.
link |
00:56:11.000
And you know, I don't, I wouldn't call this a wow moment, but I kind of think that overall
link |
00:56:14.600
raises human happiness to have that kind of thing.
link |
00:56:18.320
But not when you're lonely, Alexa, knowing loneliness.
link |
00:56:20.840
No, no, I don't want Alexa to be, feel intrusive.
link |
00:56:25.600
And I don't want just the designer of the system to kind of work all this out.
link |
00:56:28.440
I really want to have a lot of control and I want transparency and control.
link |
00:56:32.440
And if a company can stand up and give me that in the context of new technology, I think
link |
00:56:36.800
they're going to,
link |
00:56:37.800
first of all, be way more successful than our current generation.
link |
00:56:39.280
And like I said, I was mentioning Microsoft, I really think they're, they're pivoting to
link |
00:56:43.300
kind of be the trusted old uncle, but you know, I think that they get that this is a
link |
00:56:47.000
way to go, that if you let people find technology that empowers them to have more control,
link |
00:56:51.600
and have control not just over privacy but over this rich set of interactions, that
link |
00:56:56.720
people are going to like that a lot more.
link |
00:56:58.120
And that's, that's the right business model going forward.
link |
00:57:00.560
What does control over privacy look like?
link |
00:57:02.240
Do you think you should be able to just view all the data that?
link |
00:57:04.760
No, it's much more than that.
link |
00:57:05.920
I mean, first of all, it should be an individual decision.
link |
00:57:07.900
Some people don't want privacy.
link |
00:57:09.220
They want their whole life out there.
link |
00:57:10.720
Other people want it.
link |
00:57:13.720
Privacy is not a zero-one thing.
link |
00:57:16.020
It's not a legal thing.
link |
00:57:17.020
It's not just about which data is available, which is not.
link |
00:57:20.280
I like to remind people that, you know, a couple hundred years ago, there
link |
00:57:24.880
were not really big cities, everyone lived in the countryside, in villages, and in villages
link |
00:57:29.640
Everybody knew everything about you.
link |
00:57:30.640
You didn't have any privacy.
link |
00:57:32.720
Is that bad?
link |
00:57:33.720
Are we better off now?
link |
00:57:34.720
Well, you know, arguably no, because what did you get for that loss of certain kinds
link |
00:57:39.040
of privacy?
link |
00:57:40.520
Well, people help each other, because they know everything about you.
link |
00:57:44.080
If they know something bad is happening, they will help you with that.
link |
00:57:46.400
Right.
link |
00:57:47.400
And now you live in a big city and no one knows about you.
link |
00:57:48.400
You get no help.
link |
00:57:50.840
So the answer kind of depends.
link |
00:57:52.680
I want certain people who I trust and there should be relationships.
link |
00:57:56.320
I should kind of manage all those, but who knows what about me?
link |
00:57:59.000
I should have some agency there.
link |
00:58:00.800
I shouldn't be adrift in a sea of technology where I have no agency.
link |
00:58:04.680
I don't want to go reading things and checking boxes.
link |
00:58:08.560
So I don't know how to do that.
link |
00:58:09.960
And I'm not a privacy researcher per se.
link |
00:58:11.480
I just, I recognize the vast complexity of this.
link |
00:58:14.360
It's not just technology.
link |
00:58:15.360
It's not just legal scholars meeting technologists.
link |
00:58:18.920
There's gotta be kind of a whole layers around it.
link |
00:58:20.900
And so I, when I alluded to this emerging engineering field, this is a big part of it.
link |
00:58:26.480
When electrical engineering came, I wasn't around at the time, but you just didn't
link |
00:58:31.320
plug electricity into walls and have it all kind of work.
link |
00:58:34.120
You had to have, like, Underwriters Laboratories that reassured you that that plug's not going
link |
00:58:37.840
to burn up your house and that that machine will do this and that and everything.
link |
00:58:41.720
There'll be whole people who can install things.
link |
00:58:44.520
There'll be people who can watch the installers.
link |
00:58:46.360
There'll be a whole layers, you know, an onion of these kinds of things.
link |
00:58:49.960
And for things as deep and interesting as privacy, which is at least as interesting
link |
00:58:53.960
as electricity, that's going to take decades to kind of work out, but it's going to require
link |
00:58:58.120
a lot of new structures that we don't have right now.
link |
00:59:00.320
So it's kind of hard to talk about it.
link |
00:59:02.320
And you're saying there's a lot of money to be made if you get it right.
link |
00:59:04.840
So something you should look at.
link |
00:59:05.840
A lot of money to be made in all these things that provide human services and people recognize
link |
00:59:09.560
them as useful parts of their lives.
link |
00:59:12.360
So yeah.
link |
00:59:14.280
So yeah, the dialogue sometimes goes from the exuberant technologists to the no technology
link |
00:59:19.660
is good, kind of.
link |
00:59:20.800
And that's, you know, in our public discourse, you know, you see too much of
link |
00:59:24.480
this kind of thing, and the sober discussions in the middle, which are the challenging
link |
00:59:28.400
ones to have, are where we need to be having our conversations.
link |
00:59:31.560
And you know, there's just not actually, there are not many fora for those.
link |
00:59:36.480
You know, there's, that's, that's kind of what I would look for.
link |
00:59:39.180
Maybe I could go and I could read a comment section of something and it would actually
link |
00:59:42.040
be this kind of dialogue going back and forth.
link |
00:59:44.520
You don't see much of this, right?
link |
00:59:45.800
Which is why actually there's a resurgence of podcasts out of all, because people are
link |
00:59:49.800
really hungry for conversation, but there's technology is not helping much.
link |
00:59:55.760
So comment sections of anything, including YouTube, are not helping.
link |
01:00:01.520
Yeah.
link |
01:00:02.520
And you think technically speaking, it's possible to help.
link |
01:00:07.800
I don't know the answers, but it's, it's less anonymity, a little more locality,
link |
01:00:13.840
you know, worlds that you kind of enter in and you trust the people there in those worlds
link |
01:00:17.340
so that when you start having a discussion, you know, not only is that people are not
link |
01:00:20.040
going to hurt you, but it's not going to be a total waste of your time because there's
link |
01:00:23.120
a lot of wasting of time that, you know, a lot of us, I pulled out of Facebook early
link |
01:00:26.640
on cause it was clearly going to waste a lot of my time even though there was some value.
link |
01:00:31.360
And so, yeah, worlds that are somehow you enter in and you know what you're getting
link |
01:00:34.600
and it's kind of appeals to you and you might, new things might happen, but you kind of have
link |
01:00:38.400
some, some trust in that world.
link |
01:00:40.820
And there's some deep, interesting, complex psychological aspects around anonymity, how
link |
01:00:46.520
that changes human behavior that's quite dark.
link |
01:00:49.960
Quite dark.
link |
01:00:50.960
Yeah.
link |
01:00:51.960
I think a lot of us are, especially those of us who really loved the advent of technology.
link |
01:00:55.440
I love social networks when they came out.
link |
01:00:56.760
I was just, I didn't see any negatives there at all.
link |
01:00:59.520
But then I started seeing comment sections.
link |
01:01:01.720
I think it was maybe, you know, with CNN or something.
link |
01:01:04.760
And I started to go, wow, this, this darkness I just did not know about and, and our technology
link |
01:01:10.040
is now amplifying it.
link |
01:01:11.520
So sorry for the big philosophical question, but on that topic, do you think human beings,
link |
01:01:15.960
because you've also, of all things, had a foot in psychology too, do you think
link |
01:01:21.120
human beings are fundamentally good?
link |
01:01:23.800
Like, all of us have good intent that could be mined, or is it, depending on context and
link |
01:01:32.240
environment, everybody could be evil.
link |
01:01:34.960
So my answer is fundamentally good.
link |
01:01:37.720
But fundamentally limited.
link |
01:01:39.240
All of us have, you know, blinkers on.
link |
01:01:41.320
We don't see the other person's pain that easily.
link |
01:01:43.940
We don't see the other person's point of view that easily.
link |
01:01:46.680
We're very much in our own head, in our own world.
link |
01:01:49.880
And on my good days, I think the technology could open us up to, you know, more perspectives,
link |
01:01:53.920
being less blinkered and more understanding. You know, a lot of wars in human history happened
link |
01:01:58.560
because of just ignorance.
link |
01:01:59.560
They didn't, they thought the other person was doing this while that person wasn't
link |
01:02:02.600
doing this.
link |
01:02:03.600
And we have huge amounts of that.
link |
01:02:05.440
But in my lifetime, I've not seen technology really help in that way yet.
link |
01:02:09.200
And I do, I do believe in that. But, you know, no, I think fundamentally humans
link |
01:02:13.600
are good.
link |
01:02:14.600
People suffer, people have grievances, they have grudges, and those things
link |
01:02:17.440
cause them to do things they probably wouldn't want.
link |
01:02:20.000
They regret it often.
link |
01:02:22.640
So no, I think, you know, part of the progress of technology is to indeed allow
link |
01:02:28.080
it to be a little easier to be the real good person you actually are.
link |
01:02:31.160
Well, but do you think individual human life or society could be modeled as an optimization
link |
01:02:39.880
problem?
link |
01:02:40.880
Not the way I typically think. I mean, you're talking about one of the most complex
link |
01:02:45.080
phenomena there are, you know, whether the individual human life or society
link |
01:02:49.600
as a whole.
link |
01:02:50.600
Both, both.
link |
01:02:51.600
I mean, individual human life is amazingly complex.
link |
01:02:54.440
And so you know, optimization is kind of just one branch of mathematics that talks about
link |
01:02:58.960
certain kinds of things.
link |
01:02:59.960
And it just feels way too limited for the complexity of such things.
link |
01:03:04.520
What properties of optimization problems do you think, so, for the most interesting
link |
01:03:09.440
problems that could be solved through optimization, what kind of properties does that surface
link |
01:03:13.860
have? Nonconvexity, convexity, linearity, all those kinds of things, saddle points?
link |
01:03:19.680
Well, so optimization is just one piece of mathematics.
link |
01:03:22.160
You know, there's, like, you just, even in our era, we're aware that, say, sampling is
link |
01:03:27.480
coming up with examples of something, coming up with a distribution.
link |
01:03:31.520
What's optimization?
link |
01:03:32.520
What's sampling?
link |
01:03:33.520
Well, they, you can, if you're a kind of a certain kind of mathematician, you can try
link |
01:03:35.920
to blend them and make them seem to be sort of the same thing.
link |
01:03:38.680
But optimization is roughly speaking, trying to find a point that, a single point that
link |
01:03:44.160
is the optimum of a criterion function of some kind.
link |
01:03:48.740
And sampling is trying to, from that same surface, treat that as a distribution or density
link |
01:03:53.940
and find points that have high density.
link |
01:03:56.920
So I want the entire distribution in a sampling paradigm and I want the, you know, the single
link |
01:04:03.480
point, that's the best point in the optimization paradigm.
link |
01:04:07.640
Now if you were optimizing in the space of probability measures, the output of that could
link |
01:04:11.880
be a whole probability distribution.
link |
01:04:13.080
So you can start to make these things the same.
link |
01:04:15.560
But in mathematics, if you go too high up that kind of abstraction hierarchy, you start
link |
01:04:18.400
to lose the, you know, the ability to do the interesting theorems.
link |
01:04:22.900
So you kind of don't try that.
link |
01:04:23.900
You try not to overly abstract.
link |
01:04:26.960
So as a small tangent, what kind of worldview do you find more appealing?
link |
01:04:31.540
One that is deterministic or stochastic?
link |
01:04:35.080
Well, that's easy.
link |
01:04:36.880
I mean, I'm a statistician.
link |
01:04:38.160
You know, the world is highly stochastic.
link |
01:04:40.400
I don't know what's going to happen in the next five minutes, right?
link |
01:04:42.360
Because of what you're going to ask, what we're going to do, what I'll say.
link |
01:04:44.360
Due to the uncertainty.
link |
01:04:45.360
Due to the...
link |
01:04:46.360
Massive uncertainty.
link |
01:04:47.360
Yeah.
link |
01:04:48.360
You know, massive uncertainty.
link |
01:04:49.360
And so the best I can do is have some rough sense of a probability distribution on things
link |
01:04:53.080
and somehow use that in my reasoning about what to do now.
link |
01:04:58.280
So what does this look like distributed at scale, when you have multi-agent systems?
link |
01:05:07.080
So optimization makes a lot more sense, sort of, at least from my
link |
01:05:13.760
robotics perspective, for a single robot, for a single agent, trying to optimize some
link |
01:05:18.240
objective function.
link |
01:05:21.040
When you start to enter the real world, these game-theoretic concepts start popping up.
link |
01:05:27.080
So how do you see optimization in this?
link |
01:05:30.400
Because you've talked about markets at scale.
link |
01:05:32.720
What does that look like?
link |
01:05:33.720
Do you see it as optimization?
link |
01:05:34.720
Do you see it as sampling?
link |
01:05:36.120
Do you see, like, how should you model it?
link |
01:05:38.280
These all blend together.
link |
01:05:39.280
And a system designer thinking about how to build an incentivized system will have a blend
link |
01:05:44.120
of all these things.
link |
01:05:45.120
So, you know, a particle in a potential well is optimizing a functional called a Lagrangian,
link |
01:05:49.800
right?
link |
01:05:50.800
The particle doesn't know that.
link |
01:05:51.800
There's no algorithm running that does that.
link |
01:05:54.640
It just happens.
link |
01:05:55.640
And so it's a description mathematically of something that helps us understand as analysts
link |
01:05:59.160
what's happening, right?
link |
01:06:00.840
And so the same thing will happen when we talk about, you know, mixtures of humans and
link |
01:06:03.520
computers and markets and so on and so forth, there'll be certain principles that allow
link |
01:06:07.080
us to understand what's happening, whether or not the actual algorithms are being used
link |
01:06:10.320
in any sense is not clear.
link |
01:06:13.000
Now at some point, I may have set up a multi agent or market kind of system.
link |
01:06:19.080
And I'm now thinking about an individual agent in that system.
link |
01:06:22.440
And they're asked to do some task and they're incentivized in some way, they get certain
link |
01:06:25.200
signals and they have some utility.
link |
01:06:28.160
What they will do at that point is they just won't know the answer, they may have to optimize
link |
01:06:31.560
to find an answer.
link |
01:06:32.560
Okay, so an optimizer could be embedded inside of an overall market.
link |
01:06:36.920
You know, and game theory is very, very broad.
link |
01:06:39.880
It is often studied very narrowly for certain kinds of problems.
link |
01:06:44.020
But it's roughly speaking, this is just the, I don't know what you're going to do.
link |
01:06:47.920
So I kind of anticipate that a little bit, and you anticipate what I'm anticipating.
link |
01:06:51.560
And we kind of go back and forth in our own minds.
link |
01:06:53.360
We run kind of thought experiments.
link |
01:06:55.480
You've talked about this interesting point in terms of game theory, you know, most optimization
link |
01:07:00.420
problems really hate saddle points, maybe you can describe what saddle points are.
link |
01:07:04.840
But I've heard you kind of mention that there's a branch of optimization
link |
01:07:09.560
that you could try to explicitly look for saddle points as a good thing.
link |
01:07:14.720
Oh, not optimization.
link |
01:07:15.840
That's just game theory. So there's all kinds of different equilibria in game
link |
01:07:19.740
theory.
link |
01:07:20.740
And some of them are highly explanatory of behavior.
link |
01:07:23.220
They're not attempting to be algorithmic.
link |
01:07:24.760
They're just trying to say, if you happen to be at this equilibrium, you would see certain
link |
01:07:29.080
kind of behavior.
link |
01:07:30.080
And we see that in real life.
link |
01:07:31.080
That's what an economist wants to do, especially behavioral economists. In continuous differential
link |
01:07:39.420
game theory, you're in continuous spaces, and some of the simplest equilibria are saddle
link |
01:07:44.020
points, and a Nash equilibrium is a saddle point.
link |
01:07:46.400
It's a special kind of saddle point.
link |
01:07:48.440
So classically, in game theory, you were trying to find Nash equilibria, and in algorithmic
link |
01:07:53.560
game theory, you're trying to find algorithms that would find them.
link |
01:07:56.400
And so you're trying to find saddle points.
link |
01:07:57.760
I mean, so that's literally what you're trying to do.
link |
01:08:00.720
But you know, any economist knows that Nash equilibria have their limitations.
link |
01:08:04.160
They are definitely not that explanatory in many situations.
link |
01:08:08.200
They're not what you really want.
link |
01:08:10.360
There's other kind of equilibria.
link |
01:08:12.180
And there's names associated with these because they came from history with certain people
link |
01:08:15.440
working on them, but there will be new ones emerging.
link |
01:08:18.080
So you know, one example is a Stackelberg equilibrium.
link |
01:08:21.200
So you know, Nash, you and I are both playing this game against each other or for each other,
link |
01:08:25.800
maybe it's cooperative, and we're both going to think it through and then we're going to
link |
01:08:29.000
decide and we're going to do our thing simultaneously.
link |
01:08:32.520
You know, in a Stackelberg, no, I'm going to be the first mover.
link |
01:08:34.640
I'm going to make a move.
link |
01:08:35.880
You're going to look at my move and then you're going to make yours.
link |
01:08:38.400
Now since I know you're going to look at my move, I anticipate what you're going to do.
link |
01:08:42.180
And so I don't do something stupid, but then I know that you are also anticipating me.
link |
01:08:46.960
So we're kind of going back and forth in this way, but there is then a first-mover thing.
link |
01:08:51.800
And so those are different equilibria, right?
link |
01:08:54.920
And so just mathematically, yeah, these things have certain topologies and certain shapes
link |
01:08:59.220
that raise questions like, algorithmically or dynamically, how do you move towards them?
link |
01:09:02.840
How do you move away from things?
link |
01:09:05.820
You know, so some of these questions have answers, they've been studied, others do not.
link |
01:09:09.500
And especially if it becomes stochastic, especially if there's large numbers of decentralized
link |
01:09:13.920
things, there's just, you know, young people get in this field who kind of think it's all
link |
01:09:17.440
done because we have, you know, TensorFlow.
link |
01:09:19.520
Well, no, these are all open problems and they're really important and interesting.
link |
01:09:23.680
And it's about strategic settings.
link |
01:09:25.140
How do I collect data?
link |
01:09:26.640
Suppose I don't know what you're going to do because I don't know you very well, right?
link |
01:09:29.280
Well, I got to collect data about you.
link |
01:09:31.180
So maybe I want to push you into a part of the space where I don't know much about you
link |
01:09:34.280
so I can get data.
link |
01:09:35.280
Because then later I'll realize that you'll never go there because of the
link |
01:09:38.960
way the game is set up.
link |
01:09:39.960
You know, that's part of the overall, you know, data analysis context.
link |
01:09:44.080
Even the game of poker is a fascinating space. Whenever there's any uncertainty, a lack of
link |
01:09:47.840
information, it's a super exciting space.
link |
01:09:52.560
Just to linger on optimization for a second.
link |
01:09:55.360
So when we look at deep learning, it's essentially minimization of a complicated loss function.
link |
01:10:01.600
So is there something insightful or hopeful that you see in the kinds of function surfaces
link |
01:10:07.400
that the loss functions of deep learning in the real world are trying to optimize over?
link |
01:10:13.800
Is there something interesting, or is it just the usual kinds of problems of optimization?
link |
01:10:20.040
I think from an optimization point of view, that surface, first of all, it's pretty smooth.
link |
01:10:25.600
And secondly, if it's overparameterized, there are kind of lots of paths
link |
01:10:29.120
down to reasonable optima.
link |
01:10:31.540
And so kind of getting downhill to an optimum is viewed as not as hard as
link |
01:10:35.680
you might've expected in high dimensions.
link |
01:10:39.980
The fact that some optima tend to be really good ones and others not so good,
link |
01:10:43.200
and that you sometimes tend to find the good ones, sort of still needs explanation.
link |
01:10:48.080
Yeah.
link |
01:10:49.080
But, but the particular surface is coming from the particular generation of neural nets.
link |
01:10:53.560
I kind of suspect those will, those will change in 10 years.
link |
01:10:56.880
It will not be exactly those surfaces.
link |
01:10:58.360
There'll be some others, and optimization theory will help contribute to why those other surfaces,
link |
01:11:02.500
or why other algorithms.
link |
01:11:05.640
Layers of arithmetic operations with a little bit of nonlinearity, that's not, that didn't
link |
01:11:09.840
come from neuroscience per se.
link |
01:11:10.960
I mean, maybe in the minds of some of the people working on it, they were thinking about
link |
01:11:13.920
brains, but there were arithmetic circuits in all kinds of fields, computer science, control
link |
01:11:19.040
theory and so on.
link |
01:11:20.640
And that layers of these could transform things in certain ways.
link |
01:11:23.480
And that, if it's smooth, maybe you could find parameter values. It's a sort of big discovery
link |
01:11:32.000
that it's working, that it's able to work at this scale.
link |
01:11:35.000
But I don't think that we're stuck with that, and we're certainly not stuck with
link |
01:11:39.840
that cause we're understanding the brain.
link |
01:11:42.120
So in terms of the algorithm side, sort of gradient descent, do you think we're stuck
link |
01:11:46.360
with gradient descent or variants of it?
link |
01:11:49.360
What variants do you find interesting, or do you think there'll be something else invented
link |
01:11:53.600
that is able to walk all over these optimization spaces in more interesting ways?
link |
01:11:59.720
So there's a codesign of the surface, or the architecture, and the algorithm.
link |
01:12:04.700
So if you just ask if we stay with the kind of architectures that we have now and not
link |
01:12:08.080
just neural nets, but you know, phase retrieval architectures or matrix completion architectures
link |
01:12:13.080
and so on.
link |
01:12:15.080
You know, I think we've kind of come to a place where, yeah, stochastic gradient algorithms
link |
01:12:19.560
are dominant and there are versions that are a little better than others.
link |
01:12:25.840
They have more guarantees, they're more robust and so on.
link |
01:12:29.160
And there's ongoing research to kind of figure out which is the best algorithm for which situation.
link |
01:12:34.260
But I think that that'll start to coevolve, that that'll put pressure on the actual architecture.
link |
01:12:37.880
And so we shouldn't do it in this particular way, we should do it in a different way because
link |
01:12:40.800
this other algorithm is now available if you do it in a different way.
link |
01:12:45.340
So I can't really anticipate that coevolution process. But, you know, gradients
link |
01:12:51.600
are amazing mathematical objects.
link |
01:12:54.480
A lot of people who start to study them more deeply mathematically are kind of
link |
01:13:01.120
shocked about what they are and what they can do.
link |
01:13:05.160
Think about it this way, suppose that I tell you if you move along the x axis, you go uphill
link |
01:13:11.040
in some objective by three units, whereas if you move along the y axis, you go uphill
link |
01:13:15.920
by seven units, right?
link |
01:13:18.000
Now I'm going to only allow you to move a certain unit distance, right?
link |
01:13:22.440
What are you going to do?
link |
01:13:23.440
Well, most people will say that I'm going to go along the y axis, I'm getting the biggest
link |
01:13:27.240
bang for my buck, you know, and my buck is only one unit, so I'm going to put all of
link |
01:13:31.120
it in the y axis, right?
link |
01:13:33.960
And why should I even take any of my strength, my step size and put any of it in the x axis
link |
01:13:39.320
because I'm getting less bang for my buck.
link |
01:13:41.480
That seems like a completely clear argument and it's wrong because the gradient direction
link |
01:13:47.480
is not to go along the y axis, it's to take a little bit of the x axis.
link |
01:13:51.780
And to understand that, you have to know some math and so even a trivial so called operator
link |
01:13:59.200
like gradient is not trivial and so, you know, exploiting its properties is still very important.
link |
01:14:04.020
Now we know that just plain gradient descent has got all kinds of problems, it gets stuck in
link |
01:14:06.840
many ways, and it doesn't have, you know, good dimension dependence and so on.
link |
01:14:10.960
So my own line of work recently has been about what kinds of stochasticity, how can we get
link |
01:14:15.960
dimension dependence, how can we do the theory of that, and we've come up with pretty favorable
link |
01:14:20.200
results with certain kinds of stochasticity.
link |
01:14:22.720
We have sufficient conditions generally.
link |
01:14:25.000
We know if you do this, we will give you a good guarantee.
link |
01:14:28.760
We don't have necessary conditions that it must be done a certain way in general.
link |
01:14:32.280
So stochasticity, how much randomness to inject into the walking along the gradient?
link |
01:14:38.200
And what kind of randomness?
link |
01:14:40.000
Why is randomness good in this process?
link |
01:14:42.240
Why is stochasticity good?
link |
01:14:44.240
Yeah, so I can give you simple answers but in some sense again, it's kind of amazing.
link |
01:14:49.320
Stochasticity just, you know, means that particular features of a surface that could have hurt you if you
link |
01:14:55.600
were doing one thing deterministically won't hurt you because by chance, there's very little
link |
01:15:02.080
chance that you would get hurt.
link |
01:15:04.800
So here stochasticity, it just kind of saves you from some of the particular features of
link |
01:15:12.840
surfaces.
link |
01:15:13.840
In fact, if you think about surfaces that are discontinuous in their first derivative,
link |
01:15:19.400
like an absolute value function, you will go down and hit that point where there's nondifferentiability.
link |
01:15:25.400
And if you're running a deterministic algorithm at that point, you can really do something
link |
01:15:28.520
bad.
link |
01:15:29.520
Whereas stochasticity just means it's pretty unlikely that's going to happen, that you're
link |
01:15:32.960
going to hit that point.
link |
01:15:35.720
So it's, again, nontrivial to analyze, but especially in higher dimensions, also, stochasticity,
link |
01:15:41.860
our intuition isn't very good about it but it has properties that kind of are very appealing
link |
01:15:45.440
in high dimensions, for a large number of reasons.
link |
01:15:49.200
So it's all part of the mathematics. That's what's fun about working in the field,
link |
01:15:52.520
that you get to try to understand this mathematics.
link |
01:15:57.040
But long story short, you know, partly empirically, it was discovered stochastic gradient is very
link |
01:16:01.200
effective, and theory kind of followed, I'd say. But I don't see that we're getting
link |
01:16:06.600
clearly out of that.
link |
01:16:09.120
What's the most beautiful, mysterious, or profound idea to you in optimization?
link |
01:16:15.560
I don't know the most.
link |
01:16:17.360
But let me just say that Nesterov's work on Nesterov acceleration to me is pretty surprising
link |
01:16:23.600
and pretty deep.
link |
01:16:26.280
Can you elaborate?
link |
01:16:27.280
Well Nesterov acceleration is just that, suppose that we are going to use gradients
link |
01:16:32.240
to move around in a space.
link |
01:16:33.240
For the reasons I've alluded to, they're nice directions to move.
link |
01:16:37.280
And suppose that I tell you that you're only allowed to use gradients. You're going
link |
01:16:40.520
to be this local person that can only sense kind of the change in the surface.
link |
01:16:47.440
But I'm going to give you kind of a computer that's able to store all your previous gradients.
link |
01:16:50.920
And so you start to learn something about the surface.
link |
01:16:55.020
And I'm going to restrict you to maybe move in the direction of like a linear span of
link |
01:16:58.620
all the gradients.
link |
01:16:59.620
So you can't kind of just move in some arbitrary direction, right?
link |
01:17:02.880
So now we have a well-defined mathematical complexity model.
link |
01:17:05.720
There's certain classes of algorithms that can do that and others that can't.
link |
01:17:09.320
And we can ask for certain kinds of surfaces, how fast can you get down to the optimum?
link |
01:17:13.800
So there's answers to these.
link |
01:17:14.960
So for a smooth convex function, there's an answer, which is one over the number of steps
link |
01:17:21.040
squared.
link |
01:17:22.400
You will be within a ball of that size after k steps.
link |
01:17:29.120
Gradient descent in particular has a slower rate, it's one over k.
link |
01:17:35.420
So you could ask, is gradient descent actually, even though we know it's a good algorithm,
link |
01:17:38.960
is it the best algorithm?
link |
01:17:39.960
And the answer is no.
link |
01:17:41.960
Well, not clear yet, because one over k squared is a lower bound.
link |
01:17:47.420
That's provably the best you can do.
link |
01:17:49.960
Gradient descent is one over k, but is there something better?
link |
01:17:52.740
And so I think as a surprise to most, Nesterov discovered a new algorithm that has got two
link |
01:17:59.280
pieces to it.
link |
01:18:00.280
It takes two gradients and puts those together in a certain kind of obscure way.
link |
01:18:06.640
And the thing doesn't even move downhill all the time.
link |
01:18:09.280
It sometimes goes back uphill.
link |
01:18:10.760
And if you're a physicist, that kind of makes some sense.
link |
01:18:13.160
You're building up some momentum and that is kind of the right intuition, but that intuition
link |
01:18:17.720
is not enough to understand kind of how to do it and why it works.
link |
01:18:22.460
But it does.
link |
01:18:23.460
It achieves one over k squared and it has a mathematical structure and it's still kind
link |
01:18:27.520
of to this day, a lot of us are writing papers and trying to explore that and understand
link |
01:18:31.160
it.
link |
01:18:32.560
So there are lots of cool ideas in optimization, but just kind of using gradients, I think
link |
01:18:36.680
is number one that goes back, you know, 150 years.
link |
01:18:40.760
And then Nesterov, I think, has made a major contribution with this idea.
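A minimal sketch of the contrast, assuming a simple ill-conditioned quadratic (the dimension, eigenvalues, and step count are arbitrary choices for illustration; the momentum schedule is the standard one):

    import numpy as np

    # Smooth convex test problem: f(x) = 0.5 * x^T A x, minimized at x = 0.
    d = 50
    A = np.diag(np.linspace(0.001, 1.0, d))  # eigenvalues in (0, 1]
    L = 1.0                                  # smoothness constant = max eigenvalue
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x

    x0 = np.ones(d)
    n_steps = 500

    # Plain gradient descent: O(1/k) suboptimality for smooth convex functions.
    x = x0.copy()
    for k in range(n_steps):
        x = x - (1.0 / L) * grad(x)
    print("gradient descent:", f(x))

    # Nesterov acceleration: O(1/k^2), with the standard momentum schedule.
    x, x_prev = x0.copy(), x0.copy()
    for k in range(1, n_steps + 1):
        y = x + (k - 1) / (k + 2) * (x - x_prev)  # extrapolate: the "uphill" moves
        x_prev = x
        x = y - (1.0 / L) * grad(y)               # gradient step at the lookahead
    print("nesterov        :", f(x))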
link |
01:18:43.580
So like you said, gradients themselves are in some sense, mysterious.
link |
01:18:47.840
They're not as trivial as...
link |
01:18:50.440
Not as trivial.
link |
01:18:52.040
Coordinate descent is more of a trivial one.
link |
01:18:54.080
You just pick one of the coordinates.
link |
01:18:55.080
That's how we think.
link |
01:18:56.080
That's how our human mind thinks.
link |
01:18:58.200
And gradients are not that easy for our human mind to grapple with.
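For contrast, a minimal sketch of coordinate descent on a small quadratic (the matrix is an arbitrary positive definite example): each update minimizes exactly over one coordinate while holding the others fixed.

    import numpy as np

    # Minimize f(x) = 0.5 * x^T A x - b^T x, one coordinate at a time.
    A = np.array([[3.0, 1.0],
                  [1.0, 2.0]])  # positive definite, so the sweep converges
    b = np.array([1.0, 1.0])
    x = np.zeros(2)
    for _ in range(50):
        for i in range(len(x)):
            # Set the i-th partial derivative (A @ x)[i] - b[i] to zero.
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    print(x, np.linalg.solve(A, b))  # matches the exact solution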
link |
01:19:03.280
An absurd question, but what is statistics?
link |
01:19:08.600
So here it's a little bit, it's somewhere between math and science and technology.
link |
01:19:12.160
It's somewhere in that convex hull.
link |
01:19:13.420
So it's a set of principles that allow you to make inferences that have got some reason
link |
01:19:17.720
to be believed and also principles that allow you to make decisions where you can have some
link |
01:19:22.640
reason to believe you're not going to make errors.
link |
01:19:25.100
So all of that requires some assumptions about what do you mean by an error?
link |
01:19:27.680
What do you mean by the probabilities?
link |
01:19:31.420
But after you start making some of those assumptions, you're led to conclusions that, yes, I can
link |
01:19:38.080
guarantee that if you do this in this way, your probability of making an error will be
link |
01:19:42.080
small.
link |
01:19:43.600
Your probability of continuing to not make errors over time will be small.
link |
01:19:47.880
And the probability that you found something that's real will be high.
link |
01:19:52.280
So decision making is a big part of that.
link |
01:19:54.640
Decision making is a big part.
link |
01:19:55.640
Yeah.
link |
01:19:56.640
So statistics, the short history is that it goes back as a formal discipline 250 years
link |
01:20:03.600
or so.
link |
01:20:04.960
It was called inverse probability because around that era, probability was developed
link |
01:20:09.280
sort of especially to explain gambling situations.
link |
01:20:12.000
Of course, interesting.
link |
01:20:15.480
So you would say, well, given the state of nature is this, there's a certain roulette
link |
01:20:18.880
board that has a certain mechanism and what kind of outcomes do I expect to see?
link |
01:20:23.680
And especially if I do things over long amounts of time, what outcomes will I see?
link |
01:20:27.440
And the physicists started to pay attention to this.
link |
01:20:30.640
And then people said, well, let's turn the problem around.
link |
01:20:33.500
What if I saw certain outcomes, could I infer what the underlying mechanism was?
link |
01:20:37.480
That's an inverse problem.
link |
01:20:38.480
And in fact, for quite a while, statistics was called inverse probability.
link |
01:20:41.640
That was the name of the field.
link |
01:20:44.060
And I believe that it was Laplace who was working in Napoleon's government who needed
link |
01:20:50.600
to do a census of France, learn about the people there.
link |
01:20:54.280
So he went and gathered data and he analyzed that data to determine policy and said, well,
link |
01:21:01.240
let's call this field that does this kind of thing statistics because the word state
link |
01:21:06.760
is in there.
link |
01:21:07.760
In French, that's état, but it's the study of data for the state.
link |
01:21:12.360
So anyway, that caught on and it's been called statistics ever since.
link |
01:21:18.640
But by the time it got formalized, it was sort of in the 30s.
link |
01:21:23.280
And around that time, there was game theory and decision theory developed nearby.
link |
01:21:28.560
People in that era didn't think of themselves as either computer science or statistics or
link |
01:21:31.640
control or econ.
link |
01:21:32.640
They were all the above.
link |
01:21:34.540
And so Von Neumann is developing game theory, but also thinking of that as decision theory.
link |
01:21:39.320
Wald is an econometrician developing decision theory and then turning that into statistics.
link |
01:21:45.120
And so it's all about, here's not just data and you analyze it, here's a loss function.
link |
01:21:50.160
Here's what you care about.
link |
01:21:51.160
Here's the question you're trying to ask.
link |
01:21:53.080
Here is a probability model and here's the risk you will face if you make certain decisions.
link |
01:21:59.440
And to this day, in most advanced statistical curricula, you teach decision theory as the
link |
01:22:04.040
starting point and then it branches out into the two branches of Bayesian and frequentist.
link |
01:22:08.500
But that's all about decisions.
link |
01:22:11.840
In statistics, what is the most beautiful, mysterious, maybe surprising idea that you've
link |
01:22:19.040
come across?
link |
01:22:20.040
Yeah, good question.
link |
01:22:21.040
I mean, there's a bunch of surprising ones.
link |
01:22:27.640
There's something that's way too technical for this thing, but something called James
link |
01:22:30.320
Stein estimation, which is kind of surprising and really takes time to wrap your head around.
link |
01:22:36.040
Can you try to maybe...
link |
01:22:37.040
I don't even want to try.
link |
01:22:39.120
Let me just say a colleague, Stephen Stigler at the University of Chicago, wrote a really beautiful
link |
01:22:44.200
paper on James Stein estimation, which helps to...
link |
01:22:47.200
It's viewed as a paradox.
link |
01:22:48.600
It kind of defeats the mind's attempts to understand it, but you can and Steve has a
link |
01:22:52.240
nice perspective on that.
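For the curious, a minimal simulation sketch of the surprise (the dimension, mean vector, and trial count are arbitrary): in dimension three or more, shrinking the observation toward the origin beats the obvious estimator in total squared error, for every fixed mean.

    import numpy as np

    rng = np.random.default_rng(0)

    def james_stein(x):
        # Shrink toward the origin; the factor requires dimension p >= 3.
        p = len(x)
        return (1.0 - (p - 2) / np.sum(x ** 2)) * x

    p, n_trials = 10, 20000
    theta = rng.normal(size=p)  # an arbitrary fixed mean vector

    mse_mle = mse_js = 0.0
    for _ in range(n_trials):
        x = theta + rng.normal(size=p)  # X ~ N(theta, I)
        mse_mle += np.sum((x - theta) ** 2)
        mse_js += np.sum((james_stein(x) - theta) ** 2)

    print("risk of X itself   :", mse_mle / n_trials)  # close to p = 10
    print("risk of James-Stein:", mse_js / n_trials)   # strictly smaller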
link |
01:22:56.560
So one of the troubles with statistics is that, like in physics, or in quantum
link |
01:23:00.320
physics, you have multiple interpretations.
link |
01:23:02.520
There's a wave and particle duality in physics and you get used to that over time, but it
link |
01:23:07.600
still kind of haunts you that you don't really quite understand the relationship.
link |
01:23:11.680
The electron's a wave and the electron's a particle.
link |
01:23:15.840
Well the same thing happens here.
link |
01:23:16.840
There's Bayesian ways of thinking and frequentist, and they are different.
link |
01:23:21.320
They sometimes become sort of the same in practice, but they are physically different.
link |
01:23:25.000
And then in some practice, they are not the same at all.
link |
01:23:27.640
They give you rather different answers.
link |
01:23:30.480
And so it is very much like wave and particle duality, and that is something that you have
link |
01:23:33.860
to kind of get used to in the field.
link |
01:23:35.840
Can you define Bayesian and frequentist?
link |
01:23:37.720
Yeah, in decision theory you can make this precise. I have a video that people could see.
link |
01:23:41.320
It's called Are You a Bayesian or a Frequentist? and it kind of tries to make it really clear.
link |
01:23:46.040
It comes from decision theory.
link |
01:23:47.160
So you know, decision theory, you're talking about loss functions, which are a function
link |
01:23:51.920
of data X and parameter theta.
link |
01:23:54.760
They're a function of two arguments.
link |
01:23:57.080
Okay.
link |
01:23:58.080
Neither one of those arguments is known.
link |
01:23:59.880
You don't know the data a priori.
link |
01:24:01.640
It's random, and the parameter's unknown.
link |
01:24:03.760
All right.
link |
01:24:04.760
So you have a function of two things you don't know, and you're trying to say, I want that
link |
01:24:07.240
function to be small.
link |
01:24:08.240
I want small loss, right?
link |
01:24:10.880
Well what are you going to do?
link |
01:24:13.440
So you sort of say, well, I'm going to average over these quantities or maximize over them
link |
01:24:17.280
or something so that, you know, I turn that uncertainty into something certain.
link |
01:24:23.120
So you could look at the first argument and average over it, or you could look at the
link |
01:24:25.920
second argument and average over it.
link |
01:24:27.040
That's Bayesian and frequentist.
link |
01:24:28.040
So the frequentist says, I'm going to look at the X, the data, and I'm going to take
link |
01:24:32.840
that as random and I'm going to average over the distribution.
link |
01:24:35.360
So I take the expectation of the loss under X. Theta is held fixed, right?
link |
01:24:40.700
That's called the risk.
link |
01:24:42.140
And so it's looking at all the data sets you could get, right?
link |
01:24:46.480
And say, how well will a certain procedure do under all those data sets?
link |
01:24:50.200
That's called a frequentist guarantee, right?
link |
01:24:52.560
So I think it is very appropriate when like you're building a piece of software and you're
link |
01:24:56.080
shipping it out there and people are using it on all kinds of data sets.
link |
01:24:59.280
You want to have a stamp, a guarantee on it that as people run it on many, many data sets
link |
01:25:02.600
that you never even thought about, that 95% of the time it will do the right thing.
link |
01:25:07.720
Perfectly reasonable.
link |
01:25:09.800
The Bayesian perspective says, well, no, I'm going to look at the other argument of the
link |
01:25:13.240
loss function, the theta part, okay?
link |
01:25:15.240
That's unknown and I'm uncertain about it.
link |
01:25:17.600
So I could have my own personal probability for what it is, you know, how many tall people
link |
01:25:21.560
are there out there?
link |
01:25:22.560
If I'm trying to infer the average height of the population, well, I have an idea roughly
link |
01:25:25.160
what the height is.
link |
01:25:27.440
So I'm going to average over the theta.
link |
01:25:32.200
So now that loss function, again, one argument's gone; now it's a function of
link |
01:25:37.760
X and that's what a Bayesian does is they say, well, let's just focus on the particular
link |
01:25:41.760
X we got, the data set we got, we condition on that.
link |
01:25:45.360
Conditional on the X, I say something about my loss.
link |
01:25:48.240
That's a Bayesian approach to things.
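In the standard decision-theoretic notation, with the loss L(X, theta) from above, the two averages are (a sketch of the usual textbook formulation, not a verbatim quote from the conversation):

    R(\theta) = \mathbb{E}_{X \mid \theta}\big[ L(X, \theta) \big]   % frequentist risk: theta held fixed, average over data sets
    \rho(x)   = \mathbb{E}_{\theta \mid x}\big[ L(x, \theta) \big]   % posterior expected loss: the observed data held fixed, average over theta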
link |
01:25:50.480
And the Bayesian will argue that it's not relevant to look at all the other data sets
link |
01:25:54.360
you could have gotten and average over them, the frequentist approach.
link |
01:25:58.800
It's really only the data set you got, right?
link |
01:26:02.080
And I do agree with that, especially in situations where you're working with a scientist, you
link |
01:26:06.000
can learn a lot about the domain and you're really only focused on certain kinds of data
link |
01:26:09.440
and you gathered your data and you make inferences.
link |
01:26:13.320
I don't agree with it, though, in the sense that there are needs for frequentist
link |
01:26:17.600
guarantees, you're writing software, people are using it out there, you want to say something.
link |
01:26:20.880
So these two things have got to fight each other a little bit, but they have to blend.
link |
01:26:24.880
So long story short, there's a set of ideas that are right in the middle that are called
link |
01:26:27.880
empirical Bayes.
link |
01:26:29.880
And empirical Bayes sort of starts with the Bayesian framework.
link |
01:26:34.600
It's kind of arguably philosophically more, you know, reasonable and kosher.
link |
01:26:40.680
Write down a bunch of the math that kind of flows from that, and then realize there's
link |
01:26:44.120
a bunch of things you don't know because it's the real world and you don't know everything.
link |
01:26:48.040
So you're uncertain about certain quantities.
link |
01:26:50.160
At that point, ask, is there a reasonable way to plug in an estimate for those things?
link |
01:26:54.440
Okay.
link |
01:26:55.440
And in some cases, there's quite a reasonable thing to do, to plug in, there's a natural
link |
01:27:00.480
thing you can observe in the world that you can plug in and then do a little bit more
link |
01:27:04.000
mathematics and assure yourself it's really good.
link |
01:27:06.440
So is that based on math or based on human expertise?
link |
01:27:09.800
Oh, they're both going in.
link |
01:27:10.800
The Bayesian framework allows you to put a lot of human expertise in, but the math kind
link |
01:27:16.160
of guides you along that path and then kind of reassures you at the end that you could put that
link |
01:27:19.480
stamp of approval: under certain assumptions, this thing will work.
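A minimal sketch of that plug-in move for the classic normal-means setup (the sample size and prior variance below are arbitrary): the Bayesian shrinkage factor depends on an unknown prior variance, and empirical Bayes estimates it from the marginal distribution of the data itself.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hierarchical model: theta_i ~ N(0, tau^2), then X_i ~ N(theta_i, 1).
    n, tau = 5000, 0.5
    theta = rng.normal(0.0, tau, size=n)
    x = theta + rng.normal(size=n)

    # The Bayesian posterior mean is (tau^2 / (1 + tau^2)) * x_i, but tau is
    # unknown. Marginally Var(X) = 1 + tau^2, so plug in an estimate of that.
    tau2_hat = max(np.var(x) - 1.0, 0.0)
    shrink = tau2_hat / (1.0 + tau2_hat)

    print("estimated tau^2:", tau2_hat)                            # near 0.25
    print("MSE of raw X   :", np.mean((x - theta) ** 2))           # about 1.0
    print("MSE after EB   :", np.mean((shrink * x - theta) ** 2))  # smaller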
link |
01:27:22.780
So you asked the question, what's my favorite, you know, or what's the most surprising, nice
link |
01:27:25.960
idea.
link |
01:27:26.960
So one that is more accessible is something called false discovery rate, which is, you
link |
01:27:31.760
know, you're making not just one hypothesis test or making one decision, you're making
link |
01:27:35.520
a whole bag of them.
link |
01:27:37.440
And in that bag of decisions, you look at the ones where you made a discovery, you announced
link |
01:27:41.800
that something interesting had happened.
link |
01:27:43.320
All right.
link |
01:27:44.320
That's going to be some subset of your big bag.
link |
01:27:47.160
Among the ones where you made a discovery, which subset of those are bad?
link |
01:27:50.880
Or false, false discoveries.
link |
01:27:53.320
You'd like the fraction of your false discoveries among your discoveries to be small.
link |
01:27:57.680
That's a different criterion than accuracy or precision or recall or sensitivity and
link |
01:28:02.480
specificity.
link |
01:28:03.480
It's a different quantity.
link |
01:28:04.960
Those latter ones, almost all of them, have more of a frequentist flavor.
link |
01:28:09.960
They say, given that the null hypothesis is true,
link |
01:28:13.960
Here's what accuracy I would get, or given that the alternative is true, here's what
link |
01:28:17.400
I would get.
link |
01:28:18.400
So it's kind of going forward from the state of nature to the data.
link |
01:28:22.360
The Bayesian goes the other direction from the data back to the state of nature.
link |
01:28:25.920
And that's actually what false discovery rate is.
link |
01:28:28.180
It says, given you made a discovery, okay, that's conditioned on your data.
link |
01:28:32.680
What's the probability of the hypothesis?
link |
01:28:34.960
It's going the other direction.
link |
01:28:36.920
And so the classical frequentist looks at that and says, well, I can't know that; there's some priors
link |
01:28:41.000
needed in that.
link |
01:28:42.600
And the empirical Bayesian goes ahead and plows forward and starts writing down these formulas
link |
01:28:47.460
and realizes at some point, some of those things can actually be estimated in a reasonable
link |
01:28:51.280
way.
link |
01:28:52.600
And so it's kind of, it's a beautiful set of ideas.
link |
01:28:54.220
So this kind of line of argument has come out.
link |
01:28:56.800
It's certainly not mine, but it sort of came out from Robbins around 1960.
link |
01:29:02.320
Brad Efron has written beautifully about this in various papers and books.
link |
01:29:07.320
And the FDR is, you know, Benjamini in Israel; John Storey did this Bayesian interpretation
link |
01:29:14.120
and so on.
link |
01:29:15.120
And I've absorbed these things over the years and find it a very healthy way to think
link |
01:29:18.480
about statistics.
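A minimal sketch of the standard Benjamini-Hochberg step-up procedure, which controls the false discovery rate at a chosen level (the toy p-values below are fabricated for illustration):

    import numpy as np

    def benjamini_hochberg(pvals, q=0.05):
        # Reject the k smallest p-values, where k is the largest index with
        # p_(k) <= (k / m) * q; this controls FDR at level q.
        p = np.asarray(pvals)
        m = len(p)
        order = np.argsort(p)
        below = p[order] <= (np.arange(1, m + 1) / m) * q
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.max(np.nonzero(below)[0])
            reject[order[: k + 1]] = True
        return reject

    # Toy mix: 90 null p-values (uniform) and 10 strong signals.
    rng = np.random.default_rng(0)
    pvals = np.concatenate([rng.uniform(size=90), rng.uniform(0, 1e-4, size=10)])
    print("discoveries:", int(benjamini_hochberg(pvals).sum()))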
link |
01:29:21.280
Let me ask you about intelligence to jump slightly back out into philosophy, perhaps.
link |
01:29:28.240
You said, and maybe you can elaborate, that just defining even the question
link |
01:29:33.940
of what is intelligence is a very difficult question.
link |
01:29:38.800
Is it a useful question?
link |
01:29:39.800
Do you think we'll one day understand the fundamentals of human intelligence and what
link |
01:29:45.240
it means, you know, have good benchmarks for general intelligence that we put before our
link |
01:29:51.880
machines?
link |
01:29:53.520
So I don't work on these topics so much; you're really asking the question for a psychologist,
link |
01:29:58.240
really.
link |
01:29:59.240
And I studied some, but I don't consider myself at least an expert at this point.
link |
01:30:04.440
You know, a psychologist aims to understand human intelligence, right?
link |
01:30:07.680
And I think many psychologists I know are fairly humble about this.
link |
01:30:10.960
They might try to understand how a baby understands, you know, whether something's a solid or liquid
link |
01:30:15.880
or whether something's hidden or not.
link |
01:30:18.720
And maybe how a child starts to learn the meaning of certain words, what's a verb, what's
link |
01:30:24.400
a noun and also, you know, slowly but surely trying to figure out things.
link |
01:30:30.580
But humans' ability to take a really complicated environment, reason about it, abstract about
link |
01:30:35.720
it, find the right abstractions, communicate about it, interact and so on is just, you
link |
01:30:41.520
know, really staggeringly rich and complicated.
link |
01:30:46.920
And so, you know, I think in all humility, we don't think we're kind of aiming for that
link |
01:30:51.320
in the near future.
link |
01:30:52.320
A certain psychologist doing experiments with babies in the lab or with people talking has
link |
01:30:56.820
a much more limited aspiration.
link |
01:30:58.920
And you know, Kahneman and Tversky would look at our reasoning patterns and they're not
link |
01:31:02.120
deeply understanding all of how we do our reasoning, but they're sort of saying, hey,
link |
01:31:05.880
here are some oddities about the reasoning and some things you should think about.
link |
01:31:09.480
But also, as I emphasize in some things I've been writing about, you know, AI, the revolution
link |
01:31:14.560
hasn't happened yet.
link |
01:31:15.560
Yeah.
link |
01:31:16.560
Great blog post.
link |
01:31:17.560
I've been emphasizing that, you know, if you step back and look at intelligent systems
link |
01:31:22.580
of any kind and whatever you mean by intelligence, it's not just the humans or the animals or,
link |
01:31:26.800
you know, the plants or whatever, you know, so a market that brings goods into a city,
link |
01:31:31.680
you know, food to restaurants or something every day is a system.
link |
01:31:35.680
It's a decentralized set of decisions.
link |
01:31:37.820
Looking at it from far enough away, it's just like a collection of neurons.
link |
01:31:40.840
Every neuron is making its own little decisions, presumably in some way.
link |
01:31:44.600
And if you step back enough, every little part of an economic system is making all of
link |
01:31:48.000
its decisions.
link |
01:31:49.560
And just like with the brain, who knows what an individual neuron does and what the overall
link |
01:31:53.020
goal is, right?
link |
01:31:54.800
But something happens at some aggregate level, same thing with the economy.
link |
01:31:58.560
People eat in a city and it's robust.
link |
01:32:01.380
It works at all scales, small villages to big cities.
link |
01:32:04.840
It's been working for thousands of years.
link |
01:32:07.040
It works rain or shine, so it's adaptive.
link |
01:32:10.520
So all the kind of, you know, those are adjectives one tends to apply to intelligent systems.
link |
01:32:14.680
Robust, adaptive, you know, you don't need to keep adjusting it, self healing, whatever.
link |
01:32:19.960
Plus not perfect.
link |
01:32:20.960
You know, intelligences are never perfect and markets are not perfect.
link |
01:32:24.680
But I do not believe in this era that you can say, well, our computers
link |
01:32:28.160
or our humans are smart, but, you know, markets are not. No, markets are.
link |
01:32:31.760
So they are intelligent.
link |
01:32:34.080
Now we humans didn't evolve to be markets.
link |
01:32:38.160
We've been participating in them, right?
link |
01:32:40.320
But we are not ourselves a market per se.
link |
01:32:43.280
The neurons could be viewed as the market.
link |
01:32:45.920
There's an economic, you know, neuroscience kind of perspective.
link |
01:32:48.200
That's interesting to pursue all that.
link |
01:32:50.320
The point, though, is that if you were to study humans and really be the world's best
link |
01:32:54.200
psychologist, study for thousands of years, and come up with the theory of human intelligence,
link |
01:32:57.440
you might never have discovered principles of markets, you know, supply-demand curves
link |
01:33:01.840
and you know, matching and auctions and all that.
link |
01:33:05.000
Those are real principles and they lead to a form of intelligence that's not maybe human
link |
01:33:08.760
intelligence.
link |
01:33:09.760
It's arguably another kind of intelligence.
link |
01:33:11.480
There probably are third kinds of intelligence or fourth that none of us are really thinking
link |
01:33:14.880
too much about right now.
link |
01:33:16.480
And all of those are relevant to computer systems in the future.
link |
01:33:20.840
Certainly the market one is relevant right now.
link |
01:33:23.880
Whereas the understanding of human intelligence is not so clear that it's relevant right now.
link |
01:33:27.440
Probably not.
link |
01:33:29.360
So if you want general intelligence, whatever one means by that, or, you know, understanding
link |
01:33:33.160
intelligence in a deep sense and all that, it definitely has to be not just human
link |
01:33:37.000
intelligence.
link |
01:33:38.000
It's gotta be this broader thing.
link |
01:33:39.280
And that's not a mystery.
link |
01:33:40.480
Markets are intelligent.
link |
01:33:41.480
So, you know, it's definitely not just a philosophical stance to say we've got to move beyond human intelligence.
link |
01:33:46.000
That sounds ridiculous.
link |
01:33:47.000
Yeah.
link |
01:33:48.000
But it's not.
link |
01:33:49.000
And in that blog post, you define different kinds of like intelligent infrastructure,
link |
01:33:52.160
AI, which I really like; it's some of the concepts you've just been describing.
link |
01:33:58.040
Do you see ourselves, if we see earth, human civilization as a single organism, do you
link |
01:34:02.720
think the intelligence of that organism, when you think from the perspective of markets
link |
01:34:06.980
and intelligent infrastructure, is increasing? Is it increasing linearly?
link |
01:34:12.340
Is it increasing exponentially?
link |
01:34:14.240
What do you think the future of that intelligence?
link |
01:34:16.000
Yeah, I don't know.
link |
01:34:17.000
I don't tend to think, I don't tend to answer questions like that because you know, that's
link |
01:34:20.560
science fiction.
link |
01:34:21.560
I'm hoping to catch you off guard.
link |
01:34:25.200
Well again, because you said it's so far in the future, it's fun to ask and you'll probably,
link |
01:34:31.320
you know, like you said, predicting the future is really nearly impossible.
link |
01:34:36.440
But say, as an axiom, one day we create a human-level, a superhuman-level intelligence, not
link |
01:34:43.720
at the scale of markets, but at the scale of an individual.
link |
01:34:47.560
What do you think it is, what do you think it would take to do that?
link |
01:34:51.760
Or maybe to ask another question is how would that system be different than the biological
link |
01:34:58.880
human beings that we see around us today?
link |
01:35:01.480
Is it possible to say anything interesting to that question or is it just a stupid question?
link |
01:35:06.160
It's not a stupid question, but it's science fiction.
link |
01:35:08.200
Science fiction.
link |
01:35:09.200
And so I'm totally happy to read science fiction and think about it from time in my own life.
link |
01:35:13.400
I loved, there was this like brain in a vat kind of, you know, little thing that people
link |
01:35:17.480
were talking about when I was a student, I remember, you know, imagine that, you know,
link |
01:35:22.680
between your brain and your body, there's a, you know, there's a bunch of wires, right?
link |
01:35:26.960
And suppose that every one of them was replaced with a literal wire.
link |
01:35:31.480
And then suppose that wire was turned into actually a little wireless one, you know, there's a receiver
link |
01:35:35.000
and sender.
link |
01:35:36.000
So the brain has got all the senders and receivers, you know, on all of its exiting
link |
01:35:41.560
axons, and all the dendrites down to the body have been replaced with senders and receivers.
link |
01:35:45.920
Now you could move the body off somewhere and put the brain in a vat, right?
link |
01:35:50.080
And then you could do things like start killing off those senders and receivers one by one.
link |
01:35:54.600
And after you've killed off all of them, where is that person?
link |
01:35:56.960
You know, they thought they were out in the body walking around the world and they moved
link |
01:35:59.640
on.
link |
01:36:00.640
So those are science fiction things.
link |
01:36:01.640
Those are fun to think about.
link |
01:36:02.640
It's just intriguing to think about what is thought, where is it, and all that.
link |
01:36:05.760
And I think every 18 year old should take philosophy classes and think about these things.
link |
01:36:10.680
And I think that everyone should think about what could happen in society that's kind of
link |
01:36:13.440
bad and all that.
link |
01:36:14.440
But I really don't think that's the right thing for most of us that are my age group
link |
01:36:17.600
to be doing and thinking about.
link |
01:36:19.480
I really think that we have so many more pressing, you know, challenges and dangers and
link |
01:36:26.720
real things to build and all that such that, you know, spending too much time on science
link |
01:36:32.320
fiction, at least in public for like this, I think is not what we should be doing.
link |
01:36:36.080
Maybe over beers in private.
link |
01:36:37.600
That's right.
link |
01:36:38.600
Well, I'm not going to broadcast where I have beers because this is going to go on Facebook
link |
01:36:43.600
and I don't want a lot of people showing up there.
link |
01:36:45.480
But yeah, I love Facebook, Twitter, Amazon, YouTube.
link |
01:36:51.640
I'm optimistic and hopeful, but maybe, maybe I don't have grounds for such optimism
link |
01:36:58.280
and hope.
link |
01:36:59.280
But let me ask, you've mentored some of the brightest sort of some of the seminal figures
link |
01:37:07.160
in the field.
link |
01:37:08.160
Can you give advice to people who are undergraduates today?
link |
01:37:14.080
What advice, you know, would you give them on their journey if they're interested in
link |
01:37:17.640
machine learning and in the ideas of markets from economics and psychology and all the
link |
01:37:23.920
kinds of things that you've been exploring?
link |
01:37:25.680
What steps should they take on that journey?
link |
01:37:27.960
Well, yeah, first of all, the door is open and second, it's a journey.
link |
01:37:30.360
I like your language there.
link |
01:37:33.880
It is not that you're so brilliant and you have great, brilliant ideas and therefore
link |
01:37:37.120
that's just, you know, that's how you have success or that's how you enter into the field.
link |
01:37:42.440
It's that you apprentice yourself, you spend a lot of time, you work on hard things, you
link |
01:37:48.480
try and pull back and you be as broad as you can, you talk to lots of people.
link |
01:37:53.880
And it's like entering in any kind of a creative community.
link |
01:37:57.000
There's years that are needed and human connections are critical to it.
link |
01:38:01.600
So, you know, I think about, you know, being a musician or being an artist or something,
link |
01:38:06.080
you don't just, you know, immediately from day one, you know, you're a genius and therefore
link |
01:38:10.600
you do it.
link |
01:38:11.600
No, you, you know, practice really, really hard on basics and you be humble about where
link |
01:38:18.900
you are, and you realize you'll never be an expert on everything.
link |
01:38:22.200
So you kind of pick and there's a lot of randomness and a lot of kind of luck, but luck just kind
link |
01:38:29.460
of picks out which branch of the tree you go down, but you'll go down some branch.
link |
01:38:33.960
So yeah, it's a community.
link |
01:38:35.460
So graduate school, I still think, is one of the wonderful phenomena that we have
link |
01:38:39.200
in our world.
link |
01:38:40.780
It's very much about apprenticeship with an advisor.
link |
01:38:43.160
It's very much about a group of people you belong to.
link |
01:38:45.780
It's a four or five year process.
link |
01:38:47.020
So it's plenty of time to start from kind of nothing to come up to something, you know,
link |
01:38:51.580
more expertise, and then to start to have your own creativity flower,
link |
01:38:54.700
even surprising your own self.
link |
01:38:58.240
And it's a very cooperative endeavor.
link |
01:38:59.760
I think a lot of people think of science as highly competitive and I think in some other
link |
01:39:05.620
fields it might be more so.
link |
01:39:08.080
Here it's way more cooperative than you might imagine.
link |
01:39:11.860
And people are always teaching each other something and people are always more than
link |
01:39:14.660
happy to be clear. So I feel I'm an expert on certain kinds of things, but I'm very much
link |
01:39:20.000
not an expert on lots of other things, and a lot of them are relevant, and a lot of them are
link |
01:39:23.480
things I should know but, you know, don't.
link |
01:39:26.320
So I'm always willing to reveal my ignorance to people around me so they can teach me things.
link |
01:39:32.100
And I think a lot of us feel that way about our field.
link |
01:39:34.220
So it's very cooperative.
link |
01:39:35.460
I might add it's also very international because it's so cooperative.
link |
01:39:39.140
We see no barriers.
link |
01:39:40.780
And so the nationalism that you see, especially in the current era and everything,
link |
01:39:44.460
is just at odds with the way that most of us think about what we're doing here, where
link |
01:39:48.180
this is a human endeavor and we cooperate and are very much trying to do it together
link |
01:39:53.420
for the, you know, the benefit of everybody.
link |
01:39:56.580
So last question, where and how and why did you learn French and which language is more
link |
01:40:02.820
beautiful English or French?
link |
01:40:05.660
Great question.
link |
01:40:06.660
So first of all, I think Italian is actually more beautiful than French and English.
link |
01:40:10.100
And I also speak that.
link |
01:40:11.100
So I'm married to an Italian and I have kids and we speak Italian.
link |
01:40:15.860
Anyway, all kidding aside, every language allows you to express things a bit differently.
link |
01:40:23.180
And one of the great fun things to do in life is to explore those things.
link |
01:40:26.820
So in fact, when kids or teens or college students ask me what they should study, I say, well,
link |
01:40:34.540
go where your heart is, certainly do a lot of math.
link |
01:40:36.980
Math is good for everybody, but do some poetry and do some history and do some language too.
link |
01:40:42.500
You know, throughout your life, you'll want to be a thinking person.
link |
01:40:44.620
You'll want to have done that.
link |
01:40:47.500
For me, French I learned when I was, I'd say a late teen, I was living in the middle of
link |
01:40:54.700
the country in Kansas and not much was going on in Kansas with all due respect to Kansas.
link |
01:41:01.100
And so my parents happened to have some French books on the shelf and just in my boredom,
link |
01:41:04.380
I pulled them down and I found this is fun.
link |
01:41:07.140
And I kind of learned the language by reading.
link |
01:41:09.220
And when I first heard it spoken, I had no idea what was being spoken, but I realized
link |
01:41:13.540
I somehow knew it from some previous life and so I made the connection.
link |
01:41:18.540
But then I traveled and just I love to go beyond my own barriers and my own comfort
link |
01:41:23.500
or whatever.
link |
01:41:24.500
And I found myself on trains in France next to say older people who had lived a whole
link |
01:41:29.460
life of their own.
link |
01:41:30.460
And the ability to communicate with them was special and the ability to also see myself
link |
01:41:37.900
in other people's shoes and have empathy and kind of work on that language as part of that.
link |
01:41:43.100
So after that kind of experience and also embedding myself in French culture, which
link |
01:41:49.140
is quite amazing, languages are rich, not just because there's something inherently
link |
01:41:53.780
beautiful about it, but it's all the creativity that went into it.
link |
01:41:55.940
So I learned a lot of songs, read poems, read books.
link |
01:41:59.900
And then I was here actually at MIT where we're doing the podcast today and a young
link |
01:42:05.300
professor not yet married and not having a lot of friends in the area.
link |
01:42:11.960
So I just didn't have much to do; I was kind of a bored person.
link |
01:42:13.980
I said, I heard a lot of Italians around.
link |
01:42:16.020
There happened to be a lot of Italians at MIT, Italian professors, for some reason.
link |
01:42:20.020
And so I was kind of vaguely understanding what they were talking about.
link |
01:42:22.060
I said, well, I should learn this language too.
link |
01:42:23.660
So I did.
link |
01:42:25.620
And then later I met my spouse and Italian became a part of my life.
link |
01:42:30.860
But I go to China a lot these days.
link |
01:42:32.180
I go to Asia, I go to Europe and every time I go, I kind of am amazed by the richness
link |
01:42:38.160
of human experience, and people don't have any idea, if you haven't traveled, kind of
link |
01:42:42.820
how amazingly rich it is. And I love the diversity.
link |
01:42:46.900
It's not just a buzzword to me.
link |
01:42:48.060
It really means something.
link |
01:42:49.060
I love to embed myself with other people's experiences.
link |
01:42:53.180
And so yeah, learning language is a big part of that.
link |
01:42:56.420
I think I've said in some interview at some point that if I had millions of dollars and
link |
01:43:00.460
infinite time or whatever, what would you really work on if you really wanted to do
link |
01:43:03.300
AI?
link |
01:43:04.300
And for me, that is natural language and really done right.
link |
01:43:07.360
Deep understanding of language.
link |
01:43:09.840
That's to me, an amazingly interesting scientific challenge.
link |
01:43:13.580
One we're very far away on.
link |
01:43:15.180
One we're very far away from. But good natural language
link |
01:43:17.720
people are kind of really invested in it.
link |
01:43:19.140
I think a lot of them see that's where the core of AI is that if you understand that
link |
01:43:22.460
you really help human communication, you understand something about the human mind, the semantics
link |
01:43:26.580
that come out of the human mind, and I agree. I think that will take such a long time.
link |
01:43:30.980
So I didn't do that in my career just cause I kind of, I was behind in the early days.
link |
01:43:34.720
I didn't kind of know enough of that stuff.
link |
01:43:36.460
I was at MIT, I didn't learn much language and it was too late at some point to kind
link |
01:43:41.180
of spend a whole career doing that, but I admire that field and so in my little way
link |
01:43:47.180
by learning language, you know, kind of that part of my brain has been trained up.
link |
01:43:53.340
Yann was right.
link |
01:43:55.460
You truly are the Miles Davis of machine learning.
link |
01:43:57.460
I don't think there's a better place to end it.
link |
01:43:59.620
Mike, it was a huge honor talking to you today.
link |
01:44:01.580
Merci beaucoup.
link |
01:44:02.580
All right.
link |
01:44:03.580
It's been my pleasure.
link |
01:44:04.580
Thanks for listening to this conversation with Michael I. Jordan and thank you to our
link |
01:44:09.300
presenting sponsor, Cash App.
link |
01:44:11.420
Download it, use code LEXPodcast, you'll get $10 and $10 will go to FIRST, an organization
link |
01:44:18.100
that inspires and educates young minds to become science and technology innovators of
link |
01:44:22.580
tomorrow.
link |
01:44:23.880
If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, support
link |
01:44:28.820
on Patreon, or simply connect with me on Twitter at Lex Friedman.
link |
01:44:34.700
And now let me leave you with some words of wisdom from Michael I. Jordan from his blog
link |
01:44:39.340
post titled Artificial Intelligence, the revolution hasn't happened yet, calling for broadening
link |
01:44:45.580
the scope of the AI field.
link |
01:44:48.560
We should embrace the fact that what we are witnessing is the creation of a new branch
link |
01:44:52.860
of engineering.
link |
01:44:54.340
The term engineering is often invoked in a narrow sense in academia and beyond with overtones
link |
01:45:00.660
of cold, affectless machinery and negative connotations of loss of control by humans.
link |
01:45:07.540
But an engineering discipline can be what we want it to be.
link |
01:45:11.020
In the current era, we have a real opportunity to conceive of something historically new,
link |
01:45:16.340
a human centric engineering discipline.
link |
01:45:19.740
I will resist giving this emerging discipline a name, but if the acronym AI continues to
link |
01:45:24.380
be used, let's be aware of the very real limitations of this placeholder.
link |
01:45:29.860
Let's broaden our scope, tone down the hype, and recognize the serious challenges ahead.
link |
01:45:37.300
Thank you for listening and hope to see you next time.