
Michael I. Jordan: Machine Learning, Recommender Systems, and Future of AI | Lex Fridman Podcast #74



link |
00:00:00.000
The following is a conversation with Michael I. Jordan, a professor at Berkeley and one
link |
00:00:05.600
of the most influential people in the history of machine learning, statistics, and artificial
link |
00:00:10.280
intelligence.
link |
00:00:11.280
He has been cited over 170,000 times and has mentored many of the world class researchers
link |
00:00:17.640
defining the field of AI today, including Andrew Ng, Zoubin Ghahramani, Ben Taskar, and
link |
00:00:25.480
Yoshua Bengio.
link |
00:00:27.600
All this, to me, is as impressive as the over 32,000 points and the six NBA championships
link |
00:00:34.560
of the Michael J. Jordan of basketball fame.
link |
00:00:38.880
There's a nonzero probability that I'd talk to the other Michael Jordan given my connection
link |
00:00:43.200
to and love of the Chicago Bulls of the 90s, but if I had to pick one, I'm going with the
link |
00:00:48.520
Michael Jordan of statistics and computer science, or as Yann LeCun calls him, the Miles
link |
00:00:54.160
Davis of machine learning.
link |
00:00:56.160
In his blog post titled Artificial Intelligence, The Revolution Hasn't Happened Yet, Michael
link |
00:01:01.560
argues for broadening the scope of the artificial intelligence field.
link |
00:01:06.040
In many ways, the underlying spirit of this podcast is the same, to see artificial intelligence
link |
00:01:12.080
as a deeply human endeavor, to not only engineer algorithms and robots, but to understand and
link |
00:01:18.640
empower human beings at all levels of abstraction, from the individual to our civilization as
link |
00:01:25.080
a whole.
link |
00:01:26.760
This is the Artificial Intelligence Podcast.
link |
00:01:29.440
If you enjoy it, subscribe on YouTube, give us five stars at Apple Podcast, support it
link |
00:01:34.040
on Patreon, or simply connect with me on Twitter, at Lex Fridman, spelled F R I D M A N.
link |
00:01:42.120
As usual, I'll do one or two minutes of ads now and never any ads in the middle that can
link |
00:01:46.640
break the flow of the conversation.
link |
00:01:48.680
I hope that works for you and doesn't hurt the listening experience.
link |
00:01:53.960
This show is presented by Cash App, the number one finance app in the App Store.
link |
00:01:58.320
When you get it, use code LEX Podcast.
link |
00:02:01.920
Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with
link |
00:02:06.480
as little as $1.
link |
00:02:08.480
Since Cash App does fractional share trading, let me mention that the order execution algorithm
link |
00:02:13.480
that works behind the scenes to create the abstraction of the fractional orders is to
link |
00:02:18.440
me an algorithmic marvel.
link |
00:02:20.960
So big props to the Cash App engineers for solving a hard problem that in the end provides
link |
00:02:26.400
an easy interface that takes a step up to the next layer of abstraction over the stock
link |
00:02:31.120
market, making trading more accessible for new investors and diversification much easier.
link |
00:02:38.440
So once again, if you get Cash App from the App Store or Google Play and use the code LEX
link |
00:02:43.120
Podcast, you'll get $10 and Cash App will also donate $10 to FIRST, one of my favorite
link |
00:02:49.280
organizations that is helping to advance robotics and STEM education for young people around
link |
00:02:55.120
the world.
link |
00:02:57.120
And now, here's my conversation with Michael I. Jordan.
link |
00:03:02.720
Given that you're one of the greats in the field of AI, machine learning, computer science,
link |
00:03:06.560
and so on, you're trivially called the Michael Jordan of machine learning, although as you
link |
00:03:13.960
know, you were born first, so technically MJ is the Michael I. Jordan of basketball,
link |
00:03:19.760
but anyway, my favorite is Yann LeCun calling you the Miles Davis of machine learning, because
link |
00:03:25.560
as he says, you reinvent yourself periodically and sometimes leave fans scratching their
link |
00:03:30.640
heads after you change direction.
link |
00:03:32.480
So can you put at first your historian hat on and give a history of computer science and
link |
00:03:39.000
AI as you saw it, as you experienced it, including the four generations of AI successes that
link |
00:03:46.200
I've seen you talk about?
link |
00:03:48.000
Sure.
link |
00:03:49.000
Yeah.
link |
00:03:50.000
First of all, I much prefer Yan's metaphor.
link |
00:03:54.200
Miles Davis was a real explorer in jazz, and he had a coherent story.
link |
00:04:00.000
So I think I have one, but it's not just the one you lived.
link |
00:04:03.320
It's the one you think about later.
link |
00:04:04.920
What a good historian does is they look back and they revisit.
link |
00:04:09.920
I think what's happening right now is not AI.
link |
00:04:13.360
That was an intellectual aspiration that's still alive today as an aspiration.
link |
00:04:18.660
But I think this is akin to the development of chemical engineering from chemistry or
link |
00:04:22.480
electrical engineering from electromagnetism.
link |
00:04:25.920
So if you go back to the 30s or 40s, there wasn't yet chemical engineering.
link |
00:04:31.040
There was chemistry.
link |
00:04:32.040
There was fluid flow.
link |
00:04:33.040
There was mechanics and so on.
link |
00:04:35.600
But people pretty clearly viewed it as an interesting goal to try to build factories that make
link |
00:04:41.280
chemical products and do it viably, safely, make good ones, do it at scale.
link |
00:04:48.040
So people started to try to do that, of course, and some factories worked, some didn't.
link |
00:04:51.720
Some were not viable, some exploded.
link |
00:04:54.080
But in parallel, a whole field developed called chemical engineering.
link |
00:04:58.200
Chemical engineering is a field.
link |
00:04:59.560
It's no bones about it.
link |
00:05:00.960
It has theoretical aspects to it.
link |
00:05:02.560
It has practical aspects.
link |
00:05:04.720
It's not just engineering, quote, unquote.
link |
00:05:06.720
It's the real thing, real concepts are needed.
link |
00:05:09.640
Same thing with electrical engineering.
link |
00:05:11.680
There were Maxwell's equations, which in some sense were everything you know about electromagnetism.
link |
00:05:16.600
But you needed to figure out how to build circuits, how to build modules, how to put
link |
00:05:19.080
them together, how to bring electricity from one point to another safely and so on and
link |
00:05:22.840
so forth.
link |
00:05:23.840
So a whole field that developed called electrical engineering.
link |
00:05:26.040
I think that's what's happening right now is that we have a proto field, which is statistics,
link |
00:05:32.520
computer science, more the theoretical, algorithmic side of computer science.
link |
00:05:36.240
That was enough to start to build things.
link |
00:05:38.000
But what things?
link |
00:05:39.320
Systems that bring value to human beings and use human data and mix in human decisions.
link |
00:05:44.080
The engineering side of that is all ad hoc.
link |
00:05:47.600
That's what's emerging.
link |
00:05:48.600
In fact, if you want to call machine learning a field, I think that's what it is.
link |
00:05:51.520
That's a proto form of engineering based on statistical and computational ideas in previous
link |
00:05:55.520
generations.
link |
00:05:56.520
But do you think there's something deeper about AI in its dreams and aspirations as
link |
00:06:01.240
compared to chemical engineering and electrical engineering?
link |
00:06:03.840
Well, the dreams and aspirations may be, but those are 500 years from now.
link |
00:06:07.960
I think that that's like the Greeks sitting there and saying, it would be neat to get
link |
00:06:10.480
to the moon someday.
link |
00:06:12.880
I think we have no clue how the brain does computation.
link |
00:06:16.160
We just don't have a clue.
link |
00:06:17.160
We're even worse than the Greeks on most anything interesting scientifically of our era.
link |
00:06:23.720
Can you linger on that just for a moment because you stand not completely unique, but a little
link |
00:06:29.080
bit unique in the clarity of that.
link |
00:06:31.640
Can you elaborate your intuition of where we stand in our understanding of the human
link |
00:06:37.120
brain?
link |
00:06:38.120
A lot of people say, and neuroscientists say, we're not very far in understanding the human
link |
00:06:41.240
brain, but you're saying we're in the dark here.
link |
00:06:44.680
Well, I know I'm not unique.
link |
00:06:45.880
I don't think I'm even unique in the clarity, but if you talk to real neuroscientists who really
link |
00:06:49.240
study real synapses or real neurons, they agree.
link |
00:06:52.720
They agree.
link |
00:06:53.720
It's a hundred-year, hundreds-of-years task, and they're building it up slowly but surely.
link |
00:06:58.640
What the signal is there is not clear.
link |
00:07:01.200
We have all of our metaphors.
link |
00:07:02.680
We think it's electrical, maybe it's chemical, it's a whole soup.
link |
00:07:06.760
It's ions and proteins, and it's a cell, and that's even around like a single synapse.
link |
00:07:10.880
If you look at an electron micrograph of a single synapse, it's a city of its own.
link |
00:07:16.040
That's one little thing on a dendritic tree, which is an extremely complicated electrochemical
link |
00:07:20.760
thing, and it's doing these spikes and voltages are flying around, and then proteins are taking
link |
00:07:25.720
that and taking it down into the DNA, and who knows what.
link |
00:07:29.480
It is the problem of the next few centuries.
link |
00:07:31.720
It is fantastic, but we have our metaphors about it.
link |
00:07:34.920
Is it an economic device?
link |
00:07:36.120
Is it like the immune system, or is it like a layered set of arithmetic computations?
link |
00:07:41.120
We have all these metaphors, and they're fun, but that's not real science per se.
link |
00:07:48.120
There is neuroscience.
link |
00:07:49.120
That's not neuroscience.
link |
00:07:51.360
That's like the Greeks speculating about how to get to the moon. Fun.
link |
00:07:55.440
I think that I like to say this fairly strongly, because I think a lot of young people think
link |
00:07:59.240
that we're on the verge, because a lot of people who don't talk about it clearly, let
link |
00:08:03.440
it be understood that, yes, this is kind of brain inspired, we're kind of close, breakthroughs
link |
00:08:08.480
are on the horizon, and sometimes unscrupulous people who need money for their labs.
link |
00:08:14.480
I shouldn't say unscrupulous, but people will oversell.
link |
00:08:17.160
I need money for my lab.
link |
00:08:18.640
I'm studying computational neuroscience.
link |
00:08:22.560
I'm going to oversell it, and so there's been too much of that.
link |
00:08:25.240
So I'll step into the gray area between metaphor and engineering with, I'm not sure if you're
link |
00:08:32.080
familiar with brain computer interfaces.
link |
00:08:35.520
So Elon Musk has a company, Neuralink, that's working on putting electrodes into
link |
00:08:42.240
the brain and trying to be able to both read and send electrical signals, but just as
link |
00:08:46.640
you said, even the basic mechanism of communication in the brain is not something we understand.
link |
00:08:55.200
But do you hope, without understanding the fundamental principles of how the brain works,
link |
00:09:00.880
we'll be able to do something interesting at that gray area of metaphor?
link |
00:09:06.560
It's not my area.
link |
00:09:07.560
So I hope in the sense like anybody else hopes for some interesting things to happen from
link |
00:09:11.400
research, I would expect more something like Alzheimer's will get figured out from modern
link |
00:09:15.920
neuroscience.
link |
00:09:16.920
There's a lot of human suffering based on brain disease, and we throw things like lithium
link |
00:09:22.560
at the brain.
link |
00:09:23.560
It kind of works.
link |
00:09:24.560
Nobody knows why.
link |
00:09:25.880
That's not quite true, but mostly we don't know.
link |
00:09:28.240
And that's even just about the biochemistry of the brain and how it leads to mood swings
link |
00:09:31.920
and so on.
link |
00:09:33.080
How thought emerges from that.
link |
00:09:34.720
we're really, really completely in the dark.
link |
00:09:38.120
So if you want to hook up electrodes and try to do some signal processing on that
link |
00:09:41.520
and try to find patterns, fine, by all means go for it.
link |
00:09:45.600
It's just not scientific at this point.
link |
00:09:48.720
So it's like kind of sitting in a satellite and watching the emissions from a city and
link |
00:09:53.200
trying to infer things about the microeconomy, even though you don't have microeconomic concepts.
link |
00:09:56.920
I mean, it's really that kind of thing.
link |
00:09:59.160
And so yes, can you find some signals that do something interesting or useful?
link |
00:10:02.480
Can you control a cursor or a mouse with your brain?
link |
00:10:06.640
Yeah.
link |
00:10:07.640
Absolutely.
link |
00:10:08.640
And then I can imagine business models based on that and even medical applications of
link |
00:10:13.320
that.
link |
00:10:14.320
But from there to understanding algorithms that allow us to really tie in deeply from
link |
00:10:19.520
the brain to the computer, I just, no, I don't agree with Elon Musk.
link |
00:10:22.560
I don't think that's even, that's not for our generation, it's not even for the century.
link |
00:10:26.560
So just in the hopes of getting you to dream, you've mentioned Kolmogorov, and Turing might
link |
00:10:33.560
pop up.
link |
00:10:36.200
Do you think that there might be breakthroughs that will get you to sit back in five, 10 years
link |
00:10:41.240
and say, wow.
link |
00:10:43.240
Oh, I'm sure there will be, but I don't think that there'll be demos that impress me.
link |
00:10:49.200
I don't think that having a computer call a restaurant and pretend to be a human is
link |
00:10:55.000
a breakthrough, and people, you know, some people presented it as such.
link |
00:10:59.840
It's imitating human intelligence.
link |
00:11:01.640
It's even putting coughs in the thing; it's a bit of a PR stunt.
link |
00:11:07.360
And so fine that the world runs on those things too.
link |
00:11:10.920
And I don't want to diminish all the hard work and engineering that goes behind things
link |
00:11:14.920
like that and the ultimate value to the human race.
link |
00:11:17.720
But that's not scientific understanding.
link |
00:11:20.480
And I know the people that work on these things, they are after scientific understanding, you
link |
00:11:23.680
know, in the meantime, they've got to kind of, you know, the trains got to run and they
link |
00:11:26.320
got mouths to feed and they got things to do.
link |
00:11:28.680
And there's nothing wrong with all that.
link |
00:11:30.440
I would call that though, just engineering.
link |
00:11:32.560
And I want to distinguish that between an engineering field like electrical engineering
link |
00:11:35.880
that originally emerged, that had real principles and you really knew what
link |
00:11:39.480
you were doing and had a little scientific understanding, maybe not even complete.
link |
00:11:43.640
So it became more predictable and it really gave value to human life because it was understood.
link |
00:11:49.000
And so we don't want to muddy these waters too much, of what we're able to do versus what
link |
00:11:54.720
we really can do in a way that's going to impress the next generation.
link |
00:11:58.040
So I don't need to be wowed, but I think if someone comes along in 20 years, a younger
link |
00:12:02.520
person who's absorbed all the technology and for them to be wowed, I think they have to
link |
00:12:08.360
be more deeply impressed.
link |
00:12:09.360
A young Kolmogorov would not be wowed by some of the stunts that you see right now coming
link |
00:12:13.000
from the big companies.
link |
00:12:14.000
The demos. But where do you think the breakthroughs that would wow Kolmogorov will be? And give this question
link |
00:12:19.040
a chance?
link |
00:12:20.040
Do you think they'll be in the scientific fundamental principles arena?
link |
00:12:24.120
Or do you think it's possible to have fundamental breakthroughs in engineering?
link |
00:12:28.560
Meaning, you know, some of the things that Elon Musk is working on with SpaceX
link |
00:12:33.200
and others, sort of trying to revolutionize the fundamentals of engineering, of manufacturing,
link |
00:12:38.800
of saying, here's a problem we know how to do a demo of, and actually taking it to scale.
link |
00:12:45.120
So there's going to be all kinds of breakthroughs.
link |
00:12:46.960
I just don't like that terminology.
link |
00:12:48.280
I'm a scientist and I work on things day in and day out and things move along and eventually
link |
00:12:52.000
you say, wow, something happened. But I don't like that language very much.
link |
00:12:56.400
Also I don't like to prize theoretical breakthroughs over practical ones.
link |
00:13:01.000
I tend to be more of a theoretician and I think there's lots to do in that arena right now.
link |
00:13:06.760
And so I wouldn't point to the Kolmogorovs, I might point to the Edisons of the era
link |
00:13:09.640
and maybe Musk is a bit more like that.
link |
00:13:11.840
But you know, Musk, God bless him, also will say things about AI that he knows very little
link |
00:13:17.440
about, and he doesn't know what he doesn't know. It leads people astray when he talks
link |
00:13:21.680
about things he doesn't know anything about.
link |
00:13:23.800
Trying to program a computer to understand natural language, to be involved in a dialogue
link |
00:13:27.320
we're having right now, that can't happen in our lifetime.
link |
00:13:30.440
You could fake it, you can mimic, sort of take old sentences that humans use and retread
link |
00:13:35.080
them. But with deep understanding of language? No, it's not going to happen.
link |
00:13:38.560
And so from that, you know, I hope you can perceive that the deeper kinds of aspects
link |
00:13:42.920
of intelligence are not going to happen.
link |
00:13:44.480
Now will there be breakthroughs?
link |
00:13:45.480
You know, I think that Google was a breakthrough, I think Amazon is a breakthrough.
link |
00:13:49.240
You know, I think Uber is a breakthrough, you know, that bring value to human beings
link |
00:13:52.680
at scale in brand new ways based on data flows and so on.
link |
00:13:56.840
A lot of these things are slightly broken because there's not a kind of engineering
link |
00:14:01.240
field that takes economic value in context of data and, you know, planetary scale and
link |
00:14:06.720
worries about all the externalities, the privacy.
link |
00:14:09.520
You know, we don't have that field, so we don't think these things through very well.
link |
00:14:13.160
But I see that as emerging, and that will, you know, looking back
link |
00:14:16.640
from a hundred years, be seen as the breakthrough in this era, just like electrical
link |
00:14:20.320
engineering was a breakthrough in the early part of the last century and chemical engineering
link |
00:14:23.520
was a breakthrough.
link |
00:14:24.520
So the scale, the markets that you talk about and we'll get to will be seen as sort of a breakthrough,
link |
00:14:30.360
and we're in very early days of really doing interesting stuff there.
link |
00:14:33.960
And we'll get to that, but just taking a quick step back.
link |
00:14:37.440
Can you kind of throw off the historian hat? I mean, you briefly said that the
link |
00:14:45.640
history of AI kind of mimics the history of chemical engineering, but I keep saying machine
link |
00:14:50.000
learning, you keep wanting to say AI. Just to let you know, you know, I resist
link |
00:14:53.640
that.
link |
00:14:54.640
I don't think this is about AI. AI really was John McCarthy, almost as a philosopher,
link |
00:14:59.600
Saying, wouldn't it be cool if we could put thought in a computer?
link |
00:15:03.560
If we could mimic the human capability to think, or put intelligence, in some sense,
link |
00:15:08.040
into a computer.
link |
00:15:09.960
That's an interesting philosophical question.
link |
00:15:12.120
And he wanted to make it more than philosophy.
link |
00:15:13.560
He wanted to actually write down logical formula and algorithms that would do that.
link |
00:15:17.360
And that is a perfectly valid reasonable thing to do.
link |
00:15:20.160
That's not what's happening in this era.
link |
00:15:23.080
So the reason I keep saying AI actually, and I'd love to hear what you think about it,
link |
00:15:27.760
machine learning has a very particular set of methods and tools.
link |
00:15:34.640
Maybe your version of it does; mine doesn't, you know, it's very, very open.
link |
00:15:37.720
It does optimization.
link |
00:15:38.720
It does sampling.
link |
00:15:39.720
It does.
link |
00:15:40.720
So systems that learn is what machine learning is: systems that learn and
link |
00:15:44.600
make decisions.
link |
00:15:45.600
And what is pattern recognition? You know, finding patterns is all about making decisions
link |
00:15:50.040
in the real world and having closed feedback loops.
link |
00:15:52.560
So something like symbolic AI, expert systems, reasoning systems, knowledge-based representation,
link |
00:15:57.640
and all those kinds of things, search.
link |
00:16:00.120
Does that neighborhood fit into what you think of as machine learning?
link |
00:16:04.760
So I don't even like the word machine learning.
link |
00:16:06.120
I think that the field you're talking about is all about making large collections
link |
00:16:09.560
of decisions under uncertainty by large collections of entities.
link |
00:16:12.640
Yes.
link |
00:16:13.640
Right.
link |
00:16:14.640
And there are principles for that at that scale.
link |
00:16:16.120
You don't have to say the principles are for a single entity that's making decisions,
link |
00:16:18.960
single agent or single human.
link |
00:16:20.560
It really immediately goes to the network of decisions.
link |
00:16:22.880
Is there a good word for that?
link |
00:16:23.880
No, there's no good words for any of this.
link |
00:16:25.400
That's kind of part of the problem.
link |
00:16:27.240
So we can continue the conversation, use AI for all that.
link |
00:16:29.920
I just want to kind of raise a flag here that this is not about — we don't know what
link |
00:16:35.520
intelligence is, what real intelligence is.
link |
00:16:38.120
We don't know much about abstraction and reasoning at the level of humans.
link |
00:16:41.000
We don't have a clue.
link |
00:16:42.000
We're not trying to build that because we don't have a clue.
link |
00:16:44.680
Eventually it may emerge.
link |
00:16:45.680
There may be, I don't know if they'll be breakthroughs, but eventually we'll start to get glimmers
link |
00:16:49.040
of that.
link |
00:16:50.160
It's not what's happening right now.
link |
00:16:51.400
Okay.
link |
00:16:52.400
We're taking data.
link |
00:16:53.400
We're trying to make good decisions based on that.
link |
00:16:54.400
We're trying to do it at scale.
link |
00:16:55.400
We're trying to do it economically, viably.
link |
00:16:56.760
We're trying to build markets.
link |
00:16:58.240
We're trying to create value at that scale.
link |
00:17:01.040
And aspects of this will look intelligent.
link |
00:17:03.720
They will look — computers were so dumb before — they will seem more intelligent.
link |
00:17:08.120
We will use that buzzword of intelligence.
link |
00:17:09.840
So we can use it in that sense, but, you know, machine learning, you can scope it narrowly
link |
00:17:14.780
as just learning from data and pattern recognition.
link |
00:17:17.960
But whatever — when I talk about these topics, maybe data science is another word you could
link |
00:17:21.960
throw in the mix.
link |
00:17:23.680
It really is important that decisions are part of it — consequential decisions
link |
00:17:28.040
in the real world.
link |
00:17:29.040
Am I going to have a medical operation?
link |
00:17:30.400
Am I going to drive down this street? You know, things where there's scarcity, things
link |
00:17:35.240
that impact other human beings or other, you know, the environment and so on.
link |
00:17:39.400
How do I do that based on data?
link |
00:17:40.800
How do I do that?
link |
00:17:41.800
How do I use computers to help those kinds of things go forward, whatever
link |
00:17:44.160
you want to call that.
link |
00:17:45.640
So let's call it AI.
link |
00:17:46.640
Let's agree to call it AI, but let's not say that the goal of that
link |
00:17:51.640
is intelligence. The goal of that is really good working systems at planetary scale that
link |
00:17:55.600
we've never seen before.
link |
00:17:56.600
So reclaim the word AI from the Dartmouth conference from many decades ago of the dream
link |
00:18:00.800
of human-level intelligence.
link |
00:18:01.800
I don't want to reclaim it.
link |
00:18:02.800
I want a new word.
link |
00:18:03.800
I think it was a bad choice.
link |
00:18:04.800
I mean, you know, if you read one of my little things, the history was basically
link |
00:18:08.240
that McCarthy needed a new name because cybernetics already existed.
link |
00:18:12.400
And he didn't like, you know, no one really liked Norbert Wiener.
link |
00:18:15.280
Norbert Wiener was kind of an island to himself.
link |
00:18:17.400
And he felt that he had encompassed all this and in some sense he did.
link |
00:18:21.200
If you look at the language of cybernetics, it was everything we're talking about.
link |
00:18:24.400
It was control theory and signal processing and some notions of intelligence and closed
link |
00:18:28.160
feedback loops and data.
link |
00:18:29.360
It was all there.
link |
00:18:30.960
It's just not a word that lived on partly because of the, maybe the personalities.
link |
00:18:34.240
But McCarthy needed a new word to say, I'm different from you.
link |
00:18:36.720
I'm not part of your show.
link |
00:18:38.360
I've got my own — so he invented this word.
link |
00:18:41.720
And again, kind of thinking forward about the movies that would be made about
link |
00:18:46.160
it, it was a great choice, but thinking forward about creating a sober academic and
link |
00:18:51.000
real world discipline.
link |
00:18:52.000
It was a terrible choice because it led to promises that are not true, that we understand.
link |
00:18:56.280
We understand artificial perhaps, but we don't understand intelligence.
link |
00:18:59.320
It's a small tangent because you're one of the great personalities of machine learning,
link |
00:19:03.320
whatever the heck you call the field.
link |
00:19:05.120
Do you think science progresses by personalities or by the fundamental principles and theories
link |
00:19:11.800
and research that's outside of personality?
link |
00:19:15.040
Yeah, both.
link |
00:19:16.040
And I wouldn't say there should be one kind of personality.
link |
00:19:17.520
I have mine and I have my preferences and I have a kind of network around me that feeds
link |
00:19:23.200
me and some of them agree with me and some disagree, but you know, all kinds of personalities
link |
00:19:26.680
are needed.
link |
00:19:28.480
Right now, I think the personality that's a little too exuberant, a little bit too ready
link |
00:19:31.680
to promise the moon, is a little bit too much in ascendance.
link |
00:19:35.840
And I do think that there's some good to that.
link |
00:19:38.160
It certainly attracts lots of young people to our field, but a lot of those people come
link |
00:19:41.560
in with strong misconceptions and they have to then unlearn those and then find something
link |
00:19:46.040
in, you know, to do.
link |
00:19:48.880
And so I think there's just got to be, you know, multiple voices, and
link |
00:19:52.160
I wasn't hearing enough of the more sober voices.
link |
00:19:54.840
So as a continuation of a fun tangent and speaking of vibrant personalities, what would
link |
00:20:02.160
you say is the most interesting disagreement you have with Yann LeCun?
link |
00:20:07.400
So Yann's an old friend, and I'd just say that I don't think we disagree about very much,
link |
00:20:12.480
really.
link |
00:20:13.480
He and I both kind of have a let's-build-it kind of mentality, a does-it-work kind
link |
00:20:18.760
of mentality, kind of concrete.
link |
00:20:21.360
We both speak French, and we speak French together, and we have a lot
link |
00:20:24.880
in common.
link |
00:20:27.120
And so, you know, if one wanted to highlight a disagreement, it's not really a fundamental one.
link |
00:20:31.760
I think it's just kind of what we're emphasizing. Yann has emphasized pattern recognition
link |
00:20:38.320
and has emphasized prediction.
link |
00:20:40.920
All right.
link |
00:20:41.920
So, you know, and it's interesting to try to take that as far as you can.
link |
00:20:45.360
If you could do perfect prediction, what would that give you kind of as a thought experiment?
link |
00:20:50.640
And I think that's way too limited.
link |
00:20:55.220
We cannot do perfect prediction.
link |
00:20:56.680
We will never have the data sets that allow me to figure out what you're about ready to
link |
00:20:59.320
do, what question you're going to ask next.
link |
00:21:00.760
I have no clue.
link |
00:21:01.760
I will never know such things.
link |
00:21:03.280
Moreover, most of us find ourselves during the day in all kinds of situations we had
link |
00:21:07.560
no anticipation of, that are novel in various ways.
link |
00:21:13.440
And in that moment, we want to think through what we want.
link |
00:21:16.320
And also, there's going to be market forces acting on us.
link |
00:21:18.920
I'd like to go down that street, but now it's full because there's a crane in the street.
link |
00:21:22.280
I got it.
link |
00:21:23.280
I got to think about that.
link |
00:21:24.280
I got to think about what I might really want here.
link |
00:21:26.240
And I got to sort of think about how much it costs me to do this action versus this
link |
00:21:29.080
action.
link |
00:21:30.080
I got to think about the risks involved.
link |
00:21:32.280
You know, a lot of our current pattern recognition and prediction systems don't do any risk evaluations.
link |
00:21:36.680
They have no error bars, right?
link |
00:21:38.960
I got to think about other people's decisions around me.
link |
00:21:41.040
I got to think about a collection of my decisions, even just thinking about like a medical treatment.
link |
00:21:45.560
You know, I'm not going to take the prediction of a neural net about my health, about something
link |
00:21:50.400
consequential.
link |
00:21:51.400
I'm not about ready to have a heart attack because some number is over 0.7.
link |
00:21:54.560
Even if you had all the data in the world that's ever been collected about heart attacks,
link |
00:21:58.840
better than any doctor ever had.
link |
00:22:00.880
I'm not going to trust the output of that neural net to predict my heart attack.
link |
00:22:03.840
I'm going to want to ask what if questions around that.
link |
00:22:06.400
I'm going to want to look at some other possible data I didn't have.
link |
00:22:09.800
Causal things.
link |
00:22:10.800
I'm going to want to have a dialogue with a doctor about things we didn't think about
link |
00:22:13.680
when we gathered the data.
link |
00:22:15.480
You know, I could go on and on.
link |
00:22:16.640
I hope you can see.
link |
00:22:17.880
And I don't, I think that if you say prediction is everything, that you're missing
link |
00:22:21.520
all of this stuff.
link |
00:22:23.480
And so prediction plus decision making is everything, but both of them are equally important.
link |
00:22:28.240
And so the field has emphasized prediction.
link |
00:22:30.120
And rightly so, has seen how powerful that is.
link |
00:22:33.640
But at the cost of people not being aware of the decision making is where the rubber
link |
00:22:37.040
really hits the road, where human lives are at stake, where risks are being taken, where
link |
00:22:41.240
you got to gather more data.
link |
00:22:42.240
You got to think about the error bars.
link |
00:22:43.640
You got to think about the consequences of your decisions on others.
link |
00:22:45.880
You got to think about the economy around your decisions, blah, blah, blah, blah.
link |
00:22:48.960
I'm not the only one working on those, but we're a smaller tribe.
link |
00:22:52.120
And right now we're not the one that people talk about the most.
link |
00:22:56.400
But you know, if you go out in the real world and industry, you know, at Amazon, I'd say
link |
00:23:00.480
half the people there are working on decision making and the other half are doing, you know,
link |
00:23:03.720
the pattern recognition.
link |
00:23:04.720
It's important.
link |
00:23:05.720
And the words of pattern recognition and prediction, I think the distinction there, not to linger
link |
00:23:10.200
on words, but the distinction there is more a constrained sort of in the lab data set
link |
00:23:16.160
versus decision making is talking about consequential decisions in the real world under the messiness
link |
00:23:21.160
and the uncertainty of the real world.
link |
00:23:23.680
And just the whole of it, the whole mess of it that actually touches human beings at
link |
00:23:27.320
scale, like you said, market forces, that's the distinction.
link |
00:23:31.080
It helps add those, that perspective, that broader perspective.
link |
00:23:33.800
You're right.
link |
00:23:34.800
I totally agree.
link |
00:23:35.800
On the other hand, if you're a real prediction person, of course you want it to be in the
link |
00:23:38.120
real world.
link |
00:23:39.120
You want to predict real world events.
link |
00:23:40.120
I'm just saying that's not possible with just data sets, that it has to be in the context
link |
00:23:44.280
of, you know, strategic things that someone's doing, data they might gather, things they
link |
00:23:48.480
could have gathered, the reasoning process around data.
link |
00:23:50.880
It's not just taking data and making predictions based on the data.
link |
00:23:53.580
So one of the things that you're working on, I'm sure there's others working on it, but
link |
00:23:58.280
I don't often hear it talked about, especially with the clarity that you talk about it.
link |
00:24:04.960
And I think it's both the most exciting and the most concerning area of AI in terms of
link |
00:24:10.000
decision making.
link |
00:24:11.000
So you've talked about AI systems that help make decisions that scale in a distributed
link |
00:24:15.280
way, millions, billions of decisions, sort of markets of decisions.
link |
00:24:19.760
Can you, as a starting point, sort of give an example of a system that you think about
link |
00:24:24.880
when you're thinking about these kinds of systems?
link |
00:24:27.040
Yeah.
link |
00:24:28.040
So first of all, you're absolutely getting into some territory which will be beyond
link |
00:24:31.400
my expertise.
link |
00:24:32.400
And there are lots of things that are going to be very not obvious to think about.
link |
00:24:35.680
Just like, again, I like to think about history a little bit, but think about, put yourself
link |
00:24:39.880
back in the 60s.
link |
00:24:40.880
There was kind of a banking system that wasn't computerized really.
link |
00:24:43.360
There was database theory emerging.
link |
00:24:46.160
And database people had to think about, how do I actually not just move data around, but
link |
00:24:49.160
actual money, and have it be valid, and have transactions at ATMs happen that are actually
link |
00:24:55.400
all valid, and so on and so forth.
link |
00:24:57.840
So that's the kind of issues you get into when you start to get serious about things
link |
00:25:01.760
like this.
link |
00:25:02.760
I like to think about as kind of almost a thought experiment to help me think something
link |
00:25:07.240
simpler, which is the music market, because to first order, there is no music market in
link |
00:25:15.360
the world right now in our country, for sure.
link |
00:25:18.760
There are things called record companies, and they make money, and they prop up a few
link |
00:25:25.200
really good musicians, and make them superstars, and they all make huge amounts of money.
link |
00:25:31.000
But there's a long tail of huge numbers of people that make lots and lots of really
link |
00:25:33.680
good music that is actually listened to by more people than the famous people.
link |
00:25:40.560
They are not in a market.
link |
00:25:41.560
They cannot have a career.
link |
00:25:42.800
They do not make money.
link |
00:25:43.880
The creators.
link |
00:25:44.880
The creators.
link |
00:25:45.880
The creators.
link |
00:25:46.880
The so-called influencers or whatever,
link |
00:25:47.880
no matter who they are, right?
link |
00:25:49.320
So there are people who make extremely good music, especially in the hip hop or Latin
link |
00:25:53.360
world these days.
link |
00:25:55.240
They do it on their laptop.
link |
00:25:56.320
That's what they do on the weekend, and they have another job during the week, and they
link |
00:26:01.040
put it up on SoundCloud or other sites.
link |
00:26:03.360
Eventually, it gets streamed.
link |
00:26:04.720
It gets turned into bits.
link |
00:26:06.120
It's not economically valuable.
link |
00:26:07.680
The information is lost.
link |
00:26:08.960
It gets put up.
link |
00:26:09.960
There are people who stream it.
link |
00:26:11.560
You walk around in a big city.
link |
00:26:13.760
You see people with headphones all, you know, especially young kids listening to music all
link |
00:26:16.360
the time.
link |
00:26:17.360
If you look at the data, very little of the music they listen to is the famous
link |
00:26:21.080
people's music.
link |
00:26:22.080
And none of it's old music.
link |
00:26:23.080
It's all the latest stuff.
link |
00:26:24.920
But the people who made that latest stuff are like some 16 year old somewhere who will
link |
00:26:27.480
never make a career out of this, who will never make money.
link |
00:26:29.640
Of course, there will be a few counter examples.
link |
00:26:31.480
The record companies are incentivized to pick out a few and highlight them.
link |
00:26:35.360
Long story short, there's a missing market there.
link |
00:26:37.720
There is not a consumer producer relationship at the level of the actual creative acts.
link |
00:26:43.480
The pipelines and Spotifys of the world that take this stuff and stream it along, they
link |
00:26:48.120
make money off of subscriptions or advertising and those things.
link |
00:26:51.200
They're making the money, right?
link |
00:26:52.200
And then they will offer bits and pieces of it to a few people again to highlight that,
link |
00:26:56.080
you know, they're simulating a market.
link |
00:26:58.560
Anyway, a real market would be if you're a creator of music that you actually are somebody
link |
00:27:03.560
who's good enough that people want to listen to you.
link |
00:27:06.320
You should have the data available to you.
link |
00:27:07.840
There should be a dashboard showing a map of the United States.
link |
00:27:11.480
So in the last week, here's all the places your songs were listened to.
link |
00:27:14.680
It should be transparent, vettable so that if someone down in Providence sees that you're
link |
00:27:20.520
being listened to 10,000 times in Providence, that they know that's real data.
link |
00:27:24.160
You know it's real data.
link |
00:27:25.280
They will have you come give a show down there.
link |
00:27:27.280
They will broadcast to the people who've been listening to you that you're coming.
link |
00:27:30.000
If you do this right, you could, you could, you know, go down there and make $20,000.
link |
00:27:34.480
You do that three times a year, you start to have a career.
link |
00:27:37.080
So in this sense, AI creates jobs.
link |
00:27:39.560
It's not about taking away human jobs.
link |
00:27:40.680
It's creating new jobs because it creates a new market.
link |
00:27:43.480
Once you've created a market, you've now connected up producers and consumers.
link |
00:27:46.560
You know, the person who's making the music can say to someone who comes to their shows
link |
00:27:49.600
a lot, hey, I'll play your daughter's wedding for $10,000.
link |
00:27:53.200
You'll say $8,000.
link |
00:27:54.200
They'll say $9,000.
link |
00:27:55.200
Then you, again, you can now get an income up to $100,000.
link |
00:27:59.000
You're not going to be a millionaire, all right.
link |
00:28:01.920
And now even think about really the value of music is in these personal connections, even
link |
00:28:07.080
so much so that a young kid wants to wear a T shirt with their favorite musician's signature
link |
00:28:13.680
on it, right?
link |
00:28:14.840
So if they listen to the music on the internet, the internet should be able to provide them
link |
00:28:18.080
with a button that they push and the merchandise arrives the next day.
link |
00:28:21.840
We can do that, right?
link |
00:28:23.000
And now why should we do that?
link |
00:28:24.400
Well, because the kid who bought the shirt will be happy, but more the person who made
link |
00:28:27.600
the music will get the money.
link |
00:28:29.040
There's no advertising needed, right?
link |
00:28:32.360
So you can create markets between producers and consumers, take a 5% cut.
link |
00:28:36.480
Your company will be perfectly sound, it'll go forward into the future, and it will create
link |
00:28:41.120
new markets and that raises human happiness.
link |
00:28:45.080
Now this seems like it was easy, just create this dashboard, kind of create some connections
link |
00:28:48.720
and all that.
link |
00:28:49.720
But if you think about Uber or whatever, you think about the challenges in the real world
link |
00:28:52.920
of it doing things like this, and there are actually new principles going to be needed.
link |
00:28:56.200
You're trying to create a new kind of two-way market at a different scale than has ever
link |
00:28:59.080
been done before.
link |
00:29:00.080
There's going to be unwanted aspects of the market, there'll be bad people, there'll
link |
00:29:05.760
be the data will get used in the wrong ways, it'll fail in some ways, it won't deliver;
link |
00:29:11.080
you have to think that through.
link |
00:29:12.640
Just like anyone who ran a big auction or ran a big matching service in economics will
link |
00:29:17.200
think these things through.
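The "big matching service in economics" alluded to here has a canonical algorithm behind it: Gale-Shapley deferred acceptance, which underlies real two-sided matching markets such as medical residency matching. A minimal sketch, with invented musician and venue names standing in for the producer-consumer matching being discussed:

```python
def deferred_acceptance(proposer_prefs, reviewer_prefs):
    """Gale-Shapley stable matching: proposers propose in preference order,
    reviewers tentatively hold their best offer so far."""
    free = list(proposer_prefs)            # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                           # reviewer -> proposer
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                 # reviewer was free: accept
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])        # reviewer trades up
            engaged[r] = p
        else:
            free.append(p)                 # rejected: propose elsewhere later
    return engaged

# Toy two-sided market: musicians seeking venues (all names invented).
musicians = {"Ana": ["Loft", "Cafe"], "Ben": ["Loft", "Cafe"]}
venues = {"Loft": ["Ben", "Ana"], "Cafe": ["Ana", "Ben"]}
match = deferred_acceptance(musicians, venues)
print(match)  # {'Loft': 'Ben', 'Cafe': 'Ana'}
```

The resulting matching is stable: no musician-venue pair would both prefer each other over their assigned partners, which is one of the properties a serious market designer has to think through.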
link |
00:29:18.840
And so that maybe didn't get at all the huge issues that can arise when you start to create
link |
00:29:22.120
markets, but it starts, for me, to solidify my thoughts and allow me to move forward in my
link |
00:29:27.320
own thinking.
link |
00:29:28.320
Yeah, so I talked to a researcher at Spotify actually, I think their long term goal they've
link |
00:29:32.840
said is to have at least one million creators make a comfortable living putting their work on Spotify.
link |
00:29:41.120
So I think you articulate a really nice vision of the world and the digital and the cyberspace
link |
00:29:52.160
of markets.
link |
00:29:53.920
What do you think companies like Spotify or YouTube or Netflix can do to create such
link |
00:30:02.720
markets?
link |
00:30:04.080
Is it an AI problem?
link |
00:30:05.400
Is it an interface problem, or interface design?
link |
00:30:08.560
Is it some other kind of, is it an economics problem?
link |
00:30:13.400
Who should they hire to solve these problems?
link |
00:30:15.600
Well, part of it's not just top down.
link |
00:30:17.480
So the Silicon Valley has this attitude that they know how to do it.
link |
00:30:20.000
They will create the system just like Google did with the search box that will be so good
link |
00:30:23.360
that everyone will just adopt it.
link |
00:30:26.960
It's everything you said, but really I think what's missing is the culture.
link |
00:30:31.480
So it's literally that 16 year old who's, who's able to create the songs.
link |
00:30:34.800
You don't create that as a Silicon Valley entity.
link |
00:30:37.000
You don't hire them per se, right?
link |
00:30:39.360
You have to create an ecosystem in which they are wanted and where they belong, right?
link |
00:30:44.360
And so you have to have some cultural credibility to do things like this, you know, Netflix
link |
00:30:48.160
to their credit wanted some of that credibility and they created shows, you know, content.
link |
00:30:53.040
They call it content.
link |
00:30:54.040
It's such a terrible word, but it's culture, right?
link |
00:30:56.880
And so with movies, you can kind of go give a large sum of money to some graduate
link |
00:31:01.160
from the USC film school.
link |
00:31:03.480
It's a whole thing of its own, but it's kind of like rich white people's thing to do,
link |
00:31:07.360
you know, and you know, American culture has not been so much about rich white people.
link |
00:31:11.800
It's been about all the immigrants, all the, you know, the Africans who came and brought
link |
00:31:15.840
that culture and those, those rhythms and that to this world and created this whole
link |
00:31:21.600
new thing, you know, American culture.
link |
00:31:24.040
And so companies can't artificially create that.
link |
00:31:26.840
They can't just say, Hey, we're here.
link |
00:31:28.480
We're going to buy it up.
link |
00:31:29.480
You got a partner, right?
link |
00:31:31.440
And so, but anyway, you know, not to denigrate these companies are all trying and they should
link |
00:31:35.720
and they, they are, I'm sure they're asking these questions and some of them are even
link |
00:31:39.480
making an effort, but it is partly a respect-the-culture thing; as a technology
link |
00:31:44.160
person, you got to blend your technology with cultural, you know, meaning.
link |
00:31:49.880
How much of a role do you think the algorithm machine learning has in connecting the consumer
link |
00:31:55.040
to the creator sort of the recommender system aspect of this?
link |
00:31:59.560
Yeah.
link |
00:32:00.560
It's a great question.
link |
00:32:01.560
I think pretty high. You know, there's no magic in the algorithms, but a good
link |
00:32:05.280
recommender system is way better than a bad recommender system and recommender systems
link |
00:32:10.360
were a billion dollar industry even back, you know, 10, 20 years ago.
link |
00:32:15.200
And it continues to be extremely important going forward.
link |
00:32:17.520
What's your favorite recommender system, just so we can put something...
link |
00:32:20.680
Well, just historically, you know, when I first went to Amazon, you know, I first didn't like
link |
00:32:23.920
Amazon because they put the book people out of business, you know,
link |
00:32:27.160
the local booksellers went out of business.
link |
00:32:30.360
I've come to accept that there, you know, there probably are more books being sold now
link |
00:32:33.920
and more people reading them than ever before.
link |
00:32:36.920
And then local book stores are coming back.
link |
00:32:39.440
So you know, that's how economics sometimes work.
link |
00:32:41.520
You go up and you go down.
link |
00:32:44.280
But anyway, when I finally started going there and I bought a few books, I was really pleased
link |
00:32:48.760
to see another few books being recommended to me that I never would have thought of.
link |
00:32:52.400
And I bought a bunch of them, so they obviously had a good business model, but I learned things
link |
00:32:55.960
and I still to this day kind of browse using that service.
link |
00:33:00.960
And I think lots of people get a lot, you know, that is a good aspect of a recommendation
link |
00:33:05.480
system.
link |
00:33:06.480
I'm learning from my peers in an indirect way.
link |
00:33:10.480
And their algorithms are not meant to impose what we learn.
link |
00:33:13.880
It really is trying to find out what's in the data.
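The "learning from my peers in an indirect way" idea is often implemented as item-to-item co-occurrence counting: "people who bought this also bought that." Amazon's actual system is not public detail here, so this is only an illustrative sketch with invented book titles:

```python
from collections import defaultdict
from itertools import combinations

# Toy purchase histories (all titles invented for illustration).
baskets = [
    {"Causality", "Probabilistic Graphical Models"},
    {"Causality", "Probabilistic Graphical Models", "Convex Optimization"},
    {"Convex Optimization", "Deep Learning"},
    {"Causality", "Deep Learning"},
]

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(lambda: defaultdict(int))
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def recommend(item, k=2):
    """Items most often co-purchased with `item` -- peers found in the data."""
    ranked = sorted(co_counts[item].items(), key=lambda kv: -kv[1])
    return [title for title, _ in ranked[:k]]

recs = recommend("Causality")
print(recs)  # ['Probabilistic Graphical Models', 'Convex Optimization']
```

Nothing is imposed top-down: the recommendations fall entirely out of what other buyers did, which is the property being praised here.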
link |
00:33:16.680
It doesn't work so well for other kind of entities, but that's just the complexity of human life
link |
00:33:20.080
like shirts.
link |
00:33:21.080
I'm not going to get recommendations on shirts, but that's interesting.
link |
00:33:26.440
If you try to recommend restaurants, it's hard.
link |
00:33:32.160
It's hard to do it at scale.
link |
00:33:35.400
But a blend of recommendation systems with other economic ideas, matchings and so on
link |
00:33:42.080
is really, really still very open, research wise, and there's new companies that are going
link |
00:33:46.080
to emerge that do that well.
link |
00:33:47.920
What do you think was going to the messy, difficult land of, say, politics and things
link |
00:33:53.840
like that that YouTube and Twitter have to deal with in terms of recommendation systems?
link |
00:33:58.440
Being able to suggest, I think Facebook just launched Facebook News, so having it recommend
link |
00:34:05.600
the kind of news that is most likely to be interesting to you.
link |
00:34:08.840
Do you think this is AI solvable, again, whatever term you want to use, do you think it's a
link |
00:34:14.000
solvable problem for machines or is it a deeply human problem that's unsolvable?
link |
00:34:18.800
I don't even think about it at that level.
link |
00:34:20.280
I think that what's broken with some of these companies, it's all monetization by advertising.
link |
00:34:25.400
At least Facebook,
link |
00:34:27.000
I want to critique them.
link |
00:34:28.160
They didn't really try to connect a producer and a consumer in an economic way.
link |
00:34:32.720
No one wants to pay for anything.
link |
00:34:34.960
They all started with Google and Facebook.
link |
00:34:37.280
They went back to the playbook of the television companies back in the day.
link |
00:34:41.640
No one wanted to pay for the signal; they would pay for the TV box, but not for the signal,
link |
00:34:45.520
at least back in the day.
link |
00:34:48.000
Advertising kind of filled that gap, and advertising was new and interesting and it somehow didn't
link |
00:34:51.000
quite take over our lives.
link |
00:34:54.400
Fast forward, Google provides a service that people don't want to pay for.
link |
00:35:01.080
Somewhat surprisingly, in the 90s, they ended up making huge amounts, cornering
link |
00:35:04.760
the advertising market.
link |
00:35:05.760
It didn't seem like that was going to happen, at least to me.
link |
00:35:08.400
These little things on the right hand side of the screen just did not seem all that economically
link |
00:35:11.600
interesting, but companies had maybe no other choice.
link |
00:35:14.360
The TV market was going away and billboards and so on.
link |
00:35:18.280
They got it.
link |
00:35:19.920
I think that, sadly, Google was doing so well with that and making so much money.
link |
00:35:24.960
They didn't think much more about how, wait a minute, is there a producer or consumer
link |
00:35:28.160
relationship to be set up here, not just between us and the advertisers, a market to be created?
link |
00:35:32.840
Is there an actual market between the producer and consumer?
link |
00:35:34.840
There, the producer is the person who created that video clip.
link |
00:35:37.320
The person that made that website, the person who couldn't make more such things, the person
link |
00:35:40.880
who could adjust it as a function of demand, the person on the other side who's asking
link |
00:35:45.360
for different kinds of things.
link |
00:35:47.720
You see glimmers of that now, there's influencers and there's a little glimmering of a market,
link |
00:35:51.960
but it should have been done 20 years ago, it should have been thought about.
link |
00:35:54.400
It should have been created in parallel with the advertising ecosystem.
link |
00:35:58.440
Then Facebook inherited that and I think they also didn't think very much about that.
link |
00:36:03.800
Fast forward and now they are making huge amounts of money off of advertising and the
link |
00:36:08.200
news thing and all these clicks is just feeding the advertising.
link |
00:36:11.560
It's all connected up to the advertising.
link |
00:36:13.760
You want more people to click on certain things because that money flows to you, Facebook.
link |
00:36:18.600
You're very much incentivized to do that and when you start to find it's breaking, people
link |
00:36:22.480
are telling you, well, we're getting into some troubles, you try to adjust it with your
link |
00:36:25.280
smart AI algorithms and figure out what are bad clicks, what maybe shouldn't be clicked,
link |
00:36:29.840
what slips through the radar.
link |
00:36:30.840
I find that pretty much hopeless, it does get into all the complexity of human life
link |
00:36:35.880
and you can try to fix it, you should, but you could also fix the whole business model
link |
00:36:40.600
and the business model is really, are there some human producers and consumers
link |
00:36:44.560
out there?
link |
00:36:45.560
Is there some economic value to be liberated by connecting them directly?
link |
00:36:48.720
Is it such that it's so valuable that people will be going to pay for it?
link |
00:36:52.560
All right.
link |
00:36:53.560
Like micro payment, like small payment.
link |
00:36:55.160
Micro, but even beyond micro. So I like the example, suppose I'm going, next week
link |
00:36:59.040
I'm going to India, never been to India before, right?
link |
00:37:01.960
I have a couple of days in, in Mumbai, I have no idea what to do there, right?
link |
00:37:06.880
And I could go on the web right now and search, it's going to be kind of hopeless.
link |
00:37:10.040
I'm not going to find, you know, I'll have lots of advertisers in my face, right?
link |
00:37:14.840
What I really want to do is broadcast to the world that I am going to Mumbai and have someone
link |
00:37:19.280
on the other side of a market look at me and, and there's a recommendation system there.
link |
00:37:23.960
So they're not looking at all possible people coming to Mumbai, they're looking at the people
link |
00:37:26.640
who are relevant to them.
link |
00:37:27.640
So someone in my age group, someone who kind of knows me in some level, I give up a little
link |
00:37:32.480
privacy by that.
link |
00:37:33.480
But I'm happy because what I'm going to get back is this person can make a little video
link |
00:37:36.120
for me or they're going to write a little two page paper on, here's the cool things
link |
00:37:39.880
that you want to do in Mumbai this week, especially, right?
link |
00:37:43.160
I'm going to look at that.
link |
00:37:44.160
I'm not going to pay a micro payment.
link |
00:37:45.160
I'm going to pay, you know, a hundred dollars or whatever for that.
link |
00:37:47.960
It's real value.
link |
00:37:48.960
It's like journalism.
link |
00:37:49.960
And it's not a subscription; it's that I'm going to pay that person in that moment.
link |
00:37:54.880
I mean, they're going to take 5% of that and that person has now got it.
link |
00:37:57.760
It's a gig economy, if you will, but, you know, done right. You know, thinking about
link |
00:38:01.000
it a little bit: behind YouTube, there were actually people who could make more of those things.
link |
00:38:05.000
If they were connected to a market, they would make more of those things independently.
link |
00:38:07.960
You don't have to tell them what to do.
link |
00:38:08.960
You don't have to incentivize them in any other way.
link |
00:38:12.680
And so yeah, these companies, I don't think, have thought long and hard about that.
link |
00:38:15.760
So I do distinguish, you know, Facebook on the one side, who's just not thought about
link |
00:38:19.840
these things at all.
link |
00:38:20.840
They were thinking that AI will fix everything. And Amazon thinks about these things all the time
link |
00:38:25.200
because they were already out in the real world.
link |
00:38:26.520
They were delivering packages to people's doors.
link |
00:38:28.040
They were worried about a market.
link |
00:38:29.400
They were worried about sellers and, you know, they worry and some things they do are great.
link |
00:38:32.600
Some things maybe not so great, but, you know, they're in that business model.
link |
00:38:36.440
And then I'd say Google sort of hovers somewhere in between.
link |
00:38:38.480
I don't think for a long, long time they got it.
link |
00:38:41.440
I think they probably see that YouTube is more pregnant with possibility than they might
link |
00:38:46.640
have thought and that they're probably heading that direction.
link |
00:38:49.920
But you know, Silicon Valley has been dominated by the Google, Facebook kind of mentality
link |
00:38:54.000
and the subscription and advertising and that is, that's the core problem, right?
link |
00:38:58.800
The fake news actually rides on top of that because it means that you're monetizing with
link |
00:39:03.640
click-through rate, and that is the core problem.
link |
00:39:05.600
You got to remove that.
link |
00:39:06.880
So advertisement, if we're going to linger on that, I mean, that's an interesting thesis.
link |
00:39:11.200
I don't know if everyone really deeply thinks about that.
link |
00:39:15.080
So you're right.
link |
00:39:16.720
The thought is the advertisement model is the only thing we have, the only thing we'll
link |
00:39:20.720
ever have.
link |
00:39:21.720
So we have to fix, we have to build algorithms that despite that business model, you know,
link |
00:39:30.200
find the better angels of our nature and do good by society and by the individual.
link |
00:39:34.680
But you think we can slowly, you think, first of all, there's a difference between should
link |
00:39:40.000
and could.
link |
00:39:41.000
So you're saying we should slowly move away from the advertising model and have a direct
link |
00:39:46.560
connection between the consumer and the creator.
link |
00:39:49.920
The question I also have is, can we, because the advertising model is so successful now
link |
00:39:55.240
in terms of just making a huge amount of money and therefore being able to build a big company
link |
00:40:00.400
that provides, has really smart people working that create a good service.
link |
00:40:04.000
Do you think it's possible?
link |
00:40:05.680
And just to clarify, you think we should move away?
link |
00:40:07.920
Well, I think we should.
link |
00:40:08.920
Yeah.
link |
00:40:09.920
But we is the, you know, me.
link |
00:40:10.920
Society.
link |
00:40:11.920
Yeah.
link |
00:40:12.920
Well, the companies.
link |
00:40:13.920
I mean, so first of all, full disclosure.
link |
00:40:15.280
I'm doing a day a week at Amazon because I kind of want to learn more about how they
link |
00:40:18.320
do things.
link |
00:40:19.320
So, you know, I'm not speaking for Amazon in any way, but, you know, I did go there
link |
00:40:22.720
because I actually believe they get a little bit of this, are trying to create these markets.
link |
00:40:26.200
And they don't really use, advertising is not a crucial part of Amazon.
link |
00:40:29.640
That's a good question.
link |
00:40:30.640
So it has become, not crucial, but more and more present if you go to the Amazon
link |
00:40:34.640
website.
link |
00:40:35.640
And, you know, without revealing too many deep secrets about Amazon, I can tell you
link |
00:40:38.800
that, you know, a lot of people come to question this and there's a huge questioning going
link |
00:40:42.720
on.
link |
00:40:43.720
You do not want a world where there's zero advertising.
link |
00:40:45.640
That actually is a bad world.
link |
00:40:47.200
Okay.
link |
00:40:48.200
So here's the way to think about it.
link |
00:40:49.320
You're a company like Amazon that is trying to bring products to customers, right?
link |
00:40:55.000
And the customer, say you want to buy a vacuum cleaner; you want to
link |
00:40:58.440
know what's available for you.
link |
00:40:59.440
And, you know, it's not going to be that obvious.
link |
00:41:00.840
You have to do a little bit of work at it.
link |
00:41:02.160
The recommendation system will sort of help, right?
link |
00:41:04.600
But now suppose this other person over here has just made the world's best vacuum cleaner, you know; they spent
link |
00:41:08.080
a huge amount of energy.
link |
00:41:09.080
They had a great idea.
link |
00:41:10.080
They made a great vacuum cleaner.
link |
00:41:11.080
They know they really did it.
link |
00:41:12.400
They nailed it.
link |
00:41:13.400
They're, you know, the whiz kid that made a great new vacuum cleaner, right?
link |
00:41:16.680
It's not going to be in the recommendation system.
link |
00:41:18.240
No one will know about it.
link |
00:41:19.240
The algorithms will not find it and AI will not fix that.
link |
00:41:22.360
Okay.
link |
00:41:23.360
At all.
link |
00:41:24.360
Right.
link |
00:41:25.360
How do you allow that vacuum cleaner to start to get in front of people?
link |
00:41:28.960
Be sold.
link |
00:41:29.960
Well, advertising. And here's what advertising is: it's a signal that you believe in
link |
00:41:34.480
your product enough that you're willing to pay some real money for it.
link |
00:41:37.480
And to me as a consumer, I look at that signal, I say, well, first of all, I know these are
link |
00:41:41.480
not just cheap little ads, because with what we have right now, you know, these
link |
00:41:44.960
are super cheap, you know, pennies.
link |
00:41:47.760
If I see an ad where, actually, I know the company is only doing a few of these and
link |
00:41:51.120
real money is kind of flowing, I may pay more attention
link |
00:41:54.920
to it.
link |
00:41:55.920
And I actually might want that because I see, Hey, that guy spent money on his vacuum cleaner.
link |
00:42:01.600
Maybe there's something good there.
link |
00:42:02.600
So I will look at it.
link |
00:42:03.600
And so that's part of the overall information flow in a good market.
link |
00:42:06.600
So advertising has a role.
link |
00:42:09.440
But the problem is, of course, that that signal is now completely gone because it's just, you
link |
00:42:13.640
know, dominated by these tiny little things that add up to big money for the company.
link |
00:42:17.400
You know, so I think it will just, I think it will change because societies just don't,
link |
00:42:22.280
you know, stick with things that annoy a lot of people and advertising currently annoys
link |
00:42:26.000
people more than it provides information.
link |
00:42:28.480
And I think that Google probably is smart enough to figure out that this
link |
00:42:32.240
is a bad model, even though it's earned a huge amount of money, and they'll have to figure
link |
00:42:35.760
out how to pull away from it slowly, and I'm sure the CEO there will figure it out.
link |
00:42:39.920
But they need to do it, and they need to.
link |
00:42:44.200
So if you reduce advertising, not to zero, but you reduce it, and at the same time you bring
link |
00:42:47.680
up producer-consumer connections, actual real value being delivered.
link |
00:42:51.600
So real money is being paid and they take a 5% cut.
link |
00:42:54.720
That 5% could start to get big enough to cancel out the lost revenue from the
link |
00:42:58.840
poorer kind of advertising.
link |
00:42:59.840
And I think that a good company will do that, will realize that.
link |
00:43:04.760
And they're, you know, Facebook, you know, again, God bless them.
link |
00:43:08.440
They bring, you know, grandmothers, you know, they bring children's pictures into grandmothers
link |
00:43:14.320
lives.
link |
00:43:15.320
It's fantastic.
link |
00:43:17.320
But they need to think of a new business model and that's the core problem there.
link |
00:43:22.440
Until they start to connect producer, consumer, I think they will just continue to make money
link |
00:43:26.440
and then buy the next social network company and then buy the next one.
link |
00:43:30.040
And the innovation level will not be high and the health issues will not go away.
link |
00:43:34.880
So I apologize that we kind of keep returning to terms.
link |
00:43:37.920
I don't think the exact terms matter, but in sort of defense of advertisement, don't
link |
00:43:46.040
you think the kind of direct connection between consumer and creator, producer is the best,
link |
00:43:56.120
like the, is what advertisement strives to do, right?
link |
00:44:00.960
So the best advertisement is literally now: Facebook is listening to our conversation
link |
00:44:06.680
and heard that you're going to India and will be able to actually start automatically
link |
00:44:11.400
making these connections for you and start giving you these offers.
link |
00:44:14.520
So like, I apologize if it's just a matter of terms, but just to draw a distinction,
link |
00:44:19.800
is it possible to make advertisements just better and better and better algorithmically
link |
00:44:23.000
to where it actually becomes a connection almost a direct connection?
link |
00:44:26.040
That's a good question.
link |
00:44:27.040
So let's pause on that. First of all, in what we just talked about, I was defending advertising.
link |
00:44:31.880
Okay.
link |
00:44:32.880
So I was defending it as a way to get signals into a market that don't come any other way,
link |
00:44:36.280
especially algorithmically.
link |
00:44:37.720
The fact that someone spent money on it is a sign they think it's valuable.
link |
00:44:41.640
And if someone else thinks it's valuable, then, if I trust
link |
00:44:45.040
other people, I might be willing to listen.
link |
00:44:47.360
I don't trust Facebook, though, as the intermediary in this.
link |
00:44:51.840
I don't think they care about me.
link |
00:44:54.720
Okay.
link |
00:44:55.720
I don't think they do.
link |
00:44:56.720
And I find it creepy that they know I'm going to India next week because of our conversation.
link |
00:45:01.080
Why do you think that?
link |
00:45:02.080
Can we just put your PR hat on?
link |
00:45:07.120
Why do you think you find Facebook creepy and not trust them as do majority of the population?
link |
00:45:14.200
So, out of the Silicon Valley companies,
link |
00:45:16.280
I saw, like, not an approval rating, but there's a ranking of how much people trust companies,
link |
00:45:21.120
and Facebook is in the gutter.
link |
00:45:23.080
In the gutter, including people inside of Facebook.
link |
00:45:25.600
So what do you attribute that to?
link |
00:45:28.000
Because when I...
link |
00:45:29.000
Come on.
link |
00:45:30.000
You don't find it creepy that right now we're talking that I might walk out on the street
link |
00:45:32.240
right now and some unknown person who I don't know kind of comes up to me and says,
link |
00:45:36.000
I hear you're going to India.
link |
00:45:37.500
I mean, that's not even Facebook.
link |
00:45:38.880
That's just a...
link |
00:45:39.880
I want transparency in human society.
link |
00:45:42.560
I want to have...
link |
00:45:43.560
If you know something about me, there's actually some reason you know something about me.
link |
00:45:47.080
It's something that, if I look at it later and kind of audit it, I approve of.
link |
00:45:51.560
You know something about me because you care in some way.
link |
00:45:54.560
There's a caring relationship even or an economic one or something.
link |
00:45:58.240
Not just that you're someone who could exploit it in ways I don't know about or care about
link |
00:46:02.000
or I'm troubled by or whatever.
link |
00:46:05.240
And we're in a world right now where that happens way too much, and Facebook knows
link |
00:46:09.640
things about a lot of people and could exploit it and does exploit it at times.
link |
00:46:14.720
I think most people do find that creepy.
link |
00:46:16.880
It's not for them.
link |
00:46:17.880
It's not...
link |
00:46:18.880
Facebook does not do it because they care about them in any real sense.
link |
00:46:23.440
And they shouldn't.
link |
00:46:24.440
They should not be a big brother caring about us.
link |
00:46:26.760
That is not the role of a company like that.
link |
00:46:28.840
Why not?
link |
00:46:29.840
Wait, not the big brother part but the caring, the trust thing.
link |
00:46:32.200
I mean, don't those companies...
link |
00:46:34.800
Just to linger on it because a lot of companies have a lot of information about us.
link |
00:46:38.400
I would argue that there's companies like Microsoft that has more information about us
link |
00:46:43.160
than Facebook does and yet we trust Microsoft more.
link |
00:46:46.040
But Microsoft is pivoting.
link |
00:46:47.560
Microsoft has decided this is really important.
link |
00:46:51.400
We don't want to do creepy things.
link |
00:46:53.360
We really want people to trust us to actually only use information in ways that they really
link |
00:46:56.760
would approve of, that we don't decide.
link |
00:47:00.400
And I'm just kind of adding that the health of a market is that when I connect a producer
link |
00:47:06.680
and a consumer, not just a random producer or consumer, it's people who see
link |
00:47:10.200
each other.
link |
00:47:11.200
They may not know each other, but they sense that if they transact, some happiness will
link |
00:47:14.440
go up on both sides.
link |
00:47:16.000
If a company helps me to do that at moments of my choosing, then fine.
link |
00:47:22.840
So and also think about the difference between browsing versus buying, right?
link |
00:47:28.560
There are moments in my life, I just want to buy a gadget or something.
link |
00:47:31.720
I need something for that moment.
link |
00:47:33.080
I need some ammonia for my house or something because I've got a problem with a spill.
link |
00:47:37.360
I want to just go in.
link |
00:47:38.360
I don't want to be advertised at that moment.
link |
00:47:40.040
I don't want to be led astray.
link |
00:47:42.440
That's annoying.
link |
00:47:43.440
Let me just go in and have it be extremely easy to do what I want.
link |
00:47:49.040
Other moments I might say, no, it's like today I'm going to the shopping mall.
link |
00:47:52.440
I want to walk around and see things and see people and be exposed to stuff.
link |
00:47:55.560
So I want control over that though.
link |
00:47:56.800
I don't want the company's algorithms to decide for me, right?
link |
00:48:00.200
And I think that's the thing.
link |
00:48:01.200
It's a total loss of control if Facebook thinks they should take the control from us of deciding
link |
00:48:05.360
when we want to have certain kinds of information, when we don't, what information that is, how
link |
00:48:09.200
much it relates to what they know about us that we didn't really want them to know about
link |
00:48:12.480
us.
link |
00:48:13.480
I don't want them to be helping me in that way.
link |
00:48:15.840
I don't want them helping by deciding that they have control over what I want
link |
00:48:21.640
and when.
link |
00:48:22.640
I totally agree.
link |
00:48:23.640
So Facebook, by the way, I have this optimistic thing where I think Facebook has the kind
link |
00:48:28.560
of personal information about us that could create a beautiful thing.
link |
00:48:32.480
So I'm really optimistic of what Facebook could do.
link |
00:48:36.200
It's not what it's doing, but what it could do.
link |
00:48:38.680
So I don't see that, I think that optimism is misplaced because you have to have a business
link |
00:48:43.400
model behind these things.
link |
00:48:45.400
Creating a beautiful thing is really, let's be clear,
link |
00:48:48.480
about something that people would value.
link |
00:48:51.400
And I don't think they have that business model.
link |
00:48:53.840
And I don't think they will suddenly discover it by what, you know, a long hot shower.
link |
00:48:58.920
I disagree.
link |
00:48:59.920
I disagree in terms of, you can discover a lot of amazing things in a shower.
link |
00:49:04.840
So I didn't say that.
link |
00:49:06.240
I said they won't come.
link |
00:49:07.240
They won't.
link |
00:49:08.240
But in the shower, I think a lot of other people will discover it.
link |
00:49:11.280
So I should also, full disclosure, say there's a company called United
link |
00:49:15.240
Masters, which I'm on their board and they've created this music market and they have 100,000
link |
00:49:19.080
artists now signed on and they've done things like gone to the NBA and the NBA, the music
link |
00:49:23.520
you find behind NBA clips right now is their music, right?
link |
00:49:26.960
That's a company that had the right business model in mind from the get-go, right, and executed
link |
00:49:31.920
on that.
link |
00:49:32.920
And from day one, there was value brought to, so here you have a kid who made some songs
link |
00:49:37.200
who suddenly their songs are on the NBA website, right?
link |
00:49:41.240
That's really economic value to people.
link |
00:49:43.440
And so, you know.
link |
00:49:45.480
So you and I differ on the optimism of being able to sort of change the direction of the
link |
00:49:52.520
Titanic, right?
link |
00:49:54.400
So I.
link |
00:49:55.400
Yeah.
link |
00:49:56.400
I'm older than you.
link |
00:49:57.400
So I think the Titanic's going to crash.
link |
00:50:00.400
Got it.
link |
00:50:01.400
But just to elaborate, because I totally agree with you and I just want to know how difficult
link |
00:50:06.000
you think this problem is of, so for example, I want to read some news and I would, there's
link |
00:50:12.600
a lot of times in the day where something makes me either smile or think in a way where
link |
00:50:17.880
I like consciously think this really gave me value.
link |
00:50:20.840
Like I sometimes listen to the daily podcast in the New York Times, way better than the
link |
00:50:26.480
New York Times themselves, by the way, for people listening.
link |
00:50:29.320
That's like real journalism is happening for some reason in the podcast space.
link |
00:50:32.600
It doesn't make sense to me.
link |
00:50:33.800
But often I listen to it for 20 minutes and I would be willing to pay for that, like $5,
link |
00:50:39.200
$10 for that experience and how difficult.
link |
00:50:44.120
That's kind of what you're getting at is that little transaction.
link |
00:50:48.240
How difficult is it to create a frictionless system like Uber has, for example, for other
link |
00:50:52.680
things?
link |
00:50:53.680
What's your intuition there?
link |
00:50:55.320
So first of all, I pay little bits of money to, you know, there's something called
link |
00:50:58.720
Quartz that does financial things.
link |
00:51:00.360
I like Medium as a site; I don't pay there, but I would.
link |
00:51:04.520
You had a great post on Medium.
link |
00:51:06.360
I would have loved to pay you a dollar and not others.
link |
00:51:10.280
I wouldn't have wanted it per se because there should be also sites where that's not actually
link |
00:51:15.560
the goal.
link |
00:51:16.560
The goal is to actually have a broadcast channel that I monetize in some other way if I chose
link |
00:51:20.240
to.
link |
00:51:21.240
I mean, I could now.
link |
00:51:22.480
People know about it.
link |
00:51:23.480
I could.
link |
00:51:24.480
I'm not doing it, but that's fine with me.
link |
00:51:26.360
Also the musicians who are making all this music, I don't think the right model is that
link |
00:51:29.760
you pay a little subscription fee to them, right?
link |
00:51:32.880
Because people can copy the bits too easily, and that's just not where the value
link |
00:51:35.880
is.
link |
00:51:36.880
The value is that a connection was made between real human beings; then you can
link |
00:51:39.440
follow up on that, right?
link |
00:51:41.440
And create yet more value.
link |
00:51:42.960
So no, I think there's a lot of open questions here.
link |
00:51:46.560
Hard open questions, but also, yeah, I do want good recommendation systems that recommend
link |
00:51:50.120
cool stuff to me.
link |
00:51:51.360
But it's pretty hard, right?
link |
00:51:52.360
I don't like them to recommend stuff just based on my browsing history.
link |
00:51:55.840
I don't like that they're based on stuff they know about me, quote unquote.
link |
00:51:59.000
What's unknown about me is the most interesting.
link |
00:52:00.840
So this is the really interesting question.
link |
00:52:03.600
We may disagree, maybe not.
link |
00:52:05.840
I think that I love recommender systems and I want to give them everything about me in
link |
00:52:12.120
a way that I trust.
link |
00:52:13.400
Yeah, but you don't.
link |
00:52:14.840
Because so for example, this morning I clicked on, you know, I was pretty sleepy this morning.
link |
00:52:19.920
I clicked on a story about the Queen of England, right?
link |
00:52:24.360
I do not give a damn about the Queen of England.
link |
00:52:26.400
I really do not.
link |
00:52:27.520
But it was clickbait.
link |
00:52:28.520
It kind of looked funny and I had to say, what the heck are they talking about there?
link |
00:52:31.520
I don't want to have my life, you know, heading that direction.
link |
00:52:34.000
Now that's in my browsing history.
link |
00:52:36.160
The system, or any reasonable system, will think that I care about the Queen of England.
link |
00:52:40.280
But that's browsing history.
link |
00:52:41.280
Right, but you're saying all the traces, all the digital exhaust or whatever, that's been
link |
00:52:44.640
kind of the model.
link |
00:52:45.640
If you collect all this stuff, you're going to figure all of us out.
link |
00:52:48.520
Well, if you're trying to figure out like kind of one person, like Trump or something,
link |
00:52:51.200
maybe you could figure him out.
link |
00:52:52.680
But if you're trying to figure out, you know, 500 million people, you know, no way, no way.
link |
00:52:58.000
Do you think so?
link |
00:52:59.000
No, I do.
link |
00:53:00.000
I think so.
link |
00:53:01.000
I think we humans are just amazingly rich and complicated.
link |
00:53:02.600
Every one of us has our little quirks.
link |
00:53:04.080
Every one of us has our little things that could intrigue us, that we don't even know will
link |
00:53:06.880
intrigue us.
link |
00:53:07.880
And there's no sign of it in our past.
link |
00:53:10.000
But by God, there it comes and, you know, you fall in love with it.
link |
00:53:12.960
And I don't want a company trying to figure that out for me and anticipate that.
link |
00:53:15.840
Okay, well, I want them to provide a forum, a market, a place that I kind of go and by
link |
00:53:21.280
hook or by crook, this happens.
link |
00:53:23.320
You know, I'm walking down the street and I hear some Chilean music being played and
link |
00:53:26.440
I never knew I liked Chilean music.
link |
00:53:28.080
Wow.
link |
00:53:29.080
So there is that side and I want them to provide a limited but, you know, interesting place
link |
00:53:33.680
to go.
link |
00:53:34.680
Right.
link |
00:53:35.680
And so don't try to use your AI to kind of, you know, figure me out and then put me in
link |
00:53:39.760
a world where you figured me out, you know, no, create huge spaces for human beings where
link |
00:53:45.120
our creativity and our style will be enriched and come forward, and there'll be a lot more
link |
00:53:50.080
transparency.
link |
00:53:51.080
I won't have people randomly, anonymously putting comments up and especially based on
link |
00:53:55.480
stuff they know about me. The fact is, you know, we are so broken right now.
link |
00:54:00.080
If you're, you know, especially if you're a celebrity, but, you know, it's about anybody
link |
00:54:02.920
that anonymous people are hurting lots and lots of people right now.
link |
00:54:06.640
And that's part of this thing that Silicon Valley is thinking that, you know, just collect
link |
00:54:10.080
all this information and use it in a great way.
link |
00:54:12.520
So, you know, I'm not a pessimist, I'm very much an optimist by nature, but I think that's
link |
00:54:16.440
just been the wrong path for the whole technology to take.
link |
00:54:19.960
Be more limited, create, let humans rise up.
link |
00:54:24.040
Don't try to replace them.
link |
00:54:25.760
That's the AI mantra.
link |
00:54:26.760
Don't try to anticipate them.
link |
00:54:28.680
Don't try to predict them because you're not going to, you're not going to be able to do
link |
00:54:32.920
those things.
link |
00:54:33.920
You're going to make things worse.
link |
00:54:34.920
Okay.
link |
00:54:35.920
So, right now, just give this a chance.
link |
00:54:38.760
Right now, the recommender systems are the creepy people in the shadow watching your
link |
00:54:43.840
every move.
link |
00:54:45.520
So they're looking at traces of you.
link |
00:54:47.800
They're not directly interacting with you, sort of your close friends and family, the
link |
00:54:53.000
way they know you is by having conversation, by actually having interactions back and forth.
link |
00:54:57.400
Do you think there's a place for recommender systems, sort of to step, because you just
link |
00:55:02.360
emphasize the value of human to human connection.
link |
00:55:04.520
But yeah, let's give it a chance, AI human connection.
link |
00:55:07.840
Is there a role for an AI system to have conversations with you in terms of, to try
link |
00:55:13.160
to figure out what kind of music you like, not by just watching what you listen to, but
link |
00:55:16.840
actually having a conversation, natural language or otherwise.
link |
00:55:19.640
Yeah.
link |
00:55:20.640
So I'm not against this, I just wanted to push back against the, maybe you were saying
link |
00:55:24.160
you have optimism for Facebook.
link |
00:55:25.160
So there I think it's misplaced.
link |
00:55:27.160
But I think that distributing...
link |
00:55:28.660
I'm the one that's defending Facebook.
link |
00:55:30.160
Yeah.
link |
00:55:31.160
No.
link |
00:55:32.160
So good for you.
link |
00:55:33.160
Go for it.
link |
00:55:34.160
That's a hard spot to be in.
link |
00:55:35.160
Yeah.
link |
00:55:36.160
No.
link |
00:55:37.160
Good.
link |
00:55:38.160
Human interaction, like in our daily lives, the context around me in my own home, is something that
link |
00:55:39.720
I don't want some big company to know about at all.
link |
00:55:41.360
But I would be more than happy to have technology help me with it.
link |
00:55:44.200
Which kind of technology?
link |
00:55:45.200
Well, you know, just...
link |
00:55:46.200
Alexa.
link |
00:55:47.200
Amazon.
link |
00:55:48.200
Well, Alexa's done right.
link |
00:55:49.200
I think Alexa's a research platform right now more than anything else.
link |
00:55:52.200
But Alexa done right, you know, could do things like I leave the water running in my garden
link |
00:55:56.520
and I say, hey, Alexa, the water's running in my garden.
link |
00:55:59.240
And even have Alexa figure out that that means when my wife comes home that she should be
link |
00:56:02.080
told about that.
link |
00:56:03.080
That's a little bit of reasoning.
link |
00:56:04.800
I would call that AI, and it's not any kind of stretch; it's a little bit of reasoning.
link |
00:56:08.440
And it actually kind of would make my life a little easier and better.
link |
00:56:11.040
And you know, I wouldn't call this a wow moment, but I kind of think that overall it raises human
link |
00:56:15.760
happiness up to have that kind of thing.
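The little bit of reasoning described above, a noted fact resurfacing when the household context changes, can be sketched in a few lines of Python. This is purely illustrative: `HomeAssistant`, `tell`, and `on_event` are invented names for this sketch, not any real Alexa API.

```python
from dataclasses import dataclass, field

@dataclass
class HomeAssistant:
    # Facts the user has told the assistant ("the water is running...").
    facts: list = field(default_factory=list)

    def tell(self, fact: str) -> None:
        self.facts.append(fact)

    def on_event(self, event: str) -> list:
        # The tiny inference step: when the household context changes
        # (e.g. a spouse arrives home), surface the remembered facts.
        if event == "spouse_arrives_home":
            return [f"Heads up: {f}" for f in self.facts]
        return []

home = HomeAssistant()
home.tell("the water is running in the garden")
print(home.on_event("spouse_arrives_home"))
```

A real system would need speaker identification, consent, and the kind of transparency and control discussed below; the point here is only that the reasoning itself is small.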
link |
00:56:18.360
And when you're lonely, Alexa knowing your loneliness?
link |
00:56:20.960
No, no, I don't want Alexa to be intrusive.
link |
00:56:25.640
And I don't want just the designer of the system to kind of work all this out.
link |
00:56:28.480
I really want to have a lot of control and I want transparency and control.
link |
00:56:32.480
And if the company can stand up and give me that in the context of technology, I think
link |
00:56:36.800
they're going to first of all be way more successful than our current generation.
link |
00:56:39.360
And like I said, I was mentioning Microsoft; you know, I really think they're pivoting
link |
00:56:43.280
to kind of be the trusted old uncle.
link |
00:56:45.200
You know, I think that they get that this is the way to go, that if people find
link |
00:56:49.280
technology empowers them to have more control, not just over privacy, but
link |
00:56:53.920
over this rich set of interactions, people are going to like that a lot more.
link |
00:56:58.120
And that's, that's the right business model going forward.
link |
00:57:00.600
What does control over privacy look like?
link |
00:57:02.280
Do you think you should be able to just view all the data they have?
link |
00:57:04.840
No, it's much more than that.
link |
00:57:06.000
I mean, first of all, it should be an individual decision.
link |
00:57:07.920
Some people don't want privacy.
link |
00:57:09.240
They want their whole life out there.
link |
00:57:10.760
Other people want it.
link |
00:57:13.760
Privacy is not a zero-one thing.
link |
00:57:16.080
It's not a legal thing.
link |
00:57:17.080
It's not just about which data is available and which is not.
link |
00:57:20.320
I like to recall to people that, you know, a couple of hundred years ago, there
link |
00:57:24.880
were not really big cities.
link |
00:57:26.240
Everyone lived in the countryside and in villages, and in villages, everybody knew everything
link |
00:57:30.040
about you.
link |
00:57:31.040
You didn't have any privacy.
link |
00:57:32.760
Is that bad?
link |
00:57:33.760
Are we better off now?
link |
00:57:34.760
Well, you know, arguably no, because what did you get for that loss of at least certain
link |
00:57:38.880
kinds of privacy?
link |
00:57:40.240
Well, people helped each other, because they knew everything about you.
link |
00:57:45.440
If they knew something bad was happening,
link |
00:57:46.440
they would help you with that.
link |
00:57:46.440
Right?
link |
00:57:47.440
And now you live in a big city, and no one knows you at all.
link |
00:57:48.440
You get no help.
link |
00:57:50.880
So the answer kind of depends.
link |
00:57:52.720
I want certain people who I trust and there should be relationships.
link |
00:57:56.360
I should kind of manage all those, but who knows what about me?
link |
00:57:59.040
I should have some agency there.
link |
00:58:00.800
I shouldn't be adrift in a sea of technology where I have no agency.
link |
00:58:04.720
I don't want to go reading things and checking boxes.
link |
00:58:08.600
So I don't know how to do that.
link |
00:58:09.920
And I'm not a privacy researcher per se.
link |
00:58:11.880
I recognize the vast complexity of this.
link |
00:58:14.120
It's not just technology.
link |
00:58:15.120
It's not just legal scholars meeting technologists.
link |
00:58:18.920
There's got to be kind of a whole set of layers around it.
link |
00:58:20.920
And so when I alluded to this emerging engineering field, this is a big part of it.
link |
00:58:25.760
Well, like when electrical engineering came, I wasn't around at the time,
link |
00:58:29.760
but you just didn't plug electricity into walls and have it all kind of work.
link |
00:58:34.120
You know, you had to have, like, Underwriters Laboratories that reassured you that that plug
link |
00:58:37.480
is not going to burn up your house and that that machine will do this and that and everything.
link |
00:58:41.640
There'll be whole people who can install things.
link |
00:58:44.440
There'll be people who can watch the installers.
link |
00:58:46.240
There'll be whole layers, you know, an onion of these kinds of things.
link |
00:58:49.720
And for things as deep and interesting as privacy, which is at least as interesting as electricity,
link |
00:58:55.800
that's going to take decades to kind of work out, but it's going to require a lot of new
link |
00:58:58.920
structures that we don't have right now.
link |
00:59:00.200
So it's kind of hard to talk about it.
link |
00:59:02.080
And you're saying there's a lot of money to be made if you get it right.
link |
00:59:04.680
Oh yeah, a lot of money to be made.
link |
00:59:06.920
And all these things that provide human services and people recognize them as useful parts
link |
00:59:10.600
of their lives.
link |
00:59:12.360
So yeah.
link |
00:59:14.240
So yeah, the dialogue sometimes goes from the exuberant technologists to the "no technology
link |
00:59:19.640
is good" kind of thing.
link |
00:59:20.760
And that's, you know, in a public discourse, you know, in newsrooms, you see too much of
link |
00:59:24.440
this kind of thing.
link |
00:59:26.240
And the sober discussions in the middle, which are the challenging ones to have, are where
link |
00:59:29.320
we need to be having our conversations.
link |
00:59:31.520
And you know, actually, there's not many forums for those.
link |
00:59:36.440
You know, that's kind of what I would look for.
link |
00:59:39.160
Maybe I could go and I could read a comment section of something and it would actually
link |
00:59:42.040
be this kind of dialogue going back and forth.
link |
00:59:44.520
You don't see much of this, right?
link |
00:59:45.760
Which is why actually there's a resurgence of podcasts, of all things, because people are
link |
00:59:49.800
really hungry for conversation, but the technology is not helping much.
link |
00:59:55.720
So comment sections of anything, including YouTube, are hurting and not helping.
link |
01:00:03.360
And you think technically speaking is possible to help?
link |
01:00:07.800
I don't know the answers, but it's less anonymity, a little more locality, you know,
link |
01:00:14.400
worlds that you kind of enter in and you trust the people there in those worlds so that when
link |
01:00:17.840
you start having a discussion, you know, not only is that people are not going to hurt
link |
01:00:20.560
you, but it's not going to be a total waste of your time because there's a lot of wasting
link |
01:00:23.680
of time that, you know, a lot of us, I pulled out of Facebook early on because it was clearly
link |
01:00:27.240
going to waste a lot of my time, even though there was some value.
link |
01:00:31.320
And so yeah, worlds that are somehow you enter in, you know what you're getting, and it's
link |
01:00:35.040
kind of appeals to you, new things might happen, but you kind of have some trust in that world.
link |
01:00:40.800
And there's some deep, interesting, complex psychological aspects around anonymity, how
link |
01:00:46.440
that changes human behavior that's quite dark.
link |
01:00:49.920
Quite dark, yeah.
link |
01:00:50.920
I think a lot of us are, especially those of us who really loved the advent of technology,
link |
01:00:55.400
I loved social networks when they came out, I didn't see any negatives there at all.
link |
01:00:59.520
But then I started seeing comment sections, I think it was maybe, you know, the CNN or
link |
01:01:03.720
something.
link |
01:01:04.720
And I started to go, wow, this darkness I just did not know about, and our technology
link |
01:01:10.040
is now amplifying it.
link |
01:01:11.440
So sorry for the big philosophical question, but on that topic, do you think human beings,
link |
01:01:15.800
because you've also, out of all things, had a foot in psychology too, do you think human
link |
01:01:21.360
beings are fundamentally good?
link |
01:01:23.840
Like all of us have good intent that could be mined, or is it, depending on context and
link |
01:01:32.280
environment, everybody could be evil?
link |
01:01:34.960
So my answer is fundamentally good, but fundamentally limited.
link |
01:01:39.240
All of us have very, you know, blinkers on.
link |
01:01:41.320
We don't see the other person's pain that easily.
link |
01:01:43.920
We don't see the other person's point of view that easily.
link |
01:01:46.680
We're very much in our own head, in our own world.
link |
01:01:49.880
And on my good days, I think that technology could open us up to more perspectives and
link |
01:01:54.080
more, less blinkered and more understanding, you know, a lot of wars in human history happened
link |
01:01:58.560
because of just ignorance.
link |
01:01:59.800
They didn't, they thought the other person was doing this, well, that person wasn't
link |
01:02:02.600
doing this, and we have huge amounts of that.
link |
01:02:05.400
But in my lifetime, I've not seen technology really help in that way yet.
link |
01:02:09.200
And I do, I do, I do believe in that, but, you know, no, I think fundamentally humans
link |
01:02:13.600
are good.
link |
01:02:14.600
People suffer.
link |
01:02:15.600
People have grievances, and so they have grudges, and those things cause them to do things they
link |
01:02:18.480
probably wouldn't want.
link |
01:02:19.960
They regret it often.
link |
01:02:22.600
So no, I think it's, you know, part of the progress of technology to indeed
link |
01:02:27.880
make it a little easier to be the real good person you actually are.
link |
01:02:31.360
Well, but do you think individual human life or society can be modeled as an optimization
link |
01:02:39.480
problem?
link |
01:02:40.480
Not the way I think, typically. I mean, you're talking about one of the most
link |
01:02:44.680
complex phenomena in, you know, all of existence, whether individual human life or society
link |
01:02:49.600
as a whole.
link |
01:02:50.600
Both.
link |
01:02:51.600
Both.
link |
01:02:52.600
I mean, individual human life is amazingly complex, and so, you know, optimization
link |
01:02:56.880
is kind of just one branch of mathematics that talks about certain kinds of things.
link |
01:03:00.040
And it just feels way too limited for the complexity of such things.
link |
01:03:04.520
What properties of optimization problems do you think, so, do you think the most interesting
link |
01:03:09.440
problems could be solved through optimization? What kind of properties does that surface
link |
01:03:13.840
have: nonconvexity, convexity, linearity, all those kinds of things, saddle points?
link |
01:03:19.720
Well, so optimization is just one piece of mathematics.
link |
01:03:22.120
You know, even in our era, we're aware that, say, sampling
link |
01:03:26.600
is coming up with examples of something that come from a distribution.
link |
01:03:30.640
What's optimization?
link |
01:03:31.800
What's sampling?
link |
01:03:32.800
Well, if you do a certain kind of math, you can try
link |
01:03:35.920
to blend them and make them seem to be sort of the same thing.
link |
01:03:38.680
But optimization is roughly speaking, trying to, uh, find a point that, um, a single point
link |
01:03:43.920
that is the optimum of a criterion function of some kind, um, and sampling is trying to,
link |
01:03:50.240
from that same surface, treat that as a distribution or density and find points that have
link |
01:03:55.480
high density.
link |
01:03:56.480
So, um, I want the entire distribution and the sampling paradigm and I want the, um,
link |
01:04:02.360
you know, the single point, that's the best point in the, in the sample, in the, uh, optimization
link |
01:04:06.640
paradigm.
link |
01:04:07.640
Now, if you were optimizing in the space of probability measures, the output of that could
link |
01:04:11.880
be a whole probability distribution.
link |
01:04:13.080
So you can start to make these things the same, but in mathematics, if you go too high
link |
01:04:16.840
up that kind of abstraction hierarchy, you start to lose the, uh, you know, the ability
link |
01:04:21.320
to do the interesting theorems.
link |
01:04:22.880
So you kind of don't try that.
link |
01:04:23.880
You don't try to overly abstract.
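The optimization-versus-sampling contrast drawn above can be made concrete in a few lines. This is an illustrative sketch only: the surface f, the step size, and the sampler settings are all made up for the example.

```python
import math
import random

random.seed(0)

# One criterion surface: f peaks at x = 2 (purely illustrative).
def f(x):
    return -(x - 2.0) ** 2

# Optimization paradigm: gradient ascent returns a single best point.
x = 0.0
for _ in range(200):
    x += 0.1 * (-2.0 * (x - 2.0))  # step along the derivative of f
optimum = x

# Sampling paradigm: a Metropolis random walk treats exp(f) as a density
# and returns a whole collection of points, not one point.
samples, cur = [], 0.0
for _ in range(20000):
    prop = cur + random.gauss(0.0, 1.0)
    if math.log(random.random()) < f(prop) - f(cur):
        cur = prop
    samples.append(cur)

mean = sum(samples) / len(samples)
print(optimum, mean)  # both concentrate near 2; only the sampler gives a distribution
```

The optimizer's output is one number; the sampler's output is twenty thousand, whose spread reflects the whole density.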
link |
01:04:26.960
So as a small tangent, what kind of world do you, do you find more appealing?
link |
01:04:31.560
One that is deterministic or stochastic?
link |
01:04:35.360
Uh, well, that's easy.
link |
01:04:36.880
I mean, I'm a statistician, you know, the world is highly stochastic.
link |
01:04:40.120
Wait, I don't know what's going to happen in the next five minutes.
link |
01:04:42.160
Right.
link |
01:04:43.160
Cause what you're going to ask, what we're going to do, what I'll say, massive uncertainty,
link |
01:04:47.400
you know, massive uncertainty.
link |
01:04:48.800
And so the best I can do is have some rough sense of a probability distribution on things
link |
01:04:53.080
and somehow use that in my reasoning about what to do now.
link |
01:04:58.280
So what does this look like at scale, when you have multi agent systems?
link |
01:05:07.080
So optimization makes a lot more sense, sort of, uh, at least to me,
link |
01:05:13.800
from a robotics perspective, for a single robot, for a single agent, trying to optimize
link |
01:05:17.880
some objective function. Uh, when you start to enter the real world, these game
link |
01:05:23.400
theoretic concepts start popping up. How do you see optimization in this?
link |
01:05:30.600
Because you've talked about markets at scale.
link |
01:05:32.680
What does that look like?
link |
01:05:33.680
Do you see it as optimization?
link |
01:05:34.680
Do you see it as a sampling?
link |
01:05:36.120
Do you see, like, how should you... Yeah, these all blend together.
link |
01:05:39.240
Um, and a system designer thinking about how to build an incentivized system will have
link |
01:05:43.800
a blend of all these things.
link |
01:05:44.880
So you know, a particle in a potential well is optimizing a functional called a Lagrangian.
link |
01:05:49.880
Right.
link |
01:05:50.880
The particle doesn't know that.
link |
01:05:51.880
There's no algorithm running that does that.
link |
01:05:54.600
It just happens.
link |
01:05:55.600
It's, it's, so it's a description mathematically of something that helps us understand as analysts,
link |
01:05:59.320
what's happening.
link |
01:06:00.320
All right.
link |
01:06:01.320
And so the same thing will happen when we talk about, you know, mixtures of humans and
link |
01:06:03.520
computers and markets and so on and so forth, there'll be certain principles that allow
link |
01:06:07.080
us to understand what's happening and whether or not the actual algorithms are being used
link |
01:06:10.320
in any sense is not clear.
link |
01:06:12.360
Um, now at, at some point I may have set up a multi agent or market kind of system and
link |
01:06:19.080
I'm now thinking about an individual agent in that system and they're asked to do some
link |
01:06:23.320
task and they're incentivized and somehow they get certain signals and they, they have
link |
01:06:26.520
some utility.
link |
01:06:27.520
Maybe what they will do at that point is they just won't know the answer.
link |
01:06:30.800
They may have to optimize to find an answer.
link |
01:06:32.640
Okay.
link |
01:06:33.640
So an optimization could be embedded inside of an overall market, uh, you know, and game theory
link |
01:06:37.680
is, is very, very broad.
link |
01:06:39.320
Um, it is often studied very narrowly for certain kinds of problems.
link |
01:06:43.320
Um, but it's roughly speaking, this is just the, I don't know what you're going to do.
link |
01:06:47.920
I kind of anticipate that a little bit and you anticipate what I'm anticipating and we
link |
01:06:51.640
kind of go back and forth in our own minds.
link |
01:06:53.320
We run kind of thought experiments.
link |
01:06:55.480
You've talked about this interesting point in terms of game theory, you know, most optimization
link |
01:07:00.400
problems really hate saddle points.
link |
01:07:02.840
Maybe you can describe what saddle points are, but I had heard you kind of mentioned
link |
01:07:07.040
that there's a, there's a branch of optimization that you could try to explicitly look for
link |
01:07:12.680
saddle points as a good thing.
link |
01:07:14.680
Oh, not optimization.
link |
01:07:15.840
That's just game theory.
link |
01:07:16.840
That's, so, uh, there are all kinds of different equilibria in game theory and some of them are
link |
01:07:21.080
highly explanatory of behavior.
link |
01:07:23.200
They're not attempting to be algorithmic.
link |
01:07:24.720
They're just trying to say, if you happen to be at this equilibrium, you would see certain
link |
01:07:29.040
kind of behavior and we see that in real life.
link |
01:07:30.960
That's what an economist wants to do, especially behavioral economists. In continuous,
link |
01:07:38.120
uh, differential game theory, you're in continuous spaces, and some of the simplest
link |
01:07:43.100
equilibria are saddle points. A Nash equilibrium is a saddle point, it's a special kind of
link |
01:07:47.120
saddle point.
link |
01:07:48.120
So, uh, classically in game theory, you were trying to find Nash equilibria, and in algorithmic
link |
01:07:53.560
game theory, you're trying to find algorithms that would find them, uh, and so you're trying
link |
01:07:56.840
to find saddle points.
link |
01:07:57.840
I mean, so that's, that's literally what you're trying to do.
link |
01:08:00.000
Um, but you know, any economist knows that Nash equilibria, uh, have their limitations.
link |
01:08:04.160
They are definitely not that explanatory in many situations.
link |
01:08:08.200
They're not what you really want.
link |
01:08:09.680
Um, there are other kinds of equilibria, and there are names associated with these because they came
link |
01:08:14.200
from history with certain people working on them, but there'll be new ones emerging.
link |
01:08:18.080
So you know, one example is a Stackelberg equilibrium.
link |
01:08:21.200
So you know, Nash, you and I are both playing this game against each other or for each other,
link |
01:08:25.840
maybe it's cooperative and we're both going to think it through and then we're going to
link |
01:08:29.000
decide and we're going to do our thing simultaneously.
link |
01:08:32.520
You know, in a Stackelberg, no, I'm going to be the first mover.
link |
01:08:34.640
I'm going to make a move.
link |
01:08:35.880
You're going to look at my move and then you're going to make yours.
link |
01:08:38.000
Now, since I know you're going to look at my move, I anticipate what you're going to
link |
01:08:41.760
do.
link |
01:08:42.760
And so I don't do something stupid. But then I know that you were also anticipating
link |
01:08:46.360
me.
link |
01:08:47.360
So we're kind of going back and forth.
link |
01:08:48.360
But there is then a first mover thing.
link |
01:08:51.800
And so there are different equilibria, all right.
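The Nash-versus-Stackelberg contrast can be sketched with a tiny two-by-two game. The payoff tables here are invented for illustration (a coordination game with conflicting preferences), not anything discussed in the conversation.

```python
# Row player's (leader's) and column player's (follower's) payoffs.
A = [[2, 0], [0, 1]]   # leader prefers outcome (0, 0)
B = [[1, 0], [0, 2]]   # follower prefers outcome (1, 1)
moves = [0, 1]

# Pure-strategy Nash equilibria: simultaneous play, no unilateral deviation helps.
nash = [(i, j) for i in moves for j in moves
        if all(A[i][j] >= A[k][j] for k in moves)
        and all(B[i][j] >= B[i][k] for k in moves)]

# Stackelberg: the leader commits first, anticipating the follower's
# best response to each possible commitment.
def best_response(i):
    return max(moves, key=lambda j: B[i][j])

leader_move = max(moves, key=lambda i: A[i][best_response(i)])
stackelberg = (leader_move, best_response(leader_move))

print(nash)         # [(0, 0), (1, 1)]: both coordination outcomes are Nash
print(stackelberg)  # (0, 0): the first mover steers play to its preferred one
```

Simultaneous play leaves two equilibria; committing first lets the leader pick the one it likes, which is the first-mover advantage being described.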
link |
01:08:54.920
And, uh, so just mathematically, yeah, these things have certain topologies and certain
link |
01:08:58.840
shapes that are like saddle points, and then algorithmically or dynamically, how do you move towards them?
link |
01:09:02.840
How do you move away from things?
link |
01:09:04.960
Um, you know, so some of these questions have answers.
link |
01:09:07.600
They've been studied.
link |
01:09:08.720
Others do not, and especially if it becomes stochastic, especially if there's large numbers
link |
01:09:13.160
of decentralized things, there's just, uh, you know, young people get in this field who
link |
01:09:16.840
kind of think it's all done because we have, you know, TensorFlow.
link |
01:09:19.440
Well, no, these are all open problems and they're really important and interesting.
link |
01:09:23.680
And it's about strategic settings.
link |
01:09:25.160
How do I collect data?
link |
01:09:26.160
I suppose I don't know what you're going to do because I don't know you very well, right?
link |
01:09:29.280
Well, I got to collect data about you.
link |
01:09:31.200
So maybe I want to push you in a part of the space where I don't know much about you.
link |
01:09:34.400
So I can get data.
link |
01:09:35.400
And then later I'll realize that you'll never, you'll never go there because of the way the
link |
01:09:39.120
game is set up.
link |
01:09:40.120
But, you know, that's part of the overall, you know, data analysis context.
link |
01:09:43.200
Yeah.
link |
01:09:44.200
Even the game of poker is a fascinating space.
link |
01:09:45.920
Yeah.
link |
01:09:46.920
Whenever there's any uncertainty or lack of information, it's, it's a super exciting
link |
01:09:50.320
space.
link |
01:09:51.320
Yeah.
link |
01:09:52.320
Uh, just lingering on optimization for a second.
link |
01:09:55.320
So when we look at deep learning, it's essentially a minimization of a complicated loss function.
link |
01:10:01.560
So is there something insightful or hopeful that you see in the kinds of function surfaces,
link |
01:10:07.360
the loss functions, that deep learning in the real world is trying to optimize over?
link |
01:10:13.760
Is there something interesting, or is it just the usual kinds of problems of optimization?
link |
01:10:20.000
I think from an optimization point of view, that surface, first of all, is pretty smooth.
link |
01:10:25.600
And secondly, if it's overparameterized, there's kind of lots of paths
link |
01:10:29.040
down to reasonable optima.
link |
01:10:31.520
And so kind of the getting downhill to the, to an optimum is, is viewed as not as hard
link |
01:10:35.600
as you might have expected in high dimensions.
link |
01:10:39.920
The fact that some optima tend to be really good ones and others not so good and you tend
link |
01:10:43.600
to find the good ones, is sort of something that still needs explanation.
link |
01:10:48.560
Yes.
link |
01:10:49.560
But, but the particular surface is coming from the particular generation of neural nets.
link |
01:10:53.520
I kind of suspect those will, those will change in 10 years.
link |
01:10:56.840
It will not be exactly those surfaces.
link |
01:10:58.440
There'll be some others, and optimization theory will help contribute to why other surfaces
link |
01:11:02.480
or why other algorithms.
link |
01:11:05.800
Layers of arithmetic operations with a little bit of nonlinearity, that's not, that didn't
link |
01:11:09.800
come from neuroscience per se.
link |
01:11:10.960
I mean, maybe in the minds of some of the people working on it, they were thinking about brains,
link |
01:11:14.520
but they were arithmetic circuits in all kinds of fields, you know, computer science control
link |
01:11:19.000
theory and so on.
link |
01:11:20.640
And that layers of these could transform things in certain ways and that if it's smooth, maybe
link |
01:11:24.800
you could, you know, find parameter values. It's
link |
01:11:31.040
a sort of big discovery that it's working.
link |
01:11:33.280
It's able to work at this scale, but I don't think that we're stuck with that and we're,
link |
01:11:39.080
we're certainly not stuck with that because we're understanding the brain.
link |
01:11:42.120
So in terms of, on the algorithm side, sort of gradient descent, do you think we're stuck
link |
01:11:46.320
with gradient descent or variants of it?
link |
01:11:49.320
What variants do you find interesting or do you think there'll be something else invented
link |
01:11:53.560
that is able to walk all over these optimization spaces in more interesting ways?
link |
01:11:59.720
So there's a co-design of the surface, that is the architecture, and the algorithm.
link |
01:12:04.680
So if you just ask if we stay with the kind of architectures that we have now, not just
link |
01:12:08.240
neural nets, but, you know, phase retrieval architectures or matrix completion architectures
link |
01:12:13.080
and so on.
link |
01:12:15.080
You know, I think we've kind of come to a place where stochastic gradient algorithms
link |
01:12:19.560
are dominant and there are versions that, you know, they're a little better than others.
link |
01:12:25.800
They, you know, have more guarantees, they're more robust and so on and there's ongoing
link |
01:12:29.840
research to kind of figure out which is the best one for which situation.
link |
01:12:34.240
But I think that that'll start to co-evolve, that it'll put pressure on the actual architecture,
link |
01:12:37.640
and so we shouldn't do it in this particular way, we should do it in a different way because
link |
01:12:40.800
this other algorithm is now available if you do it in a different way.
link |
01:12:45.320
So I can't really anticipate that co-evolution process, but you know, gradients
link |
01:12:51.560
are amazing mathematical objects.
link |
01:12:54.480
A lot of people who sort of study them more deeply mathematically are kind of
link |
01:13:01.120
shocked about what they are and what they can do.
link |
01:13:03.600
I mean, think about it this way: suppose that I tell you, if you move along the x axis,
link |
01:13:08.560
you get, you know, you go uphill in some objective by, you know, three units, whereas if you move
link |
01:13:13.880
along the y axis, you go uphill by seven units. Now, I'm going to only allow you to
link |
01:13:19.080
move a certain, you know, unit distance, right?
link |
01:13:22.440
What are you going to do?
link |
01:13:23.440
Well, most people will say, I'm going to go along the y axis, I'm getting the biggest
link |
01:13:27.240
bang for my buck, you know, and my buck is only one unit.
link |
01:13:30.280
So I'm going to put all of it in the y axis, right?
link |
01:13:33.920
And why should I even take any of my step size and put any of it in the x axis,
link |
01:13:39.360
because I'm getting less bang for my buck.
link |
01:13:41.480
That seems like a completely, you know, clear argument and it's wrong because the gradient
link |
01:13:46.880
direction is not to go along the y axis, it's to take a little bit of the x axis.
link |
01:13:51.800
And to understand that, you have to know some math.
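The math being alluded to is short: with gradient (3, 7), a unit step along the y axis gains 7, while a unit step along the normalized gradient gains the gradient's norm, sqrt(58), about 7.62. A minimal check:

```python
import math

g = (3.0, 7.0)                 # uphill rates along x and y

gain_y_only = g[1]             # spend the whole unit step on y: gain 7

# Spend it along g / ||g|| instead: the gain is (g . g) / ||g|| = ||g||.
norm = math.hypot(g[0], g[1])
gain_gradient = (g[0] ** 2 + g[1] ** 2) / norm

print(gain_y_only, gain_gradient)  # 7.0 vs. about 7.616: mixing in some x wins
```

So putting a little of the step into the "worse" axis really is the better move, which is exactly the counterintuitive point.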
link |
01:13:55.440
And so even a, you know, trivial, so called operator like gradient is not trivial and
link |
01:14:00.360
so, you know, exploiting its properties is still very, very important.
link |
01:14:04.000
Now we know that just gradient descent has got all kinds of problems, it can get stuck in
link |
01:14:06.840
many ways, and it doesn't have, you know, good dimension dependence and so on.
link |
01:14:10.960
So my own line of work recently has been about what kinds of stochasticity, how can we get
link |
01:14:15.960
good dimension dependence?
link |
01:14:16.960
How can we do the theory of that?
link |
01:14:19.240
And we've come up with pretty favorable results with certain kinds of stochasticity.
link |
01:14:22.720
We have sufficient conditions generally.
link |
01:14:25.000
We know if you, if you do this, we will give you a good guarantee.
link |
01:14:28.760
We don't have necessary conditions that it must be done a certain way in general.
link |
01:14:32.280
So stochasticity, how much randomness to inject into the, into the walking along the gradient.
link |
01:14:38.200
And what kind of randomness?
link |
01:14:40.000
Why is randomness good in this process?
link |
01:14:42.240
Why is stochasticity good?
link |
01:14:44.200
Yeah.
link |
01:14:45.200
So I can give you simple answers, but in some sense, again, it's kind of amazing.
link |
01:14:49.280
Stochasticity just means, you know, that particular features of a surface that could have hurt
link |
01:14:55.120
you if you were doing one thing, you know, deterministically won't hurt you because,
link |
01:15:00.000
you know, by chance, you know, there's very little chance that you would get hurt.
link |
01:15:03.560
And, you know, so here stochasticity just kind of saves you from some
link |
01:15:11.200
of the particular features of surfaces that, you know, in fact, if you think about, you
link |
01:15:16.720
know, surfaces that are discontinuous in a first derivative, like, you know, absolute
link |
01:15:19.960
value function, you will go down and hit that point where there's non differentiability.
link |
01:15:24.640
Right.
link |
01:15:25.640
And if you're running a deterministic algorithm at that point, you can really do something
link |
01:15:28.600
bad.
link |
01:15:29.600
Right.
link |
01:15:30.600
Whereas stochasticity just means it's pretty unlikely that's going to happen,
link |
01:15:32.800
that you're going to hit that point.
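A toy version of the absolute-value example: deterministic subgradient descent from a grid-aligned start lands exactly on the kink at zero, while the same iteration with a little injected noise essentially never visits that exact point. The step size, noise level, and seed are arbitrary illustrative choices.

```python
import random

random.seed(1)

def subgrad(x):
    # A subgradient of f(x) = |x|; the choice at x = 0 is arbitrary.
    return 1.0 if x > 0 else -1.0 if x < 0 else 0.0

step = 0.25

# Deterministic run: 1.0 -> 0.75 -> 0.5 -> 0.25 -> 0.0, then stuck at the kink.
x, hits_det = 1.0, 0
for _ in range(100):
    x -= step * subgrad(x)
    hits_det += (x == 0.0)

# Stochastic run: a small Gaussian perturbation makes an exact hit on the
# non-differentiable point a probability-zero event.
x, hits_sto = 1.0, 0
for _ in range(100):
    x -= step * subgrad(x) + random.gauss(0.0, 0.01)
    hits_sto += (x == 0.0)

print(hits_det, hits_sto)  # the deterministic run hits the kink; the noisy one doesn't
```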
link |
01:15:35.720
So, you know, it's, again, not trivial to analyze, but, especially in higher dimensions,
link |
01:15:40.960
also stochasticity, our intuition isn't very good about it, but it has properties that
link |
01:15:44.400
kind of are very appealing in high dimensions for kind of law of large numbers reasons.
link |
01:15:49.200
So it's all part of the mathematics. That's what's fun about working in the field,
link |
01:15:52.520
is that you get to try to understand this mathematics.
link |
01:15:57.040
But long story short, you know, partly empirically, it was discovered stochastic gradient is very
link |
01:16:01.200
effective, and theory kind of followed, I'd say. But I don't see that we're getting
link |
01:16:06.640
it clearly out of that.
link |
01:16:09.120
What's the most beautiful, mysterious, or profound idea to you in optimization?
link |
01:16:15.560
I don't know the most, but let me just say that, you know, Nesterov's work on Nesterov
link |
01:16:20.440
acceleration to me is pretty surprising and pretty deep.
link |
01:16:26.280
Can you elaborate?
link |
01:16:27.280
Well, Nesterov acceleration is just this: suppose that we are going to use gradients
link |
01:16:32.240
to move around in a space; for the reasons I've alluded to, there are nice directions
link |
01:16:35.960
to move.
link |
01:16:37.280
And suppose that I tell you that you're only allowed to use gradients, you're going
link |
01:16:40.520
to be this local person who can only sense kind of the change in the surface.
link |
01:16:47.400
But I'm going to give you kind of a computer that's able to store all your previous gradients.
link |
01:16:50.880
And so you start to learn something about the surface.
link |
01:16:55.000
And I'm going to restrict you to maybe move in the direction of like a linear span of
link |
01:16:58.600
all the gradients.
link |
01:16:59.600
So you can't kind of just move in some arbitrary direction, right?
link |
01:17:02.840
So now we have a well defined mathematical complexity model.
link |
01:17:05.680
There are certain classes of algorithms that can do that and others that can't.
link |
01:17:09.280
And we can ask for certain kinds of surfaces, how fast can you get down to the optimum?
link |
01:17:13.800
So there are answers to these.
link |
01:17:14.920
So for a, you know, a smooth convex function, there's an answer, which is one over the number
link |
01:17:20.680
of steps squared: you will be within a ball of that size after K steps.
link |
01:17:29.120
Gradient descent in particular has a slower rate.
link |
01:17:31.520
It's one over K. Okay.
link |
01:17:35.440
So you could ask, is gradient descent actually, even though we know it's a good algorithm,
link |
01:17:38.960
is it the best algorithm? In some sense the answer is no, but, well, it's not clear yet,
link |
01:17:43.760
because one over K squared is a lower bound.
link |
01:17:47.440
That's provably the best you can do.
link |
01:17:49.360
Gradient descent is one over K, but maybe there's something better.
link |
01:17:52.720
And so, I think as a surprise to most, Nesterov discovered a new algorithm that's got two
link |
01:17:59.240
pieces to it.
link |
01:18:00.240
It uses two gradients and puts those together in a certain kind of obscure way.
link |
01:18:06.600
And the thing doesn't even move downhill all the time.
link |
01:18:09.240
It sometimes goes back uphill.
link |
01:18:10.720
And if you're a physicist, that kind of makes some sense.
link |
01:18:13.160
You're building up some momentum and that is kind of the right intuition, but that that
link |
01:18:17.240
intuition is not enough to understand kind of how to do it and why it works.
link |
01:18:22.440
But it does.
link |
01:18:23.440
It achieves one over K squared and it has a mathematical structure and it's still kind
link |
01:18:27.480
of to this day, a lot of us are writing papers and trying to explore that and understand it.
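A sketch of the rates being discussed, on one illustrative smooth convex problem. This is the standard textbook form of the accelerated method (extrapolate, then take a gradient step), not code from the conversation; the particular quadratic and iteration count are arbitrary assumptions.

```python
# Minimize f(x, y) = 0.5 * (x^2 + 100 * y^2): smooth, convex, gradient
# Lipschitz constant L = 100.
L = 100.0
step = 1.0 / L

def grad(p):
    return (p[0], 100.0 * p[1])

def f(p):
    return 0.5 * (p[0] ** 2 + 100.0 * p[1] ** 2)

def gd(p, iters):
    # Plain gradient descent: O(1/k) guarantee on smooth convex problems.
    for _ in range(iters):
        g = grad(p)
        p = (p[0] - step * g[0], p[1] - step * g[1])
    return p

def nesterov(p, iters):
    # Accelerated method: extrapolate past the current point ("momentum"),
    # then take a gradient step from there; O(1/k^2) guarantee.
    prev, t = p, 1.0
    for _ in range(iters):
        t_next = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        look = tuple(p[i] + (t - 1.0) / t_next * (p[i] - prev[i]) for i in range(2))
        g = grad(look)
        prev, p = p, tuple(look[i] - step * g[i] for i in range(2))
        t = t_next
    return p

start = (10.0, 1.0)
gd_val, nes_val = f(gd(start, 200)), f(nesterov(start, 200))
print(gd_val, nes_val)  # the accelerated run ends far lower
```

The accelerated iterates do sometimes move uphill along the way, exactly as described, yet end much closer to the optimum.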
link |
01:18:32.520
So there are lots of cool ideas in optimization, but just kind of using gradients, I think,
link |
01:18:36.600
is number one that goes back, you know, 150 years.
link |
01:18:40.720
And then Nesterov, I think has made a major contribution with this idea.
link |
01:18:43.560
So like you said, gradients themselves are in some sense mysterious.
link |
01:18:47.320
Yeah, they're not trivial, not as much as coordinate descent, which is more
link |
01:18:53.240
of a trivial one.
link |
01:18:54.240
You just pick one of the coordinates.
link |
01:18:55.240
That's how we think, that's how our human minds think, and gradients are not that easy
link |
01:18:59.880
for our human mind to grapple with.
link |
01:19:03.280
An absurd question, but what is statistics?
link |
01:19:08.600
So here it's a little bit, it's somewhere between math and science and technology.
link |
01:19:12.120
It's somewhere in that convex hull.
link |
01:19:13.400
So it's a set of principles that allow you to make inferences that have got some reason
link |
01:19:17.680
to be believed and also principles that allow you to make decisions where you can have some
link |
01:19:22.600
reason to believe you're not going to make errors.
link |
01:19:25.080
So all of that requires some assumptions about what do you mean by an error?
link |
01:19:27.640
What do you mean by, you know, the probabilities and, but, you know, after you start making
link |
01:19:33.320
some assumptions, you're led to conclusions that, yes, I can guarantee that, you know,
link |
01:19:39.640
if you do this in this way, your probability of making an error will be small.
link |
01:19:43.600
Your probability of continuing to not make errors over time will be small.
link |
01:19:47.880
And the probability that you found something that's real will be high.
link |
01:19:52.400
So decision making is a big part of that?
link |
01:19:54.680
So decision making is a big part, yeah.
link |
01:19:55.760
So the original, so statistics, you know, the short history is that it kind
link |
01:20:00.120
of goes back sort of as a formal discipline, you know, 250 years or so.
link |
01:20:04.960
It was called inverse probability because around that era, probability was developed
link |
01:20:09.280
sort of especially to explain gambling situations.
link |
01:20:12.880
Of course.
link |
01:20:14.000
And interesting.
link |
01:20:15.480
So you would say, well, given the state of nature is this, there's a certain roulette
link |
01:20:18.880
board that has a certain mechanism in it.
link |
01:20:21.120
What kind of outcomes do I expect to see?
link |
01:20:23.680
And especially if I do things longer, longer amounts of time, what outcomes will I see
link |
01:20:27.440
And the physicists started to pay attention to this.
link |
01:20:30.640
And then people said, well, let's turn the problem around.
link |
01:20:33.520
What if I saw certain outcomes, could I infer what the underlying mechanism was?
link |
01:20:37.480
That's an inverse problem.
link |
01:20:38.480
And in fact, for quite a while, statistics was called inverse probability.
link |
01:20:41.960
That was the name of the field.
link |
01:20:44.040
And I believe that it was Laplace, who was working in Napoleon's government, who
link |
01:20:49.640
needed to do a census of France, to learn about the people there.
link |
01:20:54.240
So he went and gathered data and he analyzed that data to determine policy and said, let's
link |
01:21:01.240
call this field that does this kind of thing, statistics, because the word state is in there.
link |
01:21:07.360
And in French, that's état, but it's the study of data for the state.
link |
01:21:12.000
So anyway, that caught on and it's been called statistics ever since.
link |
01:21:18.600
But by the time it got formalized, it was sort of in the 30s.
link |
01:21:23.240
And around that time, there was game theory and decision theory developed nearby.
link |
01:21:28.560
People in that era didn't think of themselves as either computer science or statistics or
link |
01:21:31.600
control or econ.
link |
01:21:32.600
They were all, they were all the above.
link |
01:21:34.520
And so, you know, von Neumann is developing game theory, but also thinking of that as
link |
01:21:37.880
decision theory, Wald is an econometrician, developing decision theory, and then, you
link |
01:21:43.000
know, turning that into statistics.
link |
01:21:45.120
And so it's all about, here's not just data and you analyze it, here's a loss function,
link |
01:21:50.120
here's what you care about, here's the question you're trying to ask.
link |
01:21:53.080
Here is a probability model and here's the risk you will face if you make certain decisions.
link |
01:21:59.440
And to this day, in most advanced statistical curricula, you teach decision theory as the
link |
01:22:04.040
starting point, and then it branches out into the two branches of Bayesian and Frequentist.
link |
01:22:08.480
But that's, it's all about decisions.
link |
01:22:11.800
In statistics, what is the most beautiful, mysterious, maybe surprising idea that you've
link |
01:22:18.880
come across?
link |
01:22:20.520
Yeah, good question.
link |
01:22:25.280
I mean, there's a bunch of surprising ones, there's something that's way too technical
link |
01:22:28.960
for this thing, but something called James Stein estimation, which is kind of surprising
link |
01:22:33.440
and really takes time to wrap your head around.
link |
01:22:36.000
Can you try to maybe?
link |
01:22:37.000
I think I don't even want to try.
link |
01:22:39.160
Let me just say, a colleague, Steven Stigler at the University of Chicago, wrote a really beautiful
link |
01:22:44.200
paper on James-Stein estimation, which helps. It's viewed as a paradox; it kind of
link |
01:22:48.840
defeats the mind's attempts to understand it, but you can, and Steve has a nice perspective
link |
01:22:52.960
on that.
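For readers who want at least a taste of the paradox: estimating a vector of ten means from one noisy observation each, the obvious estimator (report what you saw) is beaten on average by shrinking everything toward zero by a data-driven factor. The dimension, true means, and trial count below are arbitrary illustrative choices.

```python
import random

random.seed(0)

d, trials = 10, 2000
theta = [1.0] * d                     # true means, unknown to the estimator

mse_mle = mse_js = 0.0
for _ in range(trials):
    x = [t + random.gauss(0.0, 1.0) for t in theta]   # one observation per mean
    # Maximum likelihood: just report the observation.
    mse_mle += sum((xi - t) ** 2 for xi, t in zip(x, theta))
    # James-Stein: shrink every coordinate by 1 - (d - 2) / ||x||^2.
    s = sum(xi * xi for xi in x)
    shrink = 1.0 - (d - 2) / s
    mse_js += sum((shrink * xi - t) ** 2 for xi, t in zip(x, theta))

print(mse_mle / trials, mse_js / trials)  # shrinkage wins for d >= 3
```

The strange part, and the reason it defeats intuition, is that the coordinates can be completely unrelated quantities and pooling them still helps.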
link |
01:22:56.560
So one of the troubles with statistics is that it's like in physics, or in quantum
link |
01:23:00.320
physics, you have multiple interpretations.
link |
01:23:02.520
There's a wave and particle duality in physics and you get used to that over time, but it's
link |
01:23:07.640
still kind of haunts you that you don't really, you know, quite understand the relationship.
link |
01:23:11.680
The electron as a wave and the electron as a particle. Well, the same thing happens here.
link |
01:23:16.760
There's Bayesian ways of thinking and Frequentist and they are different.
link |
01:23:20.440
They sometimes become sort of the same in practice, but they are philosophically
link |
01:23:24.560
different.
link |
01:23:25.560
And then in some practice, they are not the same at all.
link |
01:23:27.640
They give you rather different answers.
link |
01:23:30.480
And so it is very much like wave and particle duality and that is something that you have
link |
01:23:33.880
to kind of get used to in the field.
link |
01:23:35.840
Can you define Bayesian and Frequentist?
link |
01:23:37.440
Yeah.
link |
01:23:38.440
It comes out of decision theory. I have a video that people could see,
link |
01:23:41.080
it's called, Are You a Bayesian or a Frequentist, and it kind of helps try to make it really
link |
01:23:45.840
clear.
link |
01:23:46.840
It comes from decision theory.
link |
01:23:47.840
So, you know, decision theory, you're talking about loss functions, which are a function
link |
01:23:51.920
of data X and parameter theta.
link |
01:23:54.840
So it's a function of two arguments, okay?
link |
01:23:58.520
Neither one of those arguments is known.
link |
01:23:59.880
You don't know the data a priori.
link |
01:24:01.640
It's random, and the parameter's unknown, all right?
link |
01:24:04.080
So you have this function of two things you don't know and you're trying to say, I want
link |
01:24:07.120
that function to be small.
link |
01:24:08.200
I want small loss, right?
link |
01:24:10.880
Well, what are you going to do?
link |
01:24:13.440
So you sort of say, well, I'm going to average over these quantities or maximize over them
link |
01:24:17.280
or something so that, you know, I turn that uncertainty into something certain.
link |
01:24:23.120
So you could look at the first argument and average over it or you could look at the second
link |
01:24:26.200
argument and average over it.
link |
01:24:27.200
That's the Frequentist versus Bayesian distinction.
link |
01:24:28.200
The Frequentist says, I'm going to look at the X, the data, and I'm going to take that
link |
01:24:32.960
as random and I'm going to average over the distribution.
link |
01:24:35.320
So I take the expected loss over X, with theta held fixed, all right?
link |
01:24:40.680
That's called the risk.
link |
01:24:42.120
And so it's looking at all the data sets you could get, all right?
link |
01:24:46.440
And say how well will a certain procedure do under all those data sets?
link |
01:24:50.160
That's called a Frequentist guarantee, all right?
link |
01:24:52.520
So I think it is very appropriate when you're building a piece of software and you're shipping
link |
01:24:56.400
it out there and people are using all kinds of data sets.
link |
01:24:59.280
You want to have a stamp, a guarantee on it that as people run it on many, many data sets
link |
01:25:02.560
that you never even thought about that 95% of the time it will do the right thing.
link |
01:25:07.720
Perfectly reasonable.
link |
01:25:09.800
The Bayesian Perspective says, well, no, I'm going to look at the other argument of the
link |
01:25:13.120
loss function, the theta part, okay?
link |
01:25:15.240
That's unknown and I'm uncertain about it.
link |
01:25:17.600
So I could have my own personal probability for what it is, you know, how many tall people
link |
01:25:21.520
are there out there?
link |
01:25:22.520
I'm trying to infer the average height of the population.
link |
01:25:24.040
Well, I have an idea roughly what the height is.
link |
01:25:27.440
So I'm going to average over the theta.
link |
01:25:32.200
So now, again, one argument of the loss function is gone.
link |
01:25:37.240
Now it's a function of X.
link |
01:25:38.880
And that's what a Bayesian does: they say, well, let's just focus on the particular X
link |
01:25:41.920
we got, the data set we got, we condition on that.
link |
01:25:45.320
Condition on the X, I say something about my loss.
link |
01:25:48.240
That's a Bayesian approach to things.
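The two directions of averaging can be made concrete with a small simulation. This is only an illustrative sketch, not anything from the conversation: the normal-mean setup, the sample-mean estimator, and the conjugate prior are all assumptions chosen for simplicity.

```python
import numpy as np

# Illustrative setup (an assumption, not from the conversation): estimating a
# normal mean theta from n observations, with squared-error loss L(theta_hat, theta).
rng = np.random.default_rng(0)
n, sigma, theta_true = 20, 1.0, 2.0

def estimator(x):
    return x.mean()  # the procedure whose loss we evaluate

# Frequentist risk: hold theta fixed, average the loss over the distribution
# of data sets X. This is the "guarantee over all data sets" stamp.
X = rng.normal(theta_true, sigma, size=(100_000, n))
freq_risk = np.mean((X.mean(axis=1) - theta_true) ** 2)  # ~ sigma^2 / n = 0.05

# Bayesian expected loss: hold the one observed data set fixed, put a prior on
# theta, and average the loss over the posterior for theta given that data.
x_obs = rng.normal(theta_true, sigma, size=n)
mu0, tau0 = 0.0, 10.0  # assumed weak normal prior theta ~ N(mu0, tau0^2)
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)               # conjugate update
post_mean = post_var * (mu0 / tau0**2 + x_obs.sum() / sigma**2)
theta_draws = rng.normal(post_mean, np.sqrt(post_var), size=100_000)
bayes_loss = np.mean((estimator(x_obs) - theta_draws) ** 2)
```

Both numbers come out near sigma^2 / n here, but they answer different questions: one averages over data sets you might have seen, the other over parameter values you are uncertain about.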
link |
01:25:50.480
And the Bayesian will argue that it's not relevant to look at all the other data sets
link |
01:25:54.320
you could have gotten and average over them, the frequentist approach.
link |
01:25:58.800
It's really only the data sets you got, all right?
link |
01:26:02.080
And I do agree with that, especially in situations where you're working with a scientist, you
link |
01:26:06.000
can learn a lot about the domain and you're really only focused on certain kinds of data
link |
01:26:09.440
and you've gathered your data and you make inferences.
link |
01:26:13.320
I don't agree with it, though, in the sense that there are needs for frequentist
link |
01:26:17.600
guarantees.
link |
01:26:18.600
In the software people are using out there, you want to say something.
link |
01:26:20.880
So these two things have to fight each other a little bit, but they have to blend.
link |
01:26:24.880
So long story short, there's a set of ideas that are right in the middle.
link |
01:26:27.600
They're called empirical Bayes.
link |
01:26:29.880
And empirical Bayes sort of starts with the Bayesian framework.
link |
01:26:34.600
It's kind of arguably philosophically more, you know, reasonable and kosher. You write down
link |
01:26:40.960
a bunch of the math that kind of flows from that and then realize there's a bunch of things
link |
01:26:44.520
you don't know because it's the real world and you don't know everything.
link |
01:26:48.040
So you're uncertain about certain quantities.
link |
01:26:50.160
At that point, ask, is there a reasonable way to plug in an estimate for those things?
link |
01:26:54.520
Okay.
link |
01:26:55.520
And in some cases, there's quite a reasonable thing to do to plug in.
link |
01:26:59.880
There's a natural thing you can observe in the world that you can plug in and then do
link |
01:27:03.400
a little bit more mathematics and assure yourself it's really good.
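A classic instance of this plug-in idea is James-Stein shrinkage for many parallel normal means: the Bayesian shrinkage factor depends on an unknown prior variance, and empirical Bayes estimates it from the data itself. The sketch below is an invented example under stated assumptions, not a specific method from the conversation.

```python
import numpy as np

# Assumed model: k parallel means theta_i ~ N(0, A) with A unknown, and one
# noisy observation x_i ~ N(theta_i, 1) for each problem.
rng = np.random.default_rng(1)
k, A = 500, 2.0
theta = rng.normal(0.0, np.sqrt(A), size=k)
x = rng.normal(theta, 1.0)

# A full Bayesian would shrink each x_i by the factor A/(1+A), but A is unknown.
# Empirical Bayes plugs in an estimate: marginally x_i ~ N(0, 1+A), and
# E[(k-2)/sum(x_i^2)] = 1/(1+A) -- the James-Stein shrinkage factor.
shrink = 1.0 - (k - 2) / np.sum(x**2)
theta_eb = shrink * x

mse_mle = np.mean((x - theta) ** 2)        # no shrinkage: risk ~ 1
mse_eb = np.mean((theta_eb - theta) ** 2)  # shrunk: risk ~ A/(1+A)
```

The "little bit more mathematics" is exactly the reassurance step: one can prove the plug-in estimator dominates the raw observations whenever k > 2.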
link |
01:27:06.040
So based on math or based on human expertise, what are the good things?
link |
01:27:10.040
They're both going in.
link |
01:27:11.040
The Bayesian framework allows you to put a lot of human expertise in.
link |
01:27:15.560
But the math kind of guides you along that path and then kind of reassures at the end,
link |
01:27:19.000
you could put that stamp of approval under certain assumptions, this thing will work.
link |
01:27:22.760
So you asked the question, what was my favorite, you know, or what was the most surprising, nice idea.
link |
01:27:26.100
So one that is more accessible is something called false discovery rate, which is, you
link |
01:27:31.800
know, you're making not just one hypothesis test or making one decision, you're making
link |
01:27:35.520
a whole bag of them.
link |
01:27:37.440
And in that bag of decisions, you look at the ones where you made a discovery, you announced
link |
01:27:41.800
that something interesting had happened.
link |
01:27:43.360
All right.
link |
01:27:44.360
That's going to be some subset of your big bag.
link |
01:27:47.160
In the ones you made a discovery, which subset of those are bad?
link |
01:27:50.840
That are false, the false discoveries.
link |
01:27:53.280
You'd like the fraction of your false discoveries among your discoveries to be small.
link |
01:27:57.640
That's a different criterion than accuracy or precision or recall or sensitivity and
link |
01:28:02.120
specificity.
link |
01:28:03.120
It's a different quantity.
link |
01:28:04.920
Those latter ones, almost all of them, have more of a frequentist flavor.
link |
01:28:09.920
They say, given the truth is that the null hypothesis is true, here's what accuracy I
link |
01:28:14.760
would get or given that the alternative is true.
link |
01:28:17.200
Here's what I would get.
link |
01:28:18.200
So it's kind of going forward from the state of nature to the data.
link |
01:28:22.360
The Bayesian goes the other direction from the data back to the state of nature.
link |
01:28:25.880
And that's actually what false discovery rate is.
link |
01:28:28.160
It says, given you made a discovery, okay, that's conditioned on your data.
link |
01:28:32.640
What's the probability of the hypothesis? It's going the other direction.
link |
01:28:36.880
And so the classical frequentist looks at that and says, well, I can't know that, there's
link |
01:28:40.400
some priors needed in that.
link |
01:28:42.560
And the empirical Bayesian goes ahead and plows forward and starts writing down these
link |
01:28:46.920
formulas and realizes at some point, some of those things can actually be estimated
link |
01:28:50.760
in a reasonable way.
link |
01:28:52.560
And so it's a beautiful set of ideas.
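The standard procedure that controls the false discovery rate is the Benjamini-Hochberg step-up rule; here is a sketch under illustrative assumptions (the simulated mixture of nulls and signals is invented for the example).

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Step-up procedure: returns a boolean mask of rejections, controlling the
    expected false-discovery proportion at level q (for independent tests)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # Find the largest i with p_(i) <= (i/m) * q, then reject everything up to it.
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        cutoff = np.max(np.nonzero(below)[0])
        reject[order[: cutoff + 1]] = True
    return reject

# Simulated "bag of decisions": 900 true nulls (uniform p-values) mixed with
# 100 real signals whose p-values concentrate near zero (assumed Beta shape).
rng = np.random.default_rng(2)
p = np.concatenate([rng.uniform(size=900), rng.beta(0.1, 10.0, size=100)])
is_null = np.arange(1000) < 900

rejected = benjamini_hochberg(p, q=0.1)
# Realized fraction of false discoveries among the discoveries.
fdp = is_null[rejected].mean() if rejected.any() else 0.0
```

Note the conditioning direction matches the Bayesian reading above: the criterion is about the hypotheses given that a discovery was announced, not about error rates given the truth.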
link |
01:28:54.200
So this kind of line of argument has come out, it's certainly not mine, but it sort
link |
01:28:58.640
of came out from Robbins around 1960.
link |
01:29:02.320
Brad Efron has written beautifully about this in various papers and books.
link |
01:29:07.320
And the FDR is, you know, Benjamini in Israel; John Storey did this Bayesian interpretation
link |
01:29:14.120
and so on.
link |
01:29:15.120
So I've just absorbed these things over the years and find it a very healthy way to
link |
01:29:18.280
think about statistics.
link |
01:29:21.280
Let me ask you about intelligence to jump slightly back out into philosophy, perhaps.
link |
01:29:28.240
You said that maybe you can elaborate, but you said that defining just even the question
link |
01:29:33.960
of what is intelligence is a very difficult question.
link |
01:29:38.800
Is it a useful question?
link |
01:29:39.800
Do you think we'll one day understand the fundamentals of human intelligence and what
link |
01:29:43.920
it means, you know, have good benchmarks for general intelligence that we put before
link |
01:29:51.720
our machines?
link |
01:29:53.560
So I don't work on these topics so much.
link |
01:29:55.480
You're really asking a question for a psychologist, really, and I've studied some,
link |
01:29:59.680
but I don't consider myself at least an expert at this point.
link |
01:30:04.480
You know, a psychologist aims to understand human intelligence, right?
link |
01:30:07.720
And I think maybe the psychologists I know are fairly humble about this.
link |
01:30:11.200
They might try to understand how a baby understands, you know, whether something's a solid or liquid
link |
01:30:15.920
or whether something's hidden or not.
link |
01:30:18.760
And maybe how a child starts to learn the meaning of certain words, what's a verb, what's
link |
01:30:24.440
a noun, and also, you know, slowly but surely trying to figure out things.
link |
01:30:30.600
But humans' ability to take a really complicated environment, reason about it, abstract about
link |
01:30:35.640
it, find the right abstractions, communicate about it, interact and so on is just, you
link |
01:30:41.480
know, really staggeringly rich and complicated.
link |
01:30:46.920
And so, you know, I think in all humility, we don't think we're kind of aiming for that
link |
01:30:51.320
in the near future. Certainly a psychologist doing experiments with babies in the lab or
link |
01:30:55.000
with people talking has a much more limited aspiration.
link |
01:30:58.920
And you know, Kahneman and Tversky would look at our reasoning patterns and they're not deeply
link |
01:31:02.520
understanding how we do all our reasoning, but they're sort of saying, here's some oddities
link |
01:31:06.440
about the reasoning and some things you need to think about.
link |
01:31:09.480
But also, as I emphasize in some things I've been writing about, you know, AI, the revolution
link |
01:31:14.520
hasn't happened yet.
link |
01:31:15.520
Yeah.
link |
01:31:16.520
Great blog post.
link |
01:31:17.520
I've been emphasizing that, you know, if you step back and look at intelligent systems
link |
01:31:22.560
of any kind and whatever you mean by intelligence, it's not just the humans or the animals or
link |
01:31:26.640
you know, the plants or whatever, you know, so a market that brings goods into a city,
link |
01:31:31.640
you know, food to restaurants or something every day is a system.
link |
01:31:35.640
It's a decentralized set of decisions looking at it from far enough away, just like a collection
link |
01:31:39.480
of neurons.
link |
01:31:40.480
Everyone, every neuron is making its own little decisions, presumably in some way.
link |
01:31:44.560
And if you step back enough, every little part of an economic system is making its own
link |
01:31:47.920
little decisions.
link |
01:31:49.560
And just like with a brain, any individual neuron doesn't know what the overall
link |
01:31:53.000
goal is, right, but something happens at some aggregate level.
link |
01:31:57.120
Same thing with the economy.
link |
01:31:58.560
People eat in a city and it's robust.
link |
01:32:01.360
It works at all scales, small villages to big cities.
link |
01:32:04.840
It's been working for thousands of years, it works rain or shine, so it's adaptive.
link |
01:32:10.520
So all the kind of, you know, those are adjectives, one tends to apply to intelligent systems,
link |
01:32:14.800
robust, adaptive, you know, you don't need to keep adjusting it, it's self-healing,
link |
01:32:19.120
whatever, plus not perfect, you know, intelligences are never perfect and markets are not perfect.
link |
01:32:24.680
But I do not believe, in this era, that you can say, well, humans are smart, but, you
link |
01:32:28.160
know, markets are not. No, markets are.
link |
01:32:31.760
So they are intelligent.
link |
01:32:34.080
Now we humans didn't evolve to be markets.
link |
01:32:38.120
We've been participating in them, right, but we are not ourselves a market per se.
link |
01:32:43.280
The neurons could be viewed as the market.
link |
01:32:45.200
You can.
link |
01:32:46.200
There's economic, you know, neuroscience kind of perspective, that's interesting to pursue
link |
01:32:49.440
all that.
link |
01:32:50.440
The point, though, is that if you were to study humans and really be the world's best
link |
01:32:54.160
psychologist, studied for thousands of years and come up with the theory of human intelligence,
link |
01:32:57.440
you might have never discovered principles of markets, you know, supply-demand curves and
link |
01:33:02.040
you know, matching and auctions and all that.
link |
01:33:05.000
Those are real principles and they lead to a form of intelligence that's maybe not human
link |
01:33:08.720
intelligence.
link |
01:33:09.720
It's arguably another kind of intelligence.
link |
01:33:11.440
There probably are third kinds of intelligence or fourth that none of us are really thinking
link |
01:33:14.880
too much about right now.
link |
01:33:16.480
So all of those are relevant to computer systems in the future, certainly
link |
01:33:20.840
the market one is relevant right now, whereas understanding human intelligence is not so
link |
01:33:25.640
clear that it's relevant right now, probably not.
link |
01:33:29.320
So if you want general intelligence, whatever one means by that or, you know, understanding
link |
01:33:33.160
intelligence in a deep sense and all that, it definitely has to be not just human
link |
01:33:37.000
intelligence.
link |
01:33:38.000
It's got to be this broader thing.
link |
01:33:39.280
And that's not a mystery.
link |
01:33:40.480
Markets are intelligent.
link |
01:33:41.480
So you know, it's definitely not just in a philosophical sense to say, we've got to move beyond
link |
01:33:45.440
human intelligence.
link |
01:33:46.440
That sounds ridiculous.
link |
01:33:47.440
Yeah.
link |
01:33:48.440
But it's not.
link |
01:33:49.440
And in the blog, you define different kinds, like intelligent infrastructure,
link |
01:33:52.240
II, which I really like.
link |
01:33:54.320
Some of the concepts you've just been describing: do we see ourselves, see Earth, human
link |
01:34:00.440
civilization, as a single organism?
link |
01:34:02.680
Do you think the intelligence of that organism, when you think from the perspective of markets
link |
01:34:07.000
and intelligence infrastructure is increasing?
link |
01:34:10.880
Is it increasing linearly?
link |
01:34:12.360
Is it increasing exponentially?
link |
01:34:14.240
What do you think the future of that intelligence?
link |
01:34:15.960
Yeah, I don't know.
link |
01:34:16.960
I don't tend to think, I don't tend to answer questions like that because, you know, that's
link |
01:35:20.600
science fiction.
link |
01:34:21.600
I was hoping to catch you off guard.
link |
01:34:24.040
Well, again, because you said it's so far in the future, it's fun to ask and you'll
link |
01:34:30.360
probably, you know, like you said, predicting the future is really nearly impossible.
link |
01:34:36.440
But say, as an axiom, one day we create a human-level, a superhuman-level intelligence, not
link |
01:34:43.680
the scale of markets, but the scale of an individual.
link |
01:34:47.520
What do you think is, what do you think it would take to do that?
link |
01:34:51.760
Or maybe to ask another question is how would that system be different than the biological
link |
01:34:58.880
human beings that we see around us today?
link |
01:35:01.480
Is it possible to say anything interesting to that question or is it just a stupid question?
link |
01:35:05.720
It's not a stupid question, but it's science fiction.
link |
01:35:09.120
And so I'm totally happy to read science fiction and think about it from time to time
link |
01:35:12.120
in my own life.
link |
01:35:13.400
I loved that there was this like brain in a vat kind of, you know, little thing that
link |
01:35:17.080
people were talking about when I was a student.
link |
01:35:18.920
I remember, you know, imagine that, you know, between your brain and your body, there's,
link |
01:35:24.400
you know, there's a bunch of wires, right?
link |
01:35:26.960
And suppose that every one of them was replaced with a literal wire and then suppose that
link |
01:35:32.000
wire was turned into actually a little wireless, you know, there's a receiver and sender.
link |
01:35:36.000
So the brain has got all the senders and receivers, you know, on all of its exiting, you know,
link |
01:35:41.720
axons and all the dendrites down in the body are replaced with senders and receivers.
link |
01:35:45.880
Now you could move the body off somewhere and put the brain in a vat, right?
link |
01:35:50.040
And then you could do things like start killing off those senders receivers one by one.
link |
01:35:54.560
And after you've killed off all of them, where is that person?
link |
01:35:56.920
You know, they thought they were out in the body walking around the world and they moved
link |
01:35:59.640
on.
link |
01:36:00.640
So those are science fiction things.
link |
01:36:01.640
Those are fun to think about.
link |
01:36:02.640
It's just intriguing: what is thought, where is it, and all that.
link |
01:36:05.720
And I think every 18-year-old needs to take philosophy classes and think about these things.
link |
01:36:10.640
And I think that everyone should think about what could happen in society that's kind of
link |
01:36:13.400
bad and all that.
link |
01:36:14.400
But I really don't think that's the right thing for most of us that are my age group
link |
01:36:17.560
to be doing and thinking about.
link |
01:36:19.480
I really think that we have so many more present, you know, first-order challenges and dangers and
link |
01:36:26.720
real things to build and all that, such that, you know, spending too much time on science
link |
01:36:32.320
fiction, at least in public fora like this, I think is, is not what we should be doing.
link |
01:36:36.080
Maybe over beers in private.
link |
01:36:37.600
That's right.
link |
01:36:38.600
I'm well, well, I'm not going to broadcast where I have beers because this is going to
link |
01:36:43.080
go on Facebook.
link |
01:36:44.080
And I know a lot of people are showing up there, but yeah, I love Facebook,
link |
01:36:49.440
Twitter, Amazon, YouTube, I have, I'm optimistic and hopeful, but maybe, maybe I don't have
link |
01:36:56.720
grounds for such optimism and hope.
link |
01:37:00.160
Let me ask you, you've mentored some of the brightest sort of some of the seminal figures
link |
01:37:07.160
in the field.
link |
01:37:08.360
Can you give advice to people who are undergraduates today?
link |
01:37:14.120
What advice would you give them, you know, on their journey if they're interested in
link |
01:37:17.640
machine learning and AI and, and the ideas of markets from economics and psychology and
link |
01:37:23.720
all the kinds of things that you're exploring?
link |
01:37:25.720
What steps should they take on that journey?
link |
01:37:27.960
Well, yeah, first of all, the door is open and second, it's a journey.
link |
01:37:30.360
I like your language there.
link |
01:37:33.880
It is not that you're so brilliant and you have great brilliant ideas and therefore that's,
link |
01:37:37.680
that's just, you know, that's how you have success or that's how you enter into the field.
link |
01:37:42.440
It's that you apprentice yourself, you, you spend a lot of time, you work on hard things,
link |
01:37:46.960
you try and pull back and you be as broad as you can.
link |
01:37:51.560
You talk to lots of people and it's like entering in any kind of a creative community.
link |
01:37:56.960
There's years that are needed and human connections are critical to it.
link |
01:38:01.560
So, you know, I think about, you know, being a musician or being an artist or something,
link |
01:38:06.040
you don't just, you know, immediately from day one, you know, you're a genius and therefore
link |
01:38:10.600
you do it.
link |
01:38:11.600
So, you know, practice really, really hard on basics and you be humble about where you
link |
01:38:19.040
are and then you realize you'll never be an expert on everything.
link |
01:38:22.160
So, you kind of pick and then there's a lot of randomness and a lot of kind of luck.
link |
01:38:27.680
But luck just kind of picks out which branch of the tree you go down, but you'll go down
link |
01:38:31.600
some branch.
link |
01:38:33.960
So yeah, it's a community.
link |
01:38:35.480
So graduate school, I still think, is one of the wonderful phenomena that we
link |
01:38:38.800
have in our world.
link |
01:38:40.800
It's very much about apprenticeship with an advisor.
link |
01:38:43.160
It's very much about a group of people you belong to.
link |
01:38:45.800
It's a four or five year process.
link |
01:38:47.000
So it's plenty of time to start from kind of nothing to come up to something, you know,
link |
01:38:51.760
more expertise and then start to have your own creativity start to flower or even surprise
link |
01:38:55.220
your own self.
link |
01:38:58.240
And it's a very cooperative endeavor.
link |
01:38:59.760
I think a lot of people think of science as highly competitive and I think in some
link |
01:39:05.520
other fields it might be more so.
link |
01:39:08.120
Here it's way more cooperative than you might imagine.
link |
01:39:11.880
And people are always teaching each other something and people are always more than
link |
01:39:14.640
happy to be clear about that.
link |
01:39:16.800
So I feel I'm an expert on certain kinds of things, but I'm very much not expert on
link |
01:39:20.600
lots of other things, and a lot of them are relevant, and a lot of them I should know,
link |
01:39:24.200
but in some sense, you know, you don't.
link |
01:39:26.320
So I'm always willing to reveal my ignorance to people around me so they can teach me things.
link |
01:39:32.120
And I think a lot of us feel that way about our field.
link |
01:39:34.240
So it's very cooperative.
link |
01:39:35.400
I might add it's also very international because it's so cooperative.
link |
01:39:39.120
We see no barriers, and so the nationalism that you see, especially in the current era
link |
01:39:44.080
and everything is just at odds with the way that most of us think about what we're doing
link |
01:39:47.480
here where this is a human endeavor and we cooperate and are very much trying to do it
link |
01:39:53.080
together for the, you know, the benefit of everybody.
link |
01:39:56.560
So last question, where and how and why did you learn French and which language is more
link |
01:40:02.840
beautiful, English or French?
link |
01:40:05.680
Great question.
link |
01:40:06.680
So first of all, I think Italian is actually more beautiful than French and English.
link |
01:40:10.120
And I also speak that.
link |
01:40:11.120
So I'm married to an Italian and I have kids and we speak Italian.
link |
01:40:16.000
Anyway, no, all kidding aside, every language allows you to express things a bit differently.
link |
01:40:23.200
And it is one of the great fun things to do in life is to explore those things.
link |
01:40:26.840
So in fact, when kids or, you know, teens or college kids ask me, what should I study?
link |
01:40:33.480
I say, well, you know, do what your heart, where your heart is, certainly do a lot of
link |
01:40:36.480
math, math is good for everybody, but do some poetry and do some history and do some language
link |
01:40:41.120
too.
link |
01:40:42.120
You know, throughout your life, you'll want to be a thinking person, you'll want to have
link |
01:40:45.360
done that.
link |
01:40:47.560
For me, yeah, French, I learned when I was, I'd say a late teen, I was living in the middle
link |
01:40:54.680
of the country in Kansas and not much was going on in Kansas with all due respect to Kansas.
link |
01:40:59.880
But, and so my parents happened to have some French books on the shelf and just in my boredom,
link |
01:41:04.400
I pulled them down and I found this is fun.
link |
01:41:07.160
And I kind of learned the language by reading.
link |
01:41:09.240
And when I first heard it spoken, I had no idea what was being spoken, but I realized
link |
01:41:13.600
I somehow knew it from some previous life.
link |
01:41:15.600
And so I made the connection.
link |
01:41:18.560
But then, you know, I traveled and just I love to go beyond my own barriers and my own
link |
01:41:23.160
comfort or whatever, and I found myself in, you know, on trains in France next to say
link |
01:41:27.800
older people who had, you know, lived a whole life of their own and the ability to communicate
link |
01:41:32.280
with them was, you know, special, and the ability to also see myself in other people's
link |
01:41:38.560
shoes and have empathy and kind of work on that language as part of that.
link |
01:41:43.120
So, so after that kind of experience and also embedding myself in French culture, which
link |
01:41:49.160
is, you know, quite, quite amazing, you know, languages are rich, not just because there
link |
01:41:52.840
is something inherently beautiful about it, but it's all the creativity that went into
link |
01:41:55.720
it.
link |
01:41:56.720
So I learned a lot of songs, read poems, read books.
link |
01:41:59.960
And then I was here actually at MIT where we're doing the podcast today and a young
link |
01:42:05.360
professor, you know, not yet married and, you know, not having a lot of friends in the
link |
01:42:11.600
area.
link |
01:42:12.600
So I just didn't have much to do; I was kind of a bored person.
link |
01:42:14.000
I said, I heard a lot of Italians around.
link |
01:42:16.040
There happened to be a lot of Italians at MIT, a lot of Italian professors for some
link |
01:42:18.960
reason.
link |
01:42:19.960
Yeah.
link |
01:42:20.960
And so I was kind of vaguely understanding what they were talking about.
link |
01:42:21.960
I said, well, I should learn this language too.
link |
01:42:23.680
So I did.
link |
01:42:24.680
And then later met my spouse and, you know, Italian became a more important part of my
link |
01:42:29.400
life.
link |
01:42:30.400
But, but I go to China a lot these days.
link |
01:42:32.200
I go to Asia, I go to Europe and every time I go, I kind of am amazed by the richness
link |
01:42:38.200
of human experience and the people don't have any idea if you haven't traveled kind
link |
01:42:42.760
of how amazingly rich it is, and I love the diversity.
link |
01:42:46.880
It's not just a buzzword to me.
link |
01:42:48.080
It really means something.
link |
01:42:49.080
I love to, you know, embed myself in other people's experiences.
link |
01:42:53.200
And so, yeah, learning language is a big part of that.
link |
01:42:56.400
I think I've said in some interview at some point that if I had, you know, millions of
link |
01:42:59.840
dollars and infinite time whatever, what would you really work on if you really wanted to
link |
01:43:03.240
do AI?
link |
01:43:04.240
And for me, that is natural language and, and really done right, you know, deep understanding
link |
01:43:08.000
of language.
link |
01:43:09.000
Um, that's to me an amazingly interesting scientific challenge and one we're very far
link |
01:43:14.440
away on, but good natural language people are kind of really
link |
01:43:18.480
invested in that.
link |
01:43:19.480
I think a lot of them see that's where the core of AI is that if you understand that
link |
01:43:22.480
you really help human communication, you understand something about the human mind,
link |
01:43:26.080
the semantics that come out of the human mind. And I agree, I think that will take such
link |
01:43:30.320
a long time.
link |
01:43:31.320
So I didn't do that in my career just because I kind of, I was behind in the early days.
link |
01:43:34.720
I didn't kind of know enough of that stuff.
link |
01:43:36.480
I was at MIT.
link |
01:43:37.480
I didn't learn much language, uh, and it was too late at some point to kind of spend a
link |
01:43:41.880
whole career doing that. But I admire that field, and so, in my little way, by learning
link |
01:43:47.640
language, you know, that part of my brain has been trained
link |
01:43:52.560
up.
link |
01:43:53.560
Yeah.
link |
01:43:54.560
And it was right.
link |
01:43:55.560
You truly are the Miles Davis of machine learning.
link |
01:43:57.440
I don't think there's a better place to end it.
link |
01:43:59.480
Mike, it was a huge honor talking to you today.
link |
01:44:01.400
Merci beaucoup.
link |
01:44:02.400
All right.
link |
01:44:03.400
It's been my pleasure.
link |
01:44:04.400
Thank you.
link |
01:44:05.400
Thanks for listening to this conversation with Michael I. Jordan, and thank you to
link |
01:44:09.240
our presenting sponsor, Cash App. Download it, use code LexPodcast.
link |
01:44:14.520
You get $10, and $10 will go to FIRST, an organization that inspires and educates young minds to
link |
01:44:20.480
become science and technology innovators of tomorrow.
link |
01:44:23.920
If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple podcast, support
link |
01:44:28.880
on Patreon, or simply connect with me on Twitter at Lex Friedman.
link |
01:44:34.720
And now, let me leave you with some words of wisdom from Michael I. Jordan, from his blog
link |
01:44:39.360
post titled Artificial Intelligence: The Revolution Hasn't Happened Yet, calling for broadening
link |
01:44:45.560
the scope of the AI field.
link |
01:44:48.560
We should embrace the fact that what we are witnessing is the creation of a new branch
link |
01:44:52.860
of engineering.
link |
01:44:54.360
The term engineering is often invoked in a narrow sense in academia and beyond with
link |
01:44:59.800
overtones of cold, affectless machinery and negative connotations of loss of control
link |
01:45:05.560
by humans, but an engineering discipline can be what we want it to be in the current
link |
01:45:11.440
era with a real opportunity to conceive of something historically new, a human centric
link |
01:45:17.520
engineering discipline, I will resist giving this emerging discipline a name, but if the
link |
01:45:22.800
acronym AI continues to be used, let's be aware of the very real limitations of this
link |
01:45:28.280
placeholder, let's broaden our scope, tone down the hype and recognize the serious challenges
link |
01:45:35.120
ahead.
link |
01:45:37.320
Thank you for listening and hope to see you next time.