
Scott Aaronson: Computational Complexity and Consciousness | Lex Fridman Podcast #130



link |
00:00:00.000
The following is a conversation with Scott Aaronson, his second time on the podcast.
link |
00:00:04.960
He is a professor at UT Austin, director of the Quantum Information Center, and previously a
link |
00:00:11.280
professor at MIT. Last time we talked about quantum computing. This time we talk about
link |
00:00:18.720
computational complexity, consciousness, and theories of everything. I'm recording this intro,
link |
00:00:25.360
as you may be able to tell, in a very strange room in the middle of the night. I'm not really sure
link |
00:00:34.640
how I got here or how I'm going to get out, but Hunter S. Thompson's saying, I think, applies
link |
00:00:42.480
to today and the last few days and actually the last couple of weeks. Life should not be a journey
link |
00:00:50.240
to the grave with the intention of arriving safely in a pretty and well-preserved body,
link |
00:00:55.280
but rather to skid in broadside in a cloud of smoke, thoroughly used up, totally worn out,
link |
00:01:03.360
and loudly proclaiming, wow, what a ride. So I figured whatever I'm up to here,
link |
00:01:11.360
and yes, lots of wine is involved. I'm going to have to improvise, hence this recording.
link |
00:01:17.280
Okay, quick mention of each sponsor, followed by some thoughts related to the episode.
link |
00:01:23.200
First sponsor is SimpliSafe, a home security company I use to monitor and protect my apartment,
link |
00:01:29.360
though, of course, I'm always prepared with a fallback plan, as a man in this world must always be.
link |
00:01:38.800
Second sponsor is Eight Sleep, a mattress that cools itself, measures heart rate variability,
link |
00:01:46.080
has an app, and has given me yet another reason to look forward to sleep, including
link |
00:01:52.240
the all-important power nap. Third sponsor is ExpressVPN, the VPN I've used for many years
link |
00:01:59.200
to protect my privacy on the internet. Finally, the fourth sponsor is BetterHelp, online therapy
link |
00:02:06.880
when you want to face your demons with a licensed professional, not just by doing David Goggins
link |
00:02:12.640
like physical challenges like I seem to do on occasion. Please check out these sponsors in
link |
00:02:18.160
the description to get a discount and to support the podcast. As a side note, let me say that this
link |
00:02:24.640
is the second time I've recorded a conversation outdoors. The first one was with Stephen Wolfram,
link |
00:02:30.960
when it was actually sunny out. In this case, it was raining, which is why I found a covered
link |
00:02:36.160
outdoor patio. But I learned a valuable lesson, which is that raindrops can be quite loud on the
link |
00:02:43.360
hard metal surface of a patio cover. I did my best with the audio. I hope it still sounds okay to you.
link |
00:02:51.280
I'm learning, always improving. In fact, as Scott says, if you always win, then you're
link |
00:02:57.920
probably doing something wrong. To be honest, I get pretty upset with myself when I fail,
link |
00:03:02.400
small or big. But I've learned that this feeling is priceless. It can be fuel when
link |
00:03:09.840
channeled into concrete plans of how to improve. So if you enjoy this thing, subscribe on YouTube,
link |
00:03:18.080
review it with five stars on Apple Podcasts, follow on Spotify, support on Patreon, or connect with me on Twitter
link |
00:03:24.480
at Lex Fridman. And now, here's my conversation with Scott Aaronson. Let's start with the most absurd
link |
00:03:32.560
question, but I've read you write some fascinating stuff about it. So let's go there. Are we living
link |
00:03:37.760
in a simulation? What difference does it make, Lex? I mean, I'm serious. What difference? Because
link |
00:03:44.080
if we are living in a simulation, it raises the question, how real does something have to be
link |
00:03:50.240
in simulation for it to be sufficiently immersive for us humans? But I mean, even in principle,
link |
00:03:55.920
how could we ever know if we were in one? A perfect simulation by definition is something
link |
00:04:01.200
that's indistinguishable from the real thing. Well, we didn't say anything about perfect.
link |
00:04:04.960
No, no, that's right. Well, if it was an imperfect simulation, if we could hack it,
link |
00:04:10.560
find a bug in it, then that would be one thing. If this was like The Matrix and there was a way
link |
00:04:16.240
for me to do flying kung fu moves or something by hacking the simulation, well, then we would
link |
00:04:22.640
have to cross that bridge when we came to it, wouldn't we? I mean, at that point, it's hard
link |
00:04:29.680
to see the difference between that and just what people would ordinarily refer to as a world with
link |
00:04:35.680
miracles. What about from a different perspective, thinking about the universe as a computation,
link |
00:04:41.840
like a program running on a computer? Is that kind of a neighboring concept? It is. It is an
link |
00:04:46.960
interesting and reasonably well-defined question to ask: is the world computable? Does the world
link |
00:04:53.680
satisfy what we would call in CS the Church-Turing thesis? That is, could we take any physical
link |
00:05:01.360
system and simulate it to any desired precision by a Turing machine, given the appropriate input
link |
00:05:09.680
data? And so far, I think the indications are pretty strong that our world does seem to satisfy
link |
00:05:16.800
the Church-Turing thesis. At least if it doesn't, then we haven't yet discovered why not. But now,
link |
00:05:23.920
does that mean that our universe is a simulation? Well, that word seems to suggest that there is
link |
00:05:30.880
some other larger universe in which it is running. And the problem there is that if the simulation is
link |
00:05:37.760
perfect, then we're never going to be able to get any direct evidence about that other universe.
link |
00:05:44.240
We will only be able to see the effects of the computation that is running in this universe.
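To make the Church-Turing claim concrete, here is a toy sketch of my own (not anything from the conversation): digitally simulating a physical system, a pendulum, "to any desired precision" by shrinking the time step until successive runs agree to the requested tolerance.

```python
# Toy sketch (not from the conversation): the Church-Turing claim in miniature.
# A pendulum, theta'' = -(g/L) sin(theta), integrated with classical RK4; we
# halve the step size until two successive runs agree to the tolerance asked
# for, which is what "simulate to any desired precision" cashes out to.
import math

def pendulum_angle(theta0, omega0, t_end, dt, g=9.81, L=1.0):
    def f(th, om):
        return om, -(g / L) * math.sin(th)
    th, om, t = theta0, omega0, 0.0
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        k1 = f(th, om)
        k2 = f(th + 0.5 * h * k1[0], om + 0.5 * h * k1[1])
        k3 = f(th + 0.5 * h * k2[0], om + 0.5 * h * k2[1])
        k4 = f(th + h * k3[0], om + h * k3[1])
        th += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        om += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        t += h
    return th

# "Any desired precision": keep halving dt until the answer stabilizes.
dt, prev = 0.1, None
while True:
    cur = pendulum_angle(1.0, 0.0, 10.0, dt)
    if prev is not None and abs(cur - prev) < 1e-9:
        break
    prev, dt = cur, dt / 2.0
print(f"theta(10 s) = {cur:.9f} rad, converged at dt = {dt}")
```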
link |
00:05:50.240
Well, let's imagine an analogy. Let's imagine a PC, a personal computer, a computer.
link |
00:05:57.680
Is it possible with the advent of artificial intelligence for the computer to look outside
link |
00:06:04.960
of itself to understand its creator? Is that a ridiculous analogy?
link |
00:06:12.320
Well, with the computers that we actually have, first of all, we all know that humans have done
link |
00:06:20.640
an imperfect job of enforcing the abstraction boundaries of computers. You may try to confine
link |
00:06:28.720
some program to a playpen, but as soon as there's one memory allocation error in the C program,
link |
00:06:37.280
then the program has gotten out of that playpen and it can do whatever it wants. This is how most
link |
00:06:42.640
hacks work, viruses and worms and exploits. And you would have to imagine that an AI would be able
link |
00:06:50.800
to discover something like that. Now, of course, if we could actually discover some exploit of
link |
00:06:57.600
reality itself, then in some sense, we wouldn't have to philosophize about this. This would no
link |
00:07:07.120
longer be a metaphysical conversation. But the question is, what would that hack look like?
link |
00:07:14.960
Yeah, well, I have no idea. I mean, Peter Shor, the very famous person in quantum computing,
link |
00:07:23.040
of course, has joked that maybe the reason why we haven't yet integrated general relativity and
link |
00:07:30.160
quantum mechanics is that the part of the universe that depends on both of them was actually left
link |
00:07:36.240
unspecified. And if we ever tried to do an experiment involving the singularity of a black
link |
00:07:42.640
hole or something like that, then the universe would just generate an overflow error or something.
link |
00:07:48.240
A blue screen of death. Yeah, we would just crash the universe. Now, the universe seemed to hold
link |
00:07:56.480
up pretty well for 14 billion years. So my Occam's razor kind of guess has to be that it will continue
link |
00:08:08.560
to hold up that the fact that we don't know the laws of physics governing some phenomenon
link |
00:08:14.480
is not a strong sign that probing that phenomenon is going to crash the universe.
link |
00:08:20.560
But of course, I could be wrong. But do you think on the physics side of things,
link |
00:08:25.600
there have recently been a few folks, Eric Weinstein and Stephen Wolfram, that came out with a
link |
00:08:32.400
theory of everything. I think there's a history of physicists dreaming and working on the unification
link |
00:08:38.880
of all the laws of physics. Do you think it's possible that once we understand more physics,
link |
00:08:45.440
not necessarily the unification of the laws, but just understand physics more deeply at the
link |
00:08:49.680
fundamental level, we'll be able to start, and part of this is humorous, but, looking to see if there's
link |
00:08:58.400
any bugs in the universe that could be exploited for traveling at not just the speed of light, but
link |
00:09:06.080
just traveling faster than our current spaceships can travel, all that kind of stuff.
link |
00:09:11.120
Well, I mean, to travel faster than our current spaceships could travel, you wouldn't need to
link |
00:09:15.920
find any bug in the universe. The known laws of physics let us go much faster up to the speed of
link |
00:09:22.240
light. And when people want to go faster than the speed of light, well, we actually know something
link |
00:09:28.240
about what that would entail, namely that according to relativity, that seems to entail
link |
00:09:34.400
communication backwards in time. So then you have to worry about closed timelike curves and all
link |
00:09:40.560
of that stuff. So in some sense, we know the price that you have to pay for these things.
link |
00:09:48.880
That's right. That's right. We can't say that they're impossible, but we know that a lot
link |
00:09:55.440
else in physics breaks. So now regarding Eric Weinstein and Stephen Wolfram, I wouldn't say
link |
00:10:03.840
that either of them has a theory of everything. I would say that they have ideas that they hope
link |
00:10:09.680
could someday lead to a theory of everything. Is that a worthy pursuit?
link |
00:10:13.760
Well, I mean, certainly, let's say by theory of everything, we don't literally mean a theory of
link |
00:10:19.920
cats and of baseball, but we just mean it in the more limited sense of everything,
link |
00:10:26.240
a fundamental theory of physics, of all of the fundamental interactions of physics.
link |
00:10:32.880
Of course, such a theory, even after we had it, would leave the entire question of all
link |
00:10:39.680
the emergent behavior to be explored. So it's only everything for a specific definition of
link |
00:10:47.680
everything. But in that sense, I would say, of course, that's worth pursuing. I mean,
link |
00:10:52.000
that is the entire program of fundamental physics. All of my friends who do quantum gravity,
link |
00:10:58.160
who do string theory, who do anything like that, that is what's motivating them.
link |
00:11:02.880
Yeah, it's funny, though, but Eric Weinstein talks about this. I don't know much about the
link |
00:11:07.920
physics world, but I know about the AI world. It is a little bit taboo to talk about AGI,
link |
00:11:16.320
for example, on the AI side. So really, to talk about the big dream of the community, I would say,
link |
00:11:24.880
because it seems so far away, it's almost taboo to bring it up, because it's seen as the kind of
link |
00:11:31.040
people who dream about creating a truly superhuman level of intelligence, and that's really far out there,
link |
00:11:37.200
because we're not even close to that. And it feels like the same thing is true for
link |
00:11:41.360
the physics community. I mean, Stephen Hawking certainly talked constantly about theory of
link |
00:11:47.840
everything. People use those terms who were some of the most respected people in the whole world
link |
00:11:57.680
of physics. But I think that the distinction that I would make is that people might react badly if
link |
00:12:04.640
you use the term in a way that suggests that you, thinking about it for five minutes, have come up
link |
00:12:11.440
with this major new insight about it. It's difficult. Stephen Hawking is not a great example,
link |
00:12:18.640
because I think you can do whatever the heck you want when you get to that level. And I certainly
link |
00:12:25.600
see senior faculty... at that point, one of the nice things about getting older is you
link |
00:12:33.440
stop giving a damn. But the community as a whole, they tend to roll their eyes very quickly at
link |
00:12:39.120
stuff that's outside the quote unquote mainstream. Well, let me put it this way. If you asked Ed
link |
00:12:44.960
Witten, let's say, who is, you might consider, a leader of the string community and thus very,
link |
00:12:50.560
very mainstream in a certain sense, but he would have no hesitation in saying, of course,
link |
00:12:56.960
they're looking for a unified description of nature, of general relativity, of quantum mechanics,
link |
00:13:06.640
of all the fundamental interactions of nature, right? Now, whether people would call that a
link |
00:13:12.880
theory of everything, whether they would use that term, that might vary. Lenny Susskind would
link |
00:13:18.480
definitely have no problem telling you that that's what we want, right?
link |
00:13:22.400
For me, who loves human beings and psychology, it's kind of ridiculous to say a theory that
link |
00:13:30.400
unifies the laws of physics gets you to understand everything. I would say you're not even close to
link |
00:13:35.680
understanding everything. Yeah, right. Well, yeah, I mean, the word everything is a little
link |
00:13:40.320
ambiguous here, right? Because, you know, and then people will get into debates about, you know,
link |
00:13:45.040
reductionism versus emergentism and blah, blah, blah. And so in not wanting to say theory of
link |
00:13:51.920
everything, people might just be trying to short circuit that debate and say, you know, look,
link |
00:13:56.720
you know, yes, we want a fundamental theory of, you know, the particles and interactions of nature.
link |
00:14:02.560
Let me bring up the next topic that people don't want to mention, although they're getting more
link |
00:14:05.920
comfortable with it, is consciousness. You mentioned that you have a talk on consciousness
link |
00:14:10.240
that I watched five minutes of, but the internet connection was really bad. Was this
link |
00:14:14.480
my talk about, you know, refuting the integrated information theory? Yes, it might have been.
link |
00:14:18.880
Which was this particular account of consciousness that, yeah, I think one can just show it doesn't
link |
00:14:23.040
work. So let me... Much harder to say what does work. What does work, yeah. Yeah. Let me ask,
link |
00:14:27.760
maybe it'd be nice to comment on... You talk about also, like, the semi-hard problem of consciousness,
link |
00:14:34.720
or, like, almost-hard problem, or kind-of-hard... The pretty hard problem, I think I call it.
link |
00:14:38.640
So maybe can you talk about that, their idea of the approach to modeling consciousness and why
link |
00:14:47.440
you don't find it convincing? What is it, first of all? Okay, well, so what I called the pretty hard
link |
00:14:53.680
problem of consciousness, this is my term, although many other people have said something
link |
00:14:58.160
equivalent to this, okay. But it's just, you know, the problem of, you know, giving an account of
link |
00:15:07.360
just which physical systems are conscious and which are not. Or, you know, if there are degrees of
link |
00:15:12.720
consciousness, then quantifying how conscious a given system is. Oh, awesome. So that's the
link |
00:15:18.560
pretty hard... Yeah, that's what I mean. That's it. I'm adopting it. I love it. That's a good
link |
00:15:23.040
ring to it. And so, you know, the infamous hard problem of consciousness is to explain how something
link |
00:15:29.440
like consciousness could arise at all, you know, in a material universe, right? Or, you know,
link |
00:15:34.560
why does it ever feel like anything to experience anything, right? And, you know, so I'm trying
link |
00:15:40.720
to distinguish from that problem, right? And say, you know, okay, I would merely settle for an account
link |
00:15:47.520
that could say, you know, is a fetus conscious, you know, if so, which trimester, you know, is a
link |
00:15:54.880
dog conscious, you know, what about a frog, right? Or even as a precondition, you take that both
link |
00:16:00.560
these things are conscious, tell me which is more conscious. Yeah, for example, yes. Yeah, yeah.
link |
00:16:06.160
I mean, if consciousness is some multidimensional vector, well, just tell me in which respects
link |
00:16:11.360
these things are conscious and in which respect they aren't, right? And, you know, and have some
link |
00:16:16.320
principled way to do it where you're not, you know, carving out exceptions for things that you like
link |
00:16:21.600
or don't like, but could somehow take a description of an arbitrary physical system, and then just
link |
00:16:27.920
based on the physical properties of that system, or the informational properties or how it's connected
link |
00:16:35.360
or something like that, just in principle, calculate, you know, its degree of consciousness,
link |
00:16:40.960
right? I mean, this would be the kind of thing that we would need, you know, if we wanted to
link |
00:16:46.240
address questions like, you know, what does it take for a machine to be conscious, right,
link |
00:16:51.360
or when should we regard AIs as being conscious? So now this IIT, this integrated information
link |
00:17:00.800
theory, which has been put forward by Giulio Tononi and a bunch of his collaborators over the last
link |
00:17:11.760
decade or two. This is noteworthy, I guess, as a direct attempt to answer that question,
link |
00:17:19.360
to, you know, to address the pretty hard problem, right? And they give a criterion
link |
00:17:26.080
that's just based on how a system is connected. So it's up to you to sort of abstract a system
link |
00:17:32.960
like a brain or a microchip as a collection of components that are connected to each other by
link |
00:17:39.200
some pattern of connections, you know, and to specify how the components can influence each
link |
00:17:45.440
other, you know, like where the inputs go, you know, where they affect the outputs. But then
link |
00:17:50.160
once you've specified that, then they give this quantity that they call phi, you know, the Greek
link |
00:17:55.600
letter phi. And the definition of phi has actually changed over time. It changes from one paper to
link |
00:18:02.160
another. But in all of the variations, it involves something about what we in computer science would
link |
00:18:08.880
call graph expansion. So basically what this means is that they want, in order to get a large value
link |
00:18:15.760
of phi, it should not be possible to take your system and partition it into two components
link |
00:18:23.120
that are only weakly connected to each other. Okay. So whenever we take our system and sort of
link |
00:18:29.200
try to split it up into two, then there should be lots and lots of connections going between the
link |
00:18:34.080
two components. Okay, well, I understand what that means on a graph. Do they formalize what,
link |
00:18:40.160
how to construct such a graph or data structure, whatever, or is this one
link |
00:18:45.680
of the criticisms I've heard you kind of say is that a lot of the very interesting specifics are
link |
00:18:51.440
usually communicated through, like, natural language, through words. So it's like the details
link |
00:18:58.080
aren't always clear. Well, it's true. I mean, they have nothing even resembling
link |
00:19:03.360
a derivation of this phi. Okay. So what they do is they state a whole bunch of postulates,
link |
00:19:09.920
you know, axioms that they think that consciousness should satisfy. And then there's some verbal
link |
00:19:15.440
discussion. And then at some point, phi appears, right? And this, this was the first thing that
link |
00:19:21.520
really made the hair stand on my neck, to be honest, because they are acting as if there's a
link |
00:19:26.640
derivation, they're acting as if, you know, you're supposed to think that this is a derivation. And
link |
00:19:31.440
there's nothing even remotely resembling a derivation. They just pull the phi out of a hat
link |
00:19:36.800
completely. Is one of the key criticisms, to you, that details are missing, or is there something
link |
00:19:41.200
more fun to mention? That's not, that's not even the key criticism. That's just, that's just a side
link |
00:19:44.640
point. Okay. The core of it is that, you know, they want to say that
link |
00:19:50.000
a system is more conscious, the larger its value of phi. And I think that that is obvious nonsense.
link |
00:19:56.720
Okay. As soon as you think about it for like a minute, as soon as you think about it in terms of
link |
00:20:01.680
could I construct a system that had an enormous value of phi, like, you know, even larger than
link |
00:20:07.760
the brain has, but that is just implementing an error correcting code, you know, doing nothing
link |
00:20:13.680
that we would associate with, you know, intelligence or consciousness or any of it. The answer is,
link |
00:20:19.520
yes, it is easy to do that. Right. And so I wrote blog posts just making this point that, yeah,
link |
00:20:25.200
it's easy to do that. Now, you know, Tononi's response to that was actually kind of incredible.
link |
00:20:30.720
Right. I mean, I admired it in a way, because instead of disputing any of it,
link |
00:20:35.920
he just bit the bullet, you know, in one of the most audacious bullet
link |
00:20:42.640
bitings I've ever seen in my career. Okay. He said, okay, then fine, you know, this system that
link |
00:20:49.600
just applies this error correcting code, it's conscious, you know, and if it has a much larger
link |
00:20:54.400
value of phi than you or me, it's much more conscious than you or me. You know, we just have
link |
00:21:00.160
to accept what the theory says because, you know, science is not about confirming our intuitions.
link |
00:21:05.600
It's about challenging them. And, you know, this is what my theory predicts that this thing is
link |
00:21:10.640
conscious and, you know, or super duper conscious and how are you going to prove me wrong?
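To make the counterexample concrete, here is a small sketch of my own devising, emphatically not the real phi calculation (which is far more involved) and not Aaronson's exact construction: a circuit that computes nothing but parities can still be wired expander-style, so that every roughly balanced bipartition of its nodes cuts many edges, which is exactly the "integration" structure a large phi rewards.

```python
# A toy sketch (mine; not the actual IIT phi computation, which is far more
# involved, and not Aaronson's exact construction): a "brain-dead" circuit of
# XOR gates wired at random, expander-style, in the spirit of an
# error-correcting code. Every roughly balanced split of its nodes cuts many
# wires -- high "integration" -- yet the circuit only computes parities.
import random

def random_xor_circuit(n_inputs, n_gates, fan_in=3, seed=0):
    """Each gate XORs fan_in randomly chosen earlier wires; returns edge list."""
    rng = random.Random(seed)
    edges = []
    for gate in range(n_inputs, n_inputs + n_gates):
        for src in rng.sample(range(gate), fan_in):
            edges.append((src, gate))
    return edges

def sampled_min_balanced_cut(n_nodes, edges, trials=5000, seed=1):
    """Crude expansion proxy: fewest edges crossing any sampled half/half
    split of the nodes (the exact minimum balanced cut is NP-hard in general)."""
    rng = random.Random(seed)
    nodes, best = list(range(n_nodes)), len(edges)
    for _ in range(trials):
        rng.shuffle(nodes)
        left = set(nodes[:n_nodes // 2])
        best = min(best, sum((u in left) != (v in left) for u, v in edges))
    return best

edges = random_xor_circuit(n_inputs=8, n_gates=56)
print("edges:", len(edges),
      "| min sampled balanced cut:", sampled_min_balanced_cut(64, edges))
```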
link |
00:21:16.320
So the way I would argue against your blog post is I would say, yes, sure, you're right in general,
link |
00:21:22.800
but for naturally arising systems developed through the process of evolution on earth,
link |
00:21:29.280
this rule of a larger phi being associated with more consciousness is correct.
link |
00:21:34.480
Yeah. So that's not what he said at all. Right. Right. Because he wants this to be completely
link |
00:21:39.200
general. Right. So we can apply it to even computers. Yeah. I mean, I mean, the whole
link |
00:21:43.040
interest of the theory is the, you know, the hope that it could be completely general, apply to
link |
00:21:48.160
aliens, to computers, to animals, coma patients, to any of it. Right. Yeah. And so he just
link |
00:21:58.080
said, well, you know, Scott is relying on his intuition, but, you know, I'm relying on this theory.
link |
00:22:04.640
And, you know, to me, it was almost like, you know, are we being serious here?
link |
00:22:10.960
Like, okay, yes, in science, we try to learn highly nonintuitive things. But
link |
00:22:16.800
what we do is we first test the theory on cases where we already know the answer. Right. Like,
link |
00:22:23.200
if we, if someone had a new theory of temperature, right, then, you know, maybe we could check that
link |
00:22:28.320
it says that boiling water is hotter than ice. And then if it says that the sun is hotter than
link |
00:22:34.240
anything, you know, you've ever experienced, then maybe we trust that extrapolation. Right.
link |
00:22:40.480
But this theory, like, if, you know, it's now saying that, you know, a gigantic,
link |
00:22:47.840
like, regular grid of exclusive-OR gates can be way more conscious than, you know, a person,
link |
00:22:54.800
or than any animal can be, you know, even if it is, you know, so uniform
link |
00:23:01.920
that it might as well just be a blank wall, right? And so now the point is, if this theory is
link |
00:23:07.680
sort of getting wrong the question, is a blank wall, you know, more conscious than a person,
link |
00:23:12.960
then I would say, what is there for it to get right? So your sense is a blank wall
link |
00:23:19.680
is not more conscious than a human being. Yeah. I mean, you could say that I am
link |
00:23:24.080
taking that as one of my axioms. I'm saying that if a theory of consciousness
link |
00:23:30.880
is getting that wrong, then whatever it is talking about at that point, I'm not going
link |
00:23:38.560
to call it consciousness. I'm going to use a different word. You have to use a different word.
link |
00:23:41.680
I mean, it's also, it's possible, just like with intelligence, that us humans conveniently
link |
00:23:46.720
define these very difficult-to-understand concepts in a very human-centric way.
link |
00:23:50.800
Just like the Turing test really seems to define intelligence as a thing that's human like.
link |
00:23:56.880
Right. But I would say that with any concept, you know, there's,
link |
00:24:03.200
you know, like we first need to define it, right? And a definition is only a good definition
link |
00:24:09.040
if it matches what we thought we were talking about, you know, prior to having a definition,
link |
00:24:13.520
right? And I would say that, you know, phi as a definition of consciousness fails that test.
link |
00:24:21.200
That is my argument. So, okay. Then let's, so let's take a further step. So you mentioned that
link |
00:24:26.480
the universe might be a Turing machine, so, like, it might be a computation, or simulatable by one
link |
00:24:32.160
anyway. Simulatable by one. So do you, what's your sense about consciousness? Do you think
link |
00:24:38.240
consciousness is computation that we don't need to go to any place outside of the computable universe
link |
00:24:46.080
to, you know, to understand consciousness, to build consciousness, to measure consciousness,
link |
00:24:52.800
all those kinds of things. I don't know. These are what, you know, have been called the
link |
00:24:58.080
vertiginous questions, right? They're the kinds of questions, you know, where you get a feeling of
link |
00:25:03.520
vertigo when thinking about them, right? I mean, I certainly feel like I am conscious in a way that
link |
00:25:10.480
is not reducible to computation. But why should you believe me? Right? I mean, and if you
link |
00:25:16.800
said the same to me, then why should I believe you? But as a computer scientist, I feel like a
link |
00:25:22.960
computer could be intelligent, could achieve human-level intelligence. But, and that's actually a
link |
00:25:30.000
feeling and a hope. That's not a scientific belief. It's just we've built up enough intuition, the
link |
00:25:35.040
same kind of intuition you use in your blog. You know, that's what scientists do. They, I mean,
link |
00:25:39.520
some of it is a scientific method, but some of it is just damn good intuition. I don't have a good
link |
00:25:44.320
intuition about consciousness. Yeah, I'm not sure that anyone does or has in the, you know,
link |
00:25:49.840
2,500 years that these things have been discussed, Lex. But do you think we will? Like one of the,
link |
00:25:55.760
I got a chance to attend, can't wait to hear your opinion on this, but attend the Neuralink event.
link |
00:26:01.920
And one of the dreams there is to, you know, basically push neuroscience forward. And the
link |
00:26:07.600
hope in neuroscience is that we can inspect the machinery from which all this fun stuff emerges
link |
00:26:15.520
and see, are we going to notice something special, some special sauce from which something like
link |
00:26:20.480
consciousness or cognition emerges? Yeah, well, it's clear that we've learned an enormous amount
link |
00:26:25.520
about neuroscience. We've learned an enormous amount about computation, you know, about machine
link |
00:26:31.120
learning, about AI, how to get it to work. We've learned an enormous amount about the
link |
00:26:37.760
underpinnings of the physical world, you know, and, you know, from one point of view, that's like
link |
00:26:43.920
an enormous distance that we've traveled along the road to understanding consciousness.
link |
00:26:48.720
From another point of view, you know, the distance still to be traveled on the road,
link |
00:26:52.560
you know, maybe seems no shorter than it was at the beginning, right? So it's very hard to say.
link |
00:26:58.240
I mean, you know, these are questions, like, in sort of trying to have a theory of consciousness,
link |
00:27:04.000
there's sort of a problem where it feels like it's not just that we don't know how to make
link |
00:27:08.480
progress, it's that it's hard to specify what could even count as progress, right? Because no
link |
00:27:13.920
matter what scientific theory someone proposed, someone else could come along and say, well,
link |
00:27:18.800
you've just talked about the mechanism, you haven't said anything about what breathes fire into the
link |
00:27:24.320
mechanism, what really makes there be something that it's like to be it, right? And that seems
link |
00:27:28.640
like an objection that you could always raise. Yes. No matter, you know, how much someone elucidated
link |
00:27:34.000
the details of how the brain works. Okay, let's go to the Turing Test and the Loebner Prize. I have this
link |
00:27:38.400
intuition, call me crazy, that a machine, to pass the Turing Test in its full, whatever
link |
00:27:46.880
the spirit of it is, we can talk about how to formulate the perfect Turing Test, that that
link |
00:27:52.000
machine has to be conscious, or we at least have to, I have a very low bar of what consciousness is.
link |
00:28:00.800
I tend to think that the emulation of consciousness is as good as consciousness.
link |
00:28:07.120
So, like, consciousness is just a dance, a social shortcut, like a nice useful tool.
link |
00:28:14.640
But I tend to connect intelligence and consciousness together. So by that, do you
link |
00:28:21.120
maybe just to ask, what role does consciousness play? Do you think it's needed to pass the Turing Test?
link |
00:28:27.520
Well, look, I mean, it's almost tautologically true that if we had a machine that passed the
link |
00:28:32.080
Turing Test, then it would be emulating consciousness, right? So if your position is that,
link |
00:28:37.200
you know, emulation of consciousness is consciousness, then, you know, by definition,
link |
00:28:42.640
any machine that passed the Turing Test would be conscious. But it's, but I mean,
link |
00:28:47.840
you know, that you could say that, you know, that that is just a way to rephrase the original
link |
00:28:51.680
question, you know, is an emulation of consciousness, you know, necessarily conscious,
link |
00:28:56.480
right? And you can, you know, here, I'm not saying anything new that hasn't been
link |
00:29:01.120
debated ad nauseam in the literature. Okay, but, you know, you could imagine some very hard cases,
link |
00:29:07.360
like, imagine a machine that passed the Turing Test, but it did so just by an enormous
link |
00:29:13.360
cosmological sized lookup table that just cached every possible conversation that could be had.
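As a minimal sketch of the thought experiment (the entries below are mine and purely illustrative): a "chatbot" that replies by exact lookup on the whole conversation so far. Even for exchanges of 100 characters over a 30-symbol alphabet, the table would need on the order of 30^100, roughly 10^147, entries, dwarfing the roughly 10^80 atoms in the observable universe.

```python
# A minimal sketch of the thought experiment (entries are mine, purely
# illustrative): reply by exact lookup on the entire dialogue so far.
# Nothing here computes anything -- and a real table would need something
# like 30**100 ~ 10**147 entries even for short conversations, versus
# roughly 10**80 atoms in the observable universe.
LOOKUP = {
    (): "Hello! What shall we talk about?",
    ("Hi",): "Nice to meet you. What's on your mind?",
    ("Hi", "Is Mount Everest bigger than a shoebox?"): "Yes, vastly bigger.",
    # ...one entry per possible conversation prefix, ad astronomicum...
}

def lookup_bot(history):
    """Return the cached reply for this exact conversation prefix."""
    return LOOKUP.get(tuple(history), "[no entry cached for this conversation]")

print(lookup_bot([]))
print(lookup_bot(["Hi", "Is Mount Everest bigger than a shoebox?"]))
```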
link |
00:29:19.840
The old Chinese room. Well, yeah, yeah, but this is, I mean, I mean,
link |
00:29:24.480
the Chinese room actually would be doing some computation, at least in Searle's version, right?
link |
00:29:29.200
Here, I'm just talking about a table lookup. Okay, now, it's true that for conversations
link |
00:29:34.160
of a reasonable length, this, you know, lookup table would be so enormous, it wouldn't even
link |
00:29:38.800
fit in the observable universe. Okay, but supposing that you could build a big enough
link |
00:29:43.360
lookup table and then just, you know, pass the Turing Test just by looking up what the person
link |
00:29:49.600
said, right? Are you going to regard that as conscious? Okay, let me try to make this formal,
link |
00:29:55.760
and then you can shout it down. I think that the emulation of something is that something,
link |
00:30:02.880
if there exists in that system, a black box, that's full of mystery. So like,
link |
00:30:09.840
full of mystery to whom? To human inspectors. So does that mean that consciousness is
link |
00:30:15.520
relative to the observer? Like, could something be conscious for us, but not conscious for an
link |
00:30:20.560
alien that understood better what was happening inside the black box? Yes, yes. So that if inside
link |
00:30:26.080
the black box is just a lookup table, the alien that saw that would say this is not conscious,
link |
00:30:31.120
to us, another way to phrase the black box is layers of abstraction,
link |
00:30:36.000
which make it very difficult to see the actual underlying functionality of the system.
link |
00:30:40.560
And then we observe just the abstraction. And so it looks like magic to us. But once we
link |
00:30:45.680
understand the inner machinery, it stops being magic. And so like, that's a prerequisite is
link |
00:30:52.160
that you can't know how it works, some part of it. Because then there has to be in our human mind,
link |
00:30:58.160
an entry point for the magic. So that's a formal definition of the system.
link |
00:31:05.520
Yeah, well, look, I mean, I explored a view in this essay I wrote called The Ghost in the
link |
00:31:10.640
Quantum Turing Machine seven years ago, that is related to that, except that I did not want to
link |
00:31:17.200
have consciousness be relative to the observer, right? Because I think that, you know, if
link |
00:31:21.440
consciousness means anything, it is something that is experienced by the entity that is
link |
00:31:26.080
conscious, right? You know, like, I don't need you to tell me that I'm conscious, right? Nor do you
link |
00:31:31.520
need me to tell you that you are, right? So, but basically what I explored there is, you know,
link |
00:31:40.480
are there aspects of a system like a brain that just could not be predicted, even with
link |
00:31:49.040
arbitrarily advanced future technologies, because of chaos combined with quantum
link |
00:31:54.640
mechanical uncertainty, you know, things like that. I mean, that actually could be a property of the
link |
00:32:01.520
brain, you know, if true, that would distinguish it in a principled way, at least from any currently
link |
00:32:07.360
existing computer, not from any possible computer, but from, yeah, yeah.
link |
00:32:11.360
This is a thought experiment. So if I gave you information that, for the entire history
link |
00:32:18.000
of your life, basically explained away free will with a lookup table, saying that this was all predetermined,
link |
00:32:26.080
that everything you experienced has already been predetermined. Wouldn't that take away
link |
00:32:29.920
your consciousness? Wouldn't you yourself, wouldn't your experience of the world change for you in a way
link |
00:32:35.360
that you can't take back? Well, let me put it this way, if you could do like in a Greek tragedy,
link |
00:32:41.840
where, you know, you would just write down a prediction for what I'm going to do, and then
link |
00:32:46.640
maybe you put the prediction in a sealed box, and maybe, you know, you open it later, and you
link |
00:32:53.040
show that you knew everything I was going to do, or, you know, of course, the even creepier version
link |
00:32:57.840
would be you tell me the prediction, and then I try to falsify it, my very effort to falsify it
link |
00:33:03.280
makes it come true, right? Let's, you know, let's even forget that version, as convenient
link |
00:33:09.520
as it is for fiction writers, right? Let's just do the version where you put the prediction
link |
00:33:14.080
into a sealed envelope, okay? But if you could reliably predict everything that I was going to do,
link |
00:33:20.880
I'm not sure that that would destroy my sense of being conscious, but I think it really would
link |
00:33:25.680
destroy my sense of having free will, you know, and much, much more than any philosophical conversation
link |
00:33:32.800
could possibly do that, right? And so I think it becomes extremely interesting to ask, you know,
link |
00:33:39.680
could such predictions be done, you know, even in principle, is it consistent with the laws of
link |
00:33:44.800
physics to make such predictions, to get enough data about someone that you could actually generate
link |
00:33:50.640
such predictions without having to kill them in the process to, you know, slice their brain up into
link |
00:33:55.920
little slivers or something. I mean, theoretically possible, right? Well, I don't know. I mean,
link |
00:34:00.480
it might be possible, but only at the cost of destroying the person, right? I mean, it depends
link |
00:34:05.840
on how low you have to go in sort of the substrate. Like if there was a nice digital abstraction layer,
link |
00:34:13.520
if you could think of each neuron as a kind of transistor computing a digital function,
link |
00:34:18.800
then you could imagine some nanorobots that would go in and just scan the state of each
link |
00:34:24.160
transistor, you know, of each neuron, and then, you know, make a good enough copy, right? But if it
link |
00:34:30.800
was actually important to get down to the molecular or the atomic level, then, you know, eventually
link |
00:34:36.800
you would be up against quantum effects. You would be up against the unclonability of quantum
link |
00:34:41.360
states. So I think it's a question of how good of a replica, how good does the replica have to be
link |
00:34:48.160
before you're going to count it as actually a copy of you or as being able to predict your actions?
link |
00:34:54.240
That's a totally open question. Yeah, yeah, yeah. And especially once we say that, well,
link |
00:34:59.920
look, maybe there's no way to make a deterministic prediction because, you know, we know that there's
link |
00:35:06.880
noise buffeting the brain around, presumably even quantum mechanical uncertainty, you know,
link |
00:35:12.560
affecting the sodium ion channels, for example, whether they open or they close,
link |
00:35:18.640
you know, there's no reason why over a certain time scale that shouldn't be amplified just like
link |
00:35:24.800
we imagine happens with the weather or with any other, you know, chaotic system. So if that stuff
link |
00:35:34.000
is important, right, then, you know, we would say, well, you know, you can't,
link |
00:35:42.160
you know, you're never going to be able to make an accurate enough copy. But now the hard part is,
link |
00:35:47.120
well, what if someone can make a copy that sort of no one else can tell apart from you, right?
link |
00:35:52.320
It says the same kinds of things that you would have said, maybe not exactly the same things,
link |
00:35:58.000
because we agree that there's noise, but it says the same kinds of things. And maybe you alone
link |
00:36:03.200
would say, no, I know that that's not me, you know, it's, it doesn't share my, I haven't felt my
link |
00:36:08.720
consciousness leap over to that other thing. I still feel it localized in this version,
link |
00:36:13.920
right? Then why should anyone else believe you? What are your thoughts? I'd be curious,
link |
00:36:18.480
you're a really good person to ask, which is Penrose's, Roger Penrose's work on consciousness,
link |
00:36:24.720
saying that, you know, there is something with axons and so on, there might be some
link |
00:36:29.680
biological places where quantum mechanics can come into play and through that create
link |
00:36:34.560
consciousness somehow. Yeah. Okay. Well, I'm familiar with this work. Of course. You know,
link |
00:36:39.440
I read Penrose's books as a teenager. They had a huge impact on me. Five or six years ago,
link |
00:36:45.840
I had the privilege to actually talk these things over with Penrose, you know, at some length at
link |
00:36:50.080
a conference in Minnesota. And, you know, he is, you know, an amazing personality. I admire the
link |
00:36:57.600
fact that he was even raising such audacious questions at all. But, you know, to answer your
link |
00:37:04.080
question, I think the first thing we need to get clear on is that he is not merely saying that
link |
00:37:09.680
quantum mechanics is relevant to consciousness, right? That would be like, you know, that would be
link |
00:37:15.280
tame compared to what he is saying, right? He is saying that, you know, even quantum mechanics
link |
00:37:20.880
is not good enough, right? Because, supposing, for example, that the brain were a quantum computer,
link |
00:37:26.320
maybe that's still a computer, you know, in fact, a quantum computer can be simulated by an ordinary
link |
00:37:32.080
computer. It might merely need exponentially more time in order to do so, right? So that's simply
link |
00:37:37.520
not good enough for him. Okay, so what he wants is for the brain to be a quantum gravitational
link |
00:37:44.080
computer. Or he wants the brain to be exploiting as yet unknown laws of quantum gravity, okay,
link |
00:37:53.120
which would be uncomputable. Uncomputable, that's the key point. Okay, yes, yes.
link |
00:37:58.080
That would be literally uncomputable. And I've asked him, you know, to clarify this, but
link |
00:38:03.280
uncomputable even if you had an oracle for the halting problem, or, you know,
link |
00:38:10.080
as high up as you want to go in the usual hierarchy of uncomputability,
link |
00:38:15.600
he wants to go beyond all of that. Okay, so, so, you know, just to be clear, like, you know,
link |
00:38:21.120
if we're keeping count of how many speculations, you know, there's probably like at least five
link |
00:38:26.160
or six of them, right? There's first of all, that there is some quantum gravity theory that
link |
00:38:30.800
would involve this kind of uncomputability, right? Most people who study quantum gravity
link |
00:38:35.760
would not agree with that. They would say that what we've learned, you know, what little we
link |
00:38:40.480
know about quantum gravity from this AdS/CFT correspondence, for example, has been very much
link |
00:38:47.200
consistent with the broad idea of nature being computable, right? But, all right,
link |
00:38:54.320
but supposing that he's right about that, then, you know, what most physicists would say is that
link |
00:39:00.720
whatever new phenomena there are in quantum gravity, you know, they might be relevant at the
link |
00:39:06.960
singularities of black holes, they might be relevant at the Big Bang. They are plainly not
link |
00:39:14.320
relevant to something like the brain, you know, that is operating at ordinary temperatures,
link |
00:39:20.240
you know, with ordinary chemistry, and, you know, the physics underlying the brain,
link |
00:39:27.600
they would say that we have, you know, the fundamental physics of the brain, they would
link |
00:39:31.360
say that we've pretty much completely known for generations now, right? Because,
link |
00:39:37.520
you know, quantum field theory lets us sort of parameterize our ignorance, right? I mean,
link |
00:39:42.800
Sean Carroll has made this case and, you know, in great detail, right? That sort of whatever
link |
00:39:47.920
new effects are coming from quantum gravity, you know, they are sort of screened off by
link |
00:39:52.880
quantum field theory, right? And this brings us, you know, to the whole idea of
link |
00:39:57.600
effective theories, right? Like, we have, you know, in the Standard Model of
link |
00:40:02.960
elementary particles, right? We have a quantum field theory that seems totally adequate for all
link |
00:40:10.320
of the terrestrial phenomena, right? The only things that it doesn't, you know, explain are,
link |
00:40:15.680
well, first of all, you know, the details of gravity if you were to probe it, like,
link |
00:40:20.560
at, you know, extremes of curvature or at, like, incredibly small distances,
link |
00:40:26.640
it doesn't explain dark matter. It doesn't explain black hole singularities, right? But these are all
link |
00:40:32.400
very exotic things, very, you know, far removed from our life on Earth, right? So for Penrose,
link |
00:40:38.320
to be right, he needs, you know, these phenomena to somehow affect the brain. He needs the brain
link |
00:40:44.560
to contain antennae that are sensitive to this as yet unknown physics, right? And then he
link |
00:40:52.000
needs a modification of quantum mechanics. Okay, so he needs quantum mechanics to actually be wrong.
link |
00:40:58.800
Okay, what he wants is what he calls an objective reduction mechanism, or an objective
link |
00:41:06.000
collapse. So this is the idea that once quantum states get large enough, then they somehow
link |
00:41:11.680
spontaneously collapse, right? And, you know, this is an idea that lots of people have explored.
link |
00:41:21.440
You know, there's something called the GRW proposal that tries to, you know, say something along
link |
00:41:28.160
those lines, you know, and these are theories that actually make testable predictions, right,
link |
00:41:32.160
which is a nice feature that they have. But, you know, the very fact that they're testable may mean
link |
00:41:36.640
that in the, you know, in the coming decades, we may well be able to test these theories and show
link |
00:41:42.240
that they're wrong, right? You know, we may be able to test some of Penrose's ideas, if not
link |
00:41:48.480
his ideas about consciousness, then at least his ideas about an objective collapse of
link |
00:41:53.920
quantum states, right? And people like Dirk Bouwmeester have actually been working
link |
00:41:58.960
to try to do these experiments. They haven't been able to do it yet to test Penrose's proposal.
link |
00:42:04.560
Okay, but Penrose would need more than just an objective collapse of quantum states,
link |
00:42:09.680
which would already be the biggest development in physics for a century since quantum mechanics
link |
00:42:14.560
itself. Okay, he would need for consciousness to somehow be able to influence the direction
link |
00:42:21.760
of the collapse so that it wouldn't be completely random, but that, you know, your dispositions
link |
00:42:27.040
would somehow influence the quantum state to collapse more likely this way or that way.
link |
00:42:32.560
Okay, finally, Penrose, you know, says that all of this has to be true because of an argument
link |
00:42:39.360
that he makes based on Gödel's incompleteness theorem. Okay, now, look, I would say the
link |
00:42:44.880
overwhelming majority of computer scientists and mathematicians who have thought about this,
link |
00:42:51.200
I don't think that Gödel's incompleteness theorem can do what he needs it to do here,
link |
00:42:55.280
right? I don't think that that argument is sound. Okay, but that is, you know,
link |
00:43:00.000
that is sort of the tower that you have to ascend to if you're going to go where Penrose goes.
link |
00:43:04.640
And the intuition he uses with the incompleteness theorem is that basically
link |
00:43:09.440
that there's important stuff that's not computable? No, it's not just that because,
link |
00:43:14.720
I mean, everyone agrees that there are problems that are uncomputable, right? That's a mathematical
link |
00:43:19.360
theorem, right? But what Penrose wants to say is that, you know, for example, there are statements,
link |
00:43:28.400
you know, for, you know, given any formal system, you know, for doing math, right,
link |
00:43:34.160
there will be true statements of arithmetic that that formal system, you know, if it's
link |
00:43:39.600
adequate for math at all, if it's consistent and so on, will not be able to prove. A famous example
link |
00:43:46.000
being the statement that that system itself is consistent, right? You know, no good formal system
link |
00:43:52.160
can actually prove its own consistency; that can only be done from a stronger formal system,
link |
00:43:58.080
which then can't prove its own consistency, and so on forever. Okay, that's Gödel's theorem.
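In symbols, the standard statements being paraphrased here (just the textbook versions, nothing beyond what Scott says):

```latex
% Goedel's second incompleteness theorem: for any consistent, recursively
% axiomatized theory T extending basic arithmetic,
T \nvdash \mathrm{Con}(T).
% The resulting tower: each theory proves the consistency of the one
% below it, and never its own.
T_0 = \mathrm{PA}, \qquad T_{n+1} = T_n + \mathrm{Con}(T_n).
```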
link |
00:44:03.600
But now, why is that relevant to consciousness, right? Well, you know, I mean, I mean, the idea
link |
00:44:11.120
that it might have something to do with consciousness is an old one. Gödel himself,
link |
00:44:15.280
apparently thought that it did, you know. Lucas thought so, I think, in the 60s. And Penrose
link |
00:44:24.240
is really just, you know, sort of updating what they and others had said. I mean, you know, the
link |
00:44:30.160
idea that Gödel's theorem could have something to do with consciousness was, you know, in 1950,
link |
00:44:36.240
when Alan Turing wrote his article about the Turing test, he already, you know, was writing
link |
00:44:42.160
about that as, like, an old and well-known idea, and as a wrong one that he wanted
link |
00:44:47.680
to dispense with. Okay, but the basic problem with this idea is, you know, Penrose wants to say
link |
00:44:54.400
that, and all of his predecessors want to say, that, you know, even though, you know,
link |
00:45:00.480
this given formal system cannot prove its own consistency, we as humans sort of looking at
link |
00:45:07.520
it from the outside can just somehow see its consistency. Right. And the, you know, the rejoinder
link |
00:45:15.280
to that, you know, from the very beginning has been, well, can we really? I mean, maybe, maybe,
link |
00:45:20.960
you know, maybe Penrose can, but, you know, can the rest of us? Right. And, you know,
link |
00:45:27.600
I noted that, you know, I mean, it is perfectly plausible to imagine a computer that
link |
00:45:35.440
could say, you know, it would not be limited to working within a single formal system. Right.
link |
00:45:40.720
It could say, I am now going to adopt the hypothesis that my formal system
link |
00:45:45.920
is consistent. Right. And I'm now going to see what can be done from that stronger vantage point
link |
00:45:50.720
and so on. And, you know, I'm going to add new axioms to my system. Totally plausible.
link |
00:45:56.480
Gödel's theorem has absolutely nothing to say against an AI that could repeatedly
link |
00:46:02.800
add new axioms. All it says is that there is no absolute guarantee that when the AI adds new
link |
00:46:09.920
axioms that it will always be right. Right. Okay. And, you know, that's of course the point
link |
00:46:13.920
that Penrose pounces on, but the reply is obvious. And, you know, it's one that Alan Turing made
link |
00:46:19.440
70 years ago. Namely, we don't have an absolute guarantee that we're right when we add a new
link |
00:46:24.160
axiom. Right. We never have. And plausibly, we never will. So on Alan Turing, you took part in
link |
00:46:30.800
the Loebner Prize. Not really. No, I didn't. I mean, there was this kind of ridiculous claim that was
link |
00:46:39.280
made almost a decade ago about a chatbot called Eugene Goostman. I guess you didn't participate
link |
00:46:47.360
as a judge in the Loebner Prize, but you participated as a judge in that. I guess it was an exhibition
link |
00:46:52.880
event or something like that. Oh, with Eugene, Eugene Goostman, that was just me writing a blog post
link |
00:46:59.440
because some journalists called me to ask about it. Did you ever chat with him? I did chat with
link |
00:47:03.920
Eugene Goostman. I mean, it was available on the web. The chat. Oh, interesting. I didn't know.
link |
00:47:07.600
So yeah. So all that happened was that a bunch of journalists started writing
link |
00:47:13.280
breathless articles about the first chatbot that passes the Turing test. And it was this thing
link |
00:47:19.840
called Eugene Goostman that was supposed to simulate a 13-year-old boy. And apparently,
link |
00:47:26.960
someone had done some test where people were less than perfect, let's say, at distinguishing
link |
00:47:34.480
it from a human. And they said, well, if you look at Turing's paper and you look at the percentages
link |
00:47:40.960
that he talked about, then it seemed like we're past that threshold. And I had a different way
link |
00:47:49.760
to look at it instead of the legalistic way. Let's just try the actual thing out and let's
link |
00:47:55.360
see what it can do with questions like, is Mount Everest bigger than a shoebox? Or just the most
link |
00:48:03.360
obvious questions, right? And then, you know, the answer is, well, it just kind of parries
link |
00:48:08.800
you because it doesn't know what you're talking about, right? So just clarify exactly in which
link |
00:48:13.680
way they're obvious. They're obvious in the sense that you convert the sentences into the meaning
link |
00:48:20.400
of the objects they represent, and then do some basic, obvious, I mean, common sense reasoning
link |
00:48:26.720
with the objects that the sentences represent. Right, right. It was not able to answer, you know,
link |
00:48:31.760
or even intelligently respond to basic common sense questions. Well, let me say something
link |
00:48:36.400
stronger than that. There was a famous chatbot in the 60s called Eliza, right, that, you know,
link |
00:48:42.160
that managed to actually fool, you know, a lot of people, right? People would pour their
link |
00:48:47.520
hearts out into this Eliza because it simulated a therapist, right? And most of what it would do
link |
00:48:53.360
was it would just throw back at you whatever you said, right? And this turned out to be incredibly
link |
00:48:58.560
effective, right? Maybe, you know, therapists know this, this is, you know, one of their tricks.
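A rough sketch of the reflection trick being described (my own toy version, not Weizenbaum's actual ELIZA, which used ranked keyword and decomposition rules in a few hundred lines of Lisp):

```python
# Toy version (mine, not the real ELIZA) of the "throw it back at you" trick:
# swap pronouns in whatever the user says and echo it back as a question.
import re

REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

def reflect(sentence):
    """Swap pronouns so the statement can be echoed back as a question."""
    words = [REFLECTIONS.get(w, w)
             for w in re.findall(r"[a-z']+", sentence.lower())]
    return " ".join(words)

def eliza_reply(sentence):
    return f"Why do you say that {reflect(sentence)}?"

print(eliza_reply("I am unhappy with my work"))
# -> Why do you say that you are unhappy with your work?
```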
link |
00:49:04.640
But it, you know, it really had some people convinced. But, you know, this thing was just
link |
00:49:12.640
like, I think it was literally just a few hundred lines of Lisp code, right? It was not only was
link |
00:49:19.120
it not intelligent, it wasn't especially sophisticated. It was a simple little hobbyist
link |
00:49:24.880
program. And Eugene Goostman, from what I could see, was not a significant advance compared to
link |
00:49:31.360
Eliza, right? So, and that was really the point I was making. And this was,
link |
00:49:37.840
you know, in some sense, you didn't need, like, a computer science professor to
link |
00:49:43.200
sort of say this, like anyone who was looking at it and who just had, you know, an ounce of sense
link |
00:49:49.280
could have said the same thing, right? But because, you know, these journalists were, you know, calling
link |
00:49:55.120
me, you know, like the first thing I said was, well, you know, no, you know, I'm a quantum
link |
00:50:00.240
computing person. I'm not an AI person, you know, you shouldn't ask me. Then they said, look, you
link |
00:50:05.040
can go here and you can try it out. I said, all right, all right, so I'll try it out. But now,
link |
00:50:10.400
you know, this whole discussion, I mean, it got a whole lot more interesting in just the last
link |
00:50:15.040
few months. Yeah, I'd love to hear your thoughts about GPT-3. Yeah, in the last few months,
link |
00:50:20.640
we've had, you know, the world has now seen a chat engine, or a text engine,
link |
00:50:27.040
I should say, called GPT-3. That, you know, I think it's still, you know, it does not pass a
link |
00:50:34.160
Turing test. You know, there are no real claims that it passes the Turing test, right? You know,
link |
00:50:39.600
this comes out of the group at OpenAI, and, you know, they've been relatively
link |
00:50:44.560
careful in what they've claimed about the system. But I think that, as clearly as Eugene
link |
00:50:52.720
Goostman was not an advance over Eliza, it is equally clear that this is a major advance
link |
00:50:58.960
over Eliza or really over anything that the world has seen before. This is a text engine
link |
00:51:05.680
that can come up with kind of on topic, you know, reasonable sounding completions to just about
link |
00:51:13.120
anything that you ask. You can ask it to write a poem about topic X in the style of Poet Y,
link |
00:51:20.960
and it will have a go at that. And it will do, you know, not a perfect, not a great job,
link |
00:51:26.320
not an amazing job, but, you know, a passable job, definitely as good as,
link |
00:51:32.240
you know, in many cases, I would say better than I would have done, right?
link |
00:51:36.720
You know, you can ask it to write, you know, an essay, like a student essay about pretty much
link |
00:51:42.640
any topic, and it will get something that I am pretty sure would get at least a B minus, you
link |
00:51:47.760
know, in most, you know, high school or even college classes, right? And, you know, in some sense,
link |
00:51:53.440
you know, the way that it did this, the way that it achieves this, you know, Scott Alexander of the,
link |
00:51:59.280
you know, the much-mourned blog Slate Star Codex had a wonderful way of putting it. He said
link |
00:52:05.280
that they basically just ground up the entire internet into a slurry, okay? And, you know,
link |
00:52:12.240
to tell you the truth, I had wondered for a while why nobody had tried that, right? Like,
link |
00:52:17.040
why not write a chatbot by just doing deep learning over a corpus consisting of the entire web,
link |
00:52:24.560
right? And so now they finally have done that, right? And, you know, the results are
link |
00:52:31.520
very impressive. You know, people can argue about whether this is
link |
00:52:37.120
truly a step toward general AI or not. But this is an amazing capability that, you know, we didn't
link |
00:52:45.440
have a few years ago. You know, if a few years ago you had told me that we would have it
link |
00:52:51.120
now, that would have surprised me. Yeah. And I think that anyone who denies that is just not
link |
00:52:55.840
engaging with what's there. So their model takes a large part of the internet and compresses
link |
00:53:02.720
it in a small number of parameters relative to the size of the internet, and is able to, without
link |
00:53:10.480
fine tuning, do a basic kind of a querying mechanism, just like you described, where you
link |
00:53:16.880
specify a kind of poet and then you want to write a poem. And it somehow is able to do basically a
link |
00:53:21.520
lookup on the internet of relevant things. I mean, how else do you
link |
00:53:26.960
explain it? Well, okay. I mean, I mean, the training involved, you know, massive amounts of data from
link |
00:53:32.320
the internet and actually took lots and lots of computer power, lots of electricity, right? You
link |
00:53:37.840
know, there are some, some very prosaic reasons why this wasn't done earlier, right? But, you know,
link |
00:53:43.920
it cost some tens of millions of dollars, I think. You know, that's just, well, approximately, like
link |
00:53:48.640
a few million dollars. Oh, okay. Oh, really? Okay. It's more like four or five. Oh, all right.
link |
00:53:54.160
All right, thank you. I mean, as they scale it up, you know, it will cost more, but then the
link |
00:53:58.880
hope is cost comes down and all that kind of stuff. But basically, you know, it is a neural net, you
link |
00:54:06.400
know, so I mean, I mean, or what's now called a deep net, but, you know, they're basically the
link |
00:54:10.320
same thing, right? So it's a, it's a form of, you know, algorithm that people have known about for
link |
00:54:16.400
decades, right? But it is constantly trying to solve the problem, predict the next word, right?
link |
00:54:24.080
So it's just trying to predict what comes next. It's not trying to decide what, what it should
link |
00:54:32.160
say, what ought to be true. It's trying to predict what someone who had said all of the words up to
link |
00:54:38.320
the preceding one would say next.
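To make that concrete, here is a minimal Python sketch of the loop being described; toy_model is a hypothetical stand-in for a trained network, not OpenAI's actual API:

    import random

    def toy_model(words):
        # Hypothetical stand-in for a trained network: returns a probability
        # distribution over the next word, given everything said so far.
        return [("the", 0.5), ("a", 0.3), ("shoebox", 0.2)]

    def generate(prompt, n_words):
        words = prompt.split()
        for _ in range(n_words):
            candidates, weights = zip(*toy_model(words))
            # sample the next word and append it; repeat
            words.append(random.choices(candidates, weights=weights)[0])
        return " ".join(words)

    print(generate("Mount Everest is bigger than", 3))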
link |
00:54:43.360
Although to push back on that, that's how it's trained. That's right. No, of course. But it's arguable, yeah,
link |
00:54:46.640
that our very cognition could be a mechanism that simple.
link |
00:54:50.560
Oh, of course. Of course. I never said that it wasn't.
link |
00:54:54.480
But yeah. I mean, I mean, in some sense, that, that is, you know, if there is a deep
link |
00:54:58.880
philosophical question that's raised by GPT-3, then that is it, right? Are we doing anything
link |
00:55:04.240
other than, you know, this predictive processing, just constantly trying
link |
00:55:09.040
to fill in a blank of what would come next after what we just said up to this point?
link |
00:55:14.160
Is that what I'm doing right now? It's impossible to know. So the intuition that
link |
00:55:19.040
a lot of people have, well, look, this thing is not going to be able to reason, the Mount Everest
link |
00:55:24.000
question. Do you think it's possible that GPT-5, 6, and 7 would be able to,
link |
00:55:29.760
with this exact same process, begin to do something that looks indistinguishable
link |
00:55:37.120
to us humans from reasoning? I mean, the truth is that we don't really know what the limits are,
link |
00:55:42.720
right? Because, you know, what we've seen so far is that, you know, GPT-3 was
link |
00:55:48.480
basically the same thing as GPT-2, but just with, you know, a much larger network, you know,
link |
00:55:56.000
more training time, bigger training corpus, right? And it was, you know, very noticeably better,
link |
00:56:03.280
right, than its immediate predecessor. So, you know, we don't know where you hit the ceiling
link |
00:56:08.960
here, right? I mean, that's the, that's the amazing part and maybe also the scary part,
link |
00:56:13.680
right? Now, my guess would be that, you know, at some point, there has to
link |
00:56:19.120
be diminishing returns, like, it can't be that simple, can it? Right. Well, I wish that I had
link |
00:56:25.840
more to base that guess on. Right. Yeah. I mean, some people say that there would be a limitation
link |
00:56:30.800
on the, we're going to hit a limit on the amount of data that's on the internet. Yes. Yeah. So,
link |
00:56:36.000
so sure. So there's certainly that limit. I mean, there's also, you know, like if you are looking
link |
00:56:42.720
for questions that will stump GPT-3, right, you can come up with some without much trouble, you know, like,
link |
00:56:48.080
you know, even getting it to learn how to balance parentheses, right? Like it can, you know, it
link |
00:56:54.000
doesn't do such a great job, right? And, you know, its failures are
link |
00:57:00.880
ironic, right? Like basic arithmetic, right? And you think, you know,
link |
00:57:05.440
isn't that what computers are supposed to be best at? Yeah. Isn't that where computers already had
link |
00:57:09.760
us beat a century ago? Yeah. Right. And yet that's where GPT-3 struggles, right?
link |
00:57:15.120
But it's amazing, you know, it's almost like a young child in that way, right?
link |
00:57:19.920
But somehow, you know, because it is just trying to predict what comes next,
link |
00:57:28.480
it doesn't know when it should stop doing that and start doing something very different,
link |
00:57:33.200
like some more exact logical reasoning, right? And so, you know, one is naturally
link |
00:57:41.280
led to guess that our brain sort of has some element of predictive processing, but that it's
link |
00:57:47.440
coupled to other mechanisms, right? That it's coupled to, you know, first of all, visual reasoning,
link |
00:57:52.480
which GPT-3 also doesn't have any of, right? Although there's some demonstration that there's
link |
00:57:57.120
a lot of promise there. Oh yeah, it can complete images. That's right. And using the exact same kind
link |
00:58:02.480
of transformer mechanisms to, like, watch videos on YouTube. And so the same self-supervised
link |
00:58:09.760
mechanism, it'd be fascinating to think what kind of completions you could do.
link |
00:58:14.240
Oh yeah, no, absolutely. Although, like, if we ask it, you know, a word problem that
link |
00:58:19.120
involves reasoning about the locations of things in space, I don't think it does such a great job
link |
00:58:23.920
on those, right? To take an example. And so, so the guess would be, well, you know, humans have a
link |
00:58:29.600
lot of predictive processing, a lot of just filling in the blanks, but we also have these other
link |
00:58:33.920
mechanisms that we can couple to, or that we can sort of call as subroutines when we need to.
link |
00:58:39.760
And maybe, you know, to go further, one would want to integrate other forms of
link |
00:58:45.120
reasoning. Let me go on to another topic that is amazing, which is complexity.
link |
00:58:55.840
Let me start with the most absurdly romantic question: what's the most beautiful idea in
link |
00:59:00.560
computer science or theoretical computer science to you? Like what just early on in your life,
link |
00:59:05.760
or in general, has captivated you and just grabbed you? I think I'm going to have to go with the
link |
00:59:10.240
idea of universality. You know, if you're really asking for the most beautiful, I mean, so universality
link |
00:59:18.880
is the idea that, you know, you put together a few simple operations, like in the case of
link |
00:59:25.440
Boolean logic, that might be the AND gate, the OR gate, the NOT gate, right? And then your first
link |
00:59:31.120
guess is, okay, this is a good start. But obviously, as I want to do more complicated things, I'm going
link |
00:59:37.360
to need more complicated building blocks to express that, right? And that was actually my guess when
link |
00:59:43.120
I first learned what programming was. I mean, when I was, you know, an adolescent and someone showed
link |
00:59:48.560
me Apple BASIC and, you know, GW-BASIC, if anyone listening remembers that. Okay, but, you know,
link |
00:59:57.920
I thought, okay, well, now, you know, I felt like this was a revelation, you
link |
01:00:03.600
know, it's like finding out where babies come from. It's like that level of, you know, why didn't
link |
01:00:08.000
anyone tell me this before, right? But I thought, okay, this is just the beginning. Now I know how
link |
01:00:12.800
to write a BASIC program. But, you know, to really write an interesting program, like, you know,
link |
01:00:18.640
a video game, which had always been my dream as a kid to, you know, create my own Nintendo games,
link |
01:00:24.320
right? But, you know, obviously, I'm going to need to learn some way more complicated form
link |
01:00:29.600
of programming than that. Okay, but, you know, eventually I learned this incredible idea of
link |
01:00:35.600
universality. And that says that, no, you throw in a few rules, and then you can, you already have
link |
01:00:42.400
enough to express everything. Okay, so for example, the AND, the OR, and the NOT gate, or in
link |
01:00:49.280
fact, even just the AND and the NOT gate, or even just the NAND gate, for example, is
link |
01:00:55.280
already enough to express any Boolean function on any number of bits, you just have to string
link |
01:01:00.480
together enough of them. You can build the universe with NAND gates, you can build the
link |
01:01:04.080
universe out of NAND gates. Yeah, you know, the simple instructions of BASIC are already enough,
link |
01:01:11.120
at least in principle, you know, if we ignore details like how much memory can be accessed
link |
01:01:16.560
and stuff like that, that is enough to express what could be expressed by any programming language
link |
01:01:21.760
whatsoever. And the way to prove that is very simple. We simply need to show that in BASIC,
link |
01:01:27.680
or whatever, we could write an interpreter or a compiler for whatever other programming
link |
01:01:33.760
language we care about, like C or Java or whatever. And as soon as we had done that,
link |
01:01:39.200
then ipso facto, anything that's expressible in C or Java is also expressible in BASIC.
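As a tiny concrete illustration of that universality claim, here is a Python sketch that recovers AND, OR, and NOT from nothing but NAND and checks the truth tables:

    def NAND(a, b):
        return 1 - (a & b)

    def NOT(a):
        return NAND(a, a)

    def AND(a, b):
        return NOT(NAND(a, b))

    def OR(a, b):
        return NAND(NOT(a), NOT(b))

    # Verify against the truth tables.
    for a in (0, 1):
        for b in (0, 1):
            assert AND(a, b) == (a & b)
            assert OR(a, b) == (a | b)
    print("AND, OR and NOT all recovered from NAND alone")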
link |
01:01:45.680
Okay, and so this idea of universality, you know, goes back at least to Alan Turing in the 1930s,
link |
01:01:54.000
when, you know, he wrote down this incredibly simple pared-down model of a computer,
link |
01:02:01.360
the Turing machine, right, which, you know, he pared down the instruction set to just
link |
01:02:06.960
read a symbol, you know, write a symbol, move to the left, move to the right, halt,
link |
01:02:13.920
change your internal state, right, that's it. Okay, and he proved that, you know,
link |
01:02:20.560
this could simulate all kinds of other things, you know, and so, in fact, today we would say,
link |
01:02:26.880
well, we would call it a Turing-universal model of computation, that is, it
link |
01:02:32.720
has just the same expressive power that BASIC or Java or C++ or any of those other languages
link |
01:02:40.880
have, because anything in those other languages could be compiled down to a Turing machine. Now,
link |
01:02:47.200
Turing also proved a different related thing, which is that there is a single Turing machine
link |
01:02:53.360
that can simulate any other Turing machine, if you just describe that other machine on its tape,
link |
01:03:01.120
right, and likewise, there is a single Turing machine that will run any C program, you know,
link |
01:03:06.720
if you just put it on its tape. That's a second meaning of universality.
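Here is a minimal Python sketch of that pared-down instruction set; the bit-flipping machine below is a made-up example, purely for illustration:

    def run(program, tape):
        # Instruction set: read a symbol, write a symbol, move left/right,
        # change state, halt. '_' stands for a blank cell.
        cells = dict(enumerate(tape))
        head, state = 0, "start"
        while state != "halt":
            symbol = cells.get(head, "_")
            write, move, state = program[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Example machine: flip every bit, halt at the first blank.
    flip = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run(flip, "10110"))  # prints 01001_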
link |
01:03:12.160
First of all, it's incredible that he could even conceive of that, and that was in the 30s, I think. Yeah, the 30s, that's right.
link |
01:03:15.600
That's before computers, really. I mean, I don't know, I wonder what that felt like,
link |
01:03:24.480
you know, learning that there's no Santa Claus or something, because I don't know if that's
link |
01:03:30.000
empowering or paralyzing, because it doesn't give you any, it's like, you can't write a
link |
01:03:36.720
software engineering book and make that the first chapter and say we're done.
link |
01:03:41.040
Well, I mean, right, in one sense, it was this enormous flattening of the
link |
01:03:46.080
universe, right? I had imagined that there was going to be some infinite hierarchy of more and
link |
01:03:51.920
more powerful programming languages, you know, and then I kicked myself for, you know, for having
link |
01:03:56.800
such a stupid idea. But apparently, Gödel had had the same conjecture in the 30s. Oh, good.
link |
01:04:01.840
You're in good company. Well, you know, and then Gödel read Turing's paper,
link |
01:04:07.840
and he kicked himself, and he said, yeah, I was completely wrong about that. Okay,
link |
01:04:11.680
but, you know, I had thought that maybe where I could contribute would be to invent a new,
link |
01:04:17.760
more powerful programming language that lets you express things that could never be expressed in
link |
01:04:22.800
BASIC, right? And, you know, how would you do that? Obviously, you couldn't do it
link |
01:04:27.200
in BASIC itself, right? But, you know, there is this incredible flattening that happens once
link |
01:04:33.360
you learn what universality is. But then it's also like an opportunity, because it means once you
link |
01:04:41.200
know these rules, then, you know, the sky is the limit, right? Then you have kind of the same weapons
link |
01:04:48.000
at your disposal that the world's greatest programmer has. It's now all just a question of
link |
01:04:53.200
how you wield them, right? Exactly. But so every problem is solvable, but some problems are harder
link |
01:05:00.000
than others. Well, yeah, there's the question of how hard is it to
link |
01:05:05.760
write a program. And then there's also the question of what resources the program needs, you
link |
01:05:11.200
know, how much time, how much memory, those are much more complicated questions, of course,
link |
01:05:15.360
ones that we're still struggling with today. Exactly. So you've, I don't know if you created
link |
01:05:19.760
the Complexity Zoo or... I did create the Complexity Zoo. What is it? What's complexity? Oh, all right,
link |
01:05:25.600
all right, all right. Complexity theory is the study of sort of the inherent resources needed
link |
01:05:31.680
to solve computational problems. Okay, so it's easiest to give an example. Like, let's say we
link |
01:05:40.080
want to add two numbers, right? If I want to add them, you know, if the numbers are twice as long,
link |
01:05:48.080
then it will take me twice as long to add them, but only twice as long, right? It's
link |
01:05:53.200
no worse than that. For a computer? For a computer, or for a person using pencil and paper, for
link |
01:05:58.720
that matter. If you have a good algorithm. Yeah, that's right. I mean, even if you just use the
link |
01:06:03.120
elementary school algorithm of just carrying, you know, then it takes time that is linear
link |
01:06:08.480
in the length of the numbers, right? Now, multiplication, if you use the elementary school
link |
01:06:13.360
algorithm is harder because you have to multiply each digit of the first number by each digit of
link |
01:06:19.680
the second one. Yeah, and then deal with all the carries. So that's what we call a quadratic time
link |
01:06:25.040
algorithm, right? If the numbers become twice as long, now you need four times as much time.
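A rough sketch of that growth-rate bookkeeping in Python (illustrative only, not a real arithmetic library):

    # Grade-school addition touches each digit column once; grade-school
    # multiplication pairs every digit of one number with every digit of
    # the other.
    def addition_steps(n_digits):
        return n_digits              # linear: one column at a time

    def multiplication_steps(n_digits):
        return n_digits ** 2         # quadratic: every digit times every digit

    for n in (10, 20, 40):
        print(n, addition_steps(n), multiplication_steps(n))
    # Doubling the length doubles the addition work but quadruples the
    # multiplication work: the linear vs. quadratic distinction above.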
link |
01:06:31.280
Okay. So now, as it turns out, people discovered much faster ways to multiply numbers using
link |
01:06:40.480
computers. And today we know how to multiply two numbers that are n digits long using a number
link |
01:06:47.520
of steps that's nearly linear in n. These are the kinds of questions you can ask. But now let's think about
link |
01:06:52.880
a different thing that people, you know, encountered in elementary school: factoring a
link |
01:06:58.160
number. Okay, take a number and find its prime factors, right? And here, you know, if I give
link |
01:07:04.320
you a number with 10 digits, I ask you for its prime factors. Well, maybe it's even so you know
link |
01:07:10.400
that two is a factor, you know, maybe it ends in zero. So you know that 10 is a factor, right?
link |
01:07:15.680
But, you know, other than a few obvious things like that, you know, if the prime factors are all
link |
01:07:20.880
very large, then it's not clear how you even get started, right? You know, you, it seems like you
link |
01:07:26.240
have to do an exhaustive search among an enormous number of possible factors.
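A minimal Python sketch of that exhaustive search, trial division, which takes roughly 10^(d/2) steps for a d-digit number:

    def trial_division(n):
        # Try every candidate divisor up to sqrt(n).
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(trial_division(8051))  # [83, 97]
    # For a 10-digit number the loop may run ~100,000 times; for the
    # 1000-bit numbers used in cryptography it is utterly hopeless.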
link |
01:07:33.520
Now, as many people might know, for better or worse, the security, you know, of most of the encryption that we currently
link |
01:07:41.360
use to protect the internet is based on the belief, and this is not a theorem, it's a belief
link |
01:07:47.760
that factoring is an inherently hard problem for our computers. We do know algorithms that are
link |
01:07:55.280
better than just trial division and just trying all the possible divisors, but they are still
link |
01:08:02.000
basically exponential. And exponential is hard. Yeah, exactly. So the fastest algorithms that
link |
01:08:09.040
anyone has discovered, at least publicly discovered, you know, I'm assuming that the NSA doesn't know
link |
01:08:14.400
something better. Yeah. Okay, but they take time that basically grows exponentially with the cube
link |
01:08:20.800
root of the size of the number that you're factoring, right? So that cube root, that's the part that
link |
01:08:26.320
takes all the cleverness, okay, but there's still an exponentiality
link |
01:08:31.120
there. But what that means is that when people use thousand-bit keys for their cryptography,
link |
01:08:37.440
that can probably be broken using the resources of the NSA or the world's other intelligence
link |
01:08:42.800
agencies, you know, people have done analyses that say, you know, with a few hundred million
link |
01:08:47.600
dollars of computer power, they could totally do this. And if you look at the documents that Snowden
link |
01:08:53.120
released, you know, it looks a lot like they are doing that or something like that, it would kind
link |
01:08:59.360
of be surprising if they weren't. Okay, but, you know, if that's true, then in some ways,
link |
01:09:05.200
that's reassuring, because if that's the best that they can do, then that would say that they
link |
01:09:09.360
can't break 2000 bit numbers, right? Right, exactly. Right, then 2000 bit numbers would be beyond
link |
01:09:16.000
what even they could do. They haven't found an efficient algorithm. That's where all the
link |
01:09:19.680
worries and the concerns of quantum computing came in that there could be some kind of shortcut
link |
01:09:23.760
around that. Right, so complexity theory is a, you know, is a huge part of, let's say, the
link |
01:09:30.240
theoretical core of computer science. You know, it started in the 60s and 70s as, you know, sort of an,
link |
01:09:37.680
you know, autonomous field. So, you know, it was already well
link |
01:09:43.200
developed even by the time that I was born. But in 2002, I made a website called the
link |
01:09:51.600
Complexity Zoo, to answer your question, where I just tried to catalog the different complexity
link |
01:09:58.640
classes, which are classes of problems that are solvable with different kinds of resources.
link |
01:10:04.240
Okay, so these are kind of, you know, you could think of complexity classes as like being almost
link |
01:10:10.560
to theoretical computer science what the elements are to chemistry, right? They're sort
link |
01:10:16.400
of, you know, our most basic objects in a certain way. I feel like the elements
link |
01:10:25.120
have a characteristic to them where you can't just add an infinite number. Well, you could,
link |
01:10:30.160
but beyond a certain point, they become unstable. Right, right. So it's like, you know, in theory,
link |
01:10:36.000
you can have atoms with, you know, and look, I mean, a neutron star, you know,
link |
01:10:40.800
is a nucleus with, you know, untold billions of neutrons in it,
link |
01:10:48.800
of hadrons in it. Okay, but, you know, for sort of normal atoms, right, probably you
link |
01:10:55.600
can't get much above, you know, an atomic weight of 100, 150 or so, sorry, I mean,
link |
01:11:02.320
beyond 150 or so protons, without it, you know, very quickly fissioning. With complexity classes,
link |
01:11:08.880
well, yeah, you can have an infinity of complexity classes. But, you know, maybe there's
link |
01:11:14.480
only a finite number of them that are particularly interesting, right? Just like with anything else,
link |
01:11:19.600
you know, you care about some more than others. So what kind of interesting classes are
link |
01:11:24.800
there? Maybe, say, if you're taking a computer science class,
link |
01:11:31.040
what are the classes you learn? Good. Let me tell you sort of the biggest ones,
link |
01:11:36.400
the ones that you would learn first. So, you know, first of all, there is P, that's what it's called.
link |
01:11:41.840
Okay, it stands for polynomial time. And this is just the class of all of the problems that you
link |
01:11:47.840
could solve with a conventional computer, like your iPhone or your laptop, you know, by a completely
link |
01:11:55.120
deterministic algorithm, right? Using a number of steps that grows only like the size of the input
link |
01:12:03.200
raised to some fixed power. Okay, so if your algorithm is linear time, like, you know, for
link |
01:12:10.000
adding numbers, okay, that, that problem is in P. If you have an algorithm that's quadratic time,
link |
01:12:16.240
like the elementary school algorithm for multiplying two numbers, that's also in P.
link |
01:12:20.800
Even if it was the size of the input to the tenth power or to the fiftieth power, well,
link |
01:12:26.400
that wouldn't be very good in practice. But, you know, formally, we would still count that,
link |
01:12:31.280
that would still be in P. Okay, but if your algorithm takes exponential time, meaning,
link |
01:12:37.120
like, every time I add one more data point to your input, the time needed by the algorithm
link |
01:12:46.320
doubles, if you need time like two to the power of the amount of input data, then that is what we
link |
01:12:53.520
call an exponential time algorithm. Okay, and that is not polynomial. Okay, so P is all of the problems
link |
01:13:00.960
that have some polynomial time algorithm. Okay, so that includes most of what we do with our
link |
01:13:07.200
computers on a day to day basis, you know, all the, you know, sorting basic arithmetic, you know,
link |
01:13:13.200
whatever is going on in your email reader or in Angry Birds. Okay, it's all in P. Then the next
link |
01:13:20.400
super important class is called NP. That stands for nondeterministic polynomial. Okay, it does not
link |
01:13:27.680
stand for not polynomial, which is a common confusion. But NP is basically all of the problems
link |
01:13:35.440
where if there is a solution, then it is easy to check the solution if someone shows it to you.
link |
01:13:41.920
Okay, so actually a perfect example of a problem in NP is factoring, the one I told you about before.
link |
01:13:49.600
Like, if I gave you a number with thousands of digits, and I told you that it, you know, I asked
link |
01:13:56.240
you, does this have at least three nontrivial divisors? Right, that might be a super
link |
01:14:04.400
hard problem to solve, right, might take you millions of years using any algorithm that's
link |
01:14:09.200
known, at least running on our existing computers. Okay, but if I simply showed you the divisors,
link |
01:14:15.520
I said, here are three divisors of this number, then it would be very easy for you to ask your
link |
01:14:21.360
computer to just check each one and see if it works, just divide it in, see if there's any remainder.
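In Python, the asymmetry being described looks like this; the number and the witnesses are small illustrative stand-ins:

    def check_witness(n, divisors):
        # Polynomial-time check: three distinct nontrivial divisors of n.
        return (len(set(divisors)) == 3
                and all(1 < d < n and n % d == 0 for d in divisors))

    n = 30030                              # 2 * 3 * 5 * 7 * 11 * 13
    print(check_witness(n, [6, 10, 21]))   # True: trivial to verify
    print(check_witness(n, [6, 10, 17]))   # False: 17 does not divide n

Finding such divisors may take an exhaustive search; checking a claimed list takes a few divisions.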
link |
01:14:27.520
Right, and if they all go in, then you've checked, well, I guess there were. Right, so any problem
link |
01:14:35.040
where, you know, whenever there's a solution, there is a short witness,
link |
01:14:40.560
like a polynomial size witness that can be checked in polynomial time, that we call an NP
link |
01:14:47.280
problem. Okay, and yeah, so every problem that's in P is also in NP, right, because, you know,
link |
01:14:55.120
you could always just ignore the witness and just, you know, if a problem is in P, you can just solve
link |
01:14:59.440
it yourself. Right. Okay, but now, the central, you know, mystery of theoretical
link |
01:15:06.080
computer science is: is every NP problem in P? So if you can easily check the answer to a computational
link |
01:15:14.640
problem, does that mean that you can also easily find the answer? Even though there's all these
link |
01:15:19.360
problems where it appears to be very difficult to find the answer, it's still an open question whether
link |
01:15:25.120
an efficient algorithm exists. So what's your... Because no one has proven that there's no way to do it.
link |
01:15:29.760
It's arguably the most, I don't know, the most famous, the most maybe interesting,
link |
01:15:36.640
maybe you disagree with that problem in theoretical computer science. So what's your...
link |
01:15:40.000
The most famous, for sure. P equals NP. Yeah. If you were to bet all your money,
link |
01:15:44.480
where do you put your money? That's an easy one. P is not equal to NP. Okay, so... I like to say
link |
01:15:48.800
that if we were physicists, we would have just declared that to be a law of nature, you know,
link |
01:15:52.800
just like thermodynamics. That's hilarious. Given ourselves Nobel prizes for its discovery. Yeah,
link |
01:15:58.560
yeah. You know, and look, if later it turned out that we were wrong, we'd just give ourselves
link |
01:16:02.720
another Nobel Prize. More Nobel Prizes, yeah. I mean, you know, but yeah, because we're...
link |
01:16:08.160
So harsh, but so true. I mean, no, it's really just because we are mathematicians
link |
01:16:14.000
or descended from mathematicians, you know, we have to call things conjectures that other people
link |
01:16:19.760
would just call empirical facts or discoveries, right? But one shouldn't read more into that
link |
01:16:24.960
difference in language, you know, about the underlying truth. So, okay, so you're a good
link |
01:16:29.920
investor and good spender of money. So then let me ask another way. Is it possible at all?
link |
01:16:37.520
And what would that look like if P indeed equals NP? Well, I do think that it's possible. I mean,
link |
01:16:43.680
in fact, you know, when people really pressed me on my blog for what odds would I put,
link |
01:16:47.920
I put, you know, two or three percent odds that P equals NP. Yeah, just because,
link |
01:16:56.000
I mean, you really have to think about like, if there were 50, you know, mysteries like P versus
link |
01:17:02.480
NP, and if I made a guess about every single one of them, would I expect to be right 50 times?
link |
01:17:07.920
Right. And the truthful answer is no. Okay. Yeah. So, you know, and that's what you really mean
link |
01:17:14.160
in saying that, you know, you have, you know, better than 98% odds for something. Okay. But
link |
01:17:20.960
so, so yeah, you know, I mean, there could certainly be surprises. And look, if P equals NP,
link |
01:17:26.480
well, then there would be the further question of, you know, is the algorithm actually efficient
link |
01:17:31.600
in practice? Right. I mean, Don Knuth, who I know that you've interviewed as well, right? He
link |
01:17:38.000
likes to conjecture that P equals NP, but that the algorithm is so inefficient that it doesn't
link |
01:17:43.680
matter anyway, right? No, I don't know. I've listened to him say that. I don't know whether he says
link |
01:17:49.440
that just because he has an actual reason for thinking it's true or just because it sounds cool.
link |
01:17:54.400
Yeah. Okay. But, you know, that's a logical possibility, right? That the algorithm could
link |
01:18:00.160
be n to the 10,000 time, or it could even just be n squared time, but with a leading constant of
link |
01:18:06.640
it could be a googol times n squared, or something like that. And in that case,
link |
01:18:10.800
the fact that P equals NP, well, it would, you know, ravage the whole theory of complexity.
link |
01:18:18.160
We would have to, you know, rebuild from the ground up. But in practical terms, it might
link |
01:18:22.560
mean very little, right? If the algorithm was too inefficient to run. If the algorithm could
link |
01:18:29.120
actually be run in practice, like if it had small enough constants, you know, or if you could improve
link |
01:18:35.680
it to where it had small enough constants that was efficient in practice, then that would change
link |
01:18:41.200
the world. Okay. You think it would have like, what kind of impact? Well, okay. I mean, here's
link |
01:18:45.600
an example. I mean, you could, well, okay, just for starters, you could break basically all of
link |
01:18:51.520
the encryption that people use to protect the Internet. You could break Bitcoin and every
link |
01:18:56.160
other cryptocurrency, or, you know, mine as much Bitcoin as you wanted, right? You know, become
link |
01:19:02.800
a super-duper billionaire, right? And then plot your next move.
link |
01:19:09.040
Right. Okay. That's just for starters. That's a good point.
link |
01:19:11.280
Now, your next move might be something like, you know, you now have like a theoretically
link |
01:19:16.960
optimal way to train any neural network to find parameters for any neural network, right?
link |
01:19:22.240
So you could now say, like, is there any small neural network that generates the entire content
link |
01:19:27.840
of Wikipedia, right? And now the question is not, can you find it? The question
link |
01:19:33.680
has been reduced to, does that exist or not? Yes. If it does exist, then the answer would be, yes,
link |
01:19:39.360
you can find it. Okay. If you had this algorithm in your hands, okay? You could ask your computer,
link |
01:19:46.320
you know, I mean, P versus NP is one of these seven problems that carry this million
link |
01:19:50.960
dollar prize from the Clay Foundation if you solve them. You know, the others are the
link |
01:19:56.160
Riemann hypothesis, the Poincaré conjecture, which was solved, although the solver turned down the
link |
01:20:02.400
prize, right? And four others. But what I like to say, the way that we can see that
link |
01:20:07.680
P versus NP is the biggest of all of these questions is that if you had this fast algorithm, then you
link |
01:20:14.080
could solve all seven of them. Okay. You just ask your computer, you know, is there a short proof of
link |
01:20:19.200
the Riemann hypothesis, right, in a language where a machine
link |
01:20:23.840
could verify it? And provided that such a proof exists, your computer finds it in a short
link |
01:20:29.120
amount of time without having to do a brute force search. Okay. So I mean, I mean, those are the
link |
01:20:33.600
stakes of what we're talking about. But I hope that also helps to give your listeners some intuition
link |
01:20:39.760
of why I and most of my colleagues would put our money on P not equaling NP.
link |
01:20:46.080
Is it possible, I apologize, this is a really dumb question, but is it possible
link |
01:20:50.400
that a proof will come out that P equals NP, but the algorithm that makes P equal NP
link |
01:20:59.360
is impossible to find? Is that like crazy? Okay. Well, well, if P equals NP, it would mean that
link |
01:21:05.840
there is such an algorithm, that it exists. Yeah. But, you know, it would mean
link |
01:21:13.120
that it exists. Now, you know, in practice, normally the way that we would prove anything
link |
01:21:18.240
like that would be by finding the algorithm, by exhibiting one algorithm. But there is such a thing
link |
01:21:22.720
as a nonconstructive proof that an algorithm exists. You know, this is really only reared its head,
link |
01:21:28.560
I think a few times in the history of our field, right? But, you know, it is, it is theoretically
link |
01:21:34.560
possible that such a thing could happen. But, you know, there are some,
link |
01:21:39.360
even here, there are some amusing observations that one could make. So there is this famous
link |
01:21:44.480
observation of Leonid Levin, who is, you know, one of the original discoverers of NP completeness,
link |
01:21:50.480
right? And he said, well, consider the following algorithm that, I guarantee,
link |
01:21:56.000
will solve the NP problems efficiently, provided that P equals NP. Okay. Here is what it
link |
01:22:02.720
does. It just runs, you know, it enumerates every possible algorithm in a gigantic infinite list,
link |
01:22:10.080
right? Like, in alphabetical order, right? You know, and many of them maybe won't
link |
01:22:14.960
even compile. So we just ignore those. Okay. But now we just, you know, run the first algorithm,
link |
01:22:20.720
then we run the second algorithm, we run the first one a little bit more, then we run the first three
link |
01:22:25.920
algorithms for a while, we run the first four for a while. This is called dovetailing, by the way.
link |
01:22:30.560
This is a known trick in theoretical computer science. Okay. But we do it in such a way that,
link |
01:22:37.840
you know, whatever algorithm out there in our list solves, you know,
link |
01:22:44.720
the NP problems efficiently, we'll eventually hit that one, right? And now the key is that
link |
01:22:50.000
whenever we hit that one, you know, by assumption, it has to solve the problem,
link |
01:22:55.600
has to find the solution. And once it claims to find the solution, then we can check that
link |
01:23:00.320
ourselves, right? Because these are NP problems, we can check it. Now this is utterly
link |
01:23:05.920
impractical, right? You know, you'd have to do this enormous exhaustive search among all the
link |
01:23:11.920
algorithms. But from a certain theoretical standpoint, that is merely a constant prefactor.
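A minimal Python sketch of that dovetailing schedule, with toy generator "programs" standing in for the enumerated algorithms:

    import itertools

    def dovetail(programs, verify):
        # Round k: start program k (if any remain), then give every active
        # program one more step, so each one eventually gets unbounded time.
        machines = []
        remaining = iter(programs)
        for _ in itertools.count():
            new = next(remaining, None)
            if new is not None:
                machines.append(new())
            for m in machines:
                answer = next(m, None)
                if answer is not None and verify(answer):
                    return answer     # check the claimed solution ourselves

    # Toy "algorithms": scan for a nontrivial divisor of 91 from a given start.
    def searcher(start):
        def run():
            d = start
            while True:
                yield d if 1 < d < 91 and 91 % d == 0 else None
                d += 1
        return run

    print(dovetail([searcher(2), searcher(50)], lambda d: 91 % d == 0))  # 7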
link |
01:23:18.880
That's merely a multiplier of your running time. So there are tricks like that one can do to say
link |
01:23:23.680
that in some sense, the algorithm would have to be constructive. But you know, in the human sense,
link |
01:23:30.480
you know, it's conceivable that one could prove such a thing
link |
01:23:35.920
via a non constructive method. Is that likely? I don't think so. Not personally.
link |
01:23:41.600
So that's P and NP, but the complexity zoo is full of wonderful creatures.
link |
01:23:46.800
Well, it's got about 500 of them. 500? So, yeah, how do you get more?
link |
01:23:53.760
Yeah, well, okay, I mean, just for starters, there is everything that we could do
link |
01:24:00.240
with a conventional computer with a polynomial amount of memory, okay, but possibly an exponential
link |
01:24:07.040
amount of time, because we get to reuse the same memory over and over again. Okay, that is called
link |
01:24:12.640
P space. Okay, and that's actually, we think, an even larger class than NP. Okay, well, P is
link |
01:24:20.720
contained in NP, which is contained in P space. And we think that those containments are strict.
link |
01:24:26.640
And the constraint there is on the memory, the memory has to grow polynomially with the size
link |
01:24:32.400
of the input. That's right. That's right. But in P space, we now have interesting things that
link |
01:24:36.880
are not in NP, like, as a famous example, you know, from a given position in chess, you know,
link |
01:24:44.160
does white or black have the win? Let's say, provided that the game lasts only for a
link |
01:24:49.760
reasonable number of moves, okay, or likewise for go. Okay, and, you know, even for the generalizations
link |
01:24:56.400
of these games to arbitrary size boards, because with an eight by eight board, you could say that's
link |
01:25:01.040
just a constant-size problem, you know, in principle, you just solve it in O of one time,
link |
01:25:06.240
right? But so we really mean the generalizations of, you know, games to arbitrary size boards here.
link |
01:25:14.160
Or another thing in P space would be, like, I give you some really hard constraint satisfaction
link |
01:25:21.920
problem, like, you know, you know, traveling salesperson or, you know, packing boxes into
link |
01:25:28.400
the trunk of your car or something like that. And I ask, not just is there a solution, which
link |
01:25:33.040
would be an NP problem, but I ask how many solutions are there? Okay, that, you know,
link |
01:25:38.640
count the number of valid solutions. That actually gives you problems that
link |
01:25:45.440
lie in a complexity class called sharp P; it's written like a hashtag, hashtag P. Got it.
link |
01:25:51.840
Okay, which sits between NP and P space. Then there's all the problems that you can do in exponential
link |
01:25:58.720
time. Okay, that's called EXP. And by the way, it was proven in the 60s that EXP is larger than P.
link |
01:26:10.000
Okay, so we know that much. We know that there are problems that are solvable in exponential
link |
01:26:15.120
time that are not solvable in polynomial time. Okay. In fact, we even know more, we know that
link |
01:26:21.040
there are problems that are solvable in n cube time that are not solvable in n squared time.
link |
01:26:26.480
And those don't help us with the question of P versus NP?
link |
01:26:29.840
Unfortunately, it seems not, or certainly not yet. Right. The techniques that we use
link |
01:26:36.240
to establish those things are very, very related to how Turing proved the unsolvability
link |
01:26:40.960
of the halting problem. But they seem to break down when we're comparing two different resources,
link |
01:26:46.640
like time versus space, or like, you know, P versus NP. Okay, but, you know, I mean, there's
link |
01:26:53.040
what you can do with a randomized algorithm, right, one that, you know,
link |
01:26:57.680
has some probability of making a mistake. That's called BPP, bounded-error
link |
01:27:03.360
probabilistic polynomial time. And then of course, there's one that's very close to my own heart,
link |
01:27:08.720
what you can efficiently do, do in polynomial time using a quantum computer. Okay, and that's
link |
01:27:14.480
called BQP. Right. And so, you know, what's understood about that class. Okay, so P is
link |
01:27:21.680
contained in BPP, which is contained in BQP, which is contained in P space. Okay. So anything you
link |
01:27:28.880
can, in fact, in something very similar to sharp P, BQP is basically, you know,
link |
01:27:35.360
well, it's contained in like P with the magic power to solve sharp P problems. Okay, so why is
link |
01:27:42.000
BQP contained in P space? Oh, that's an excellent question. So there is, well, I mean, one
link |
01:27:50.800
has to prove that. Okay, but you could think of the proof as using Richard Feynman's
link |
01:27:59.120
picture of quantum mechanics, which is that you can always, you know, we haven't really talked about
link |
01:28:04.400
quantum mechanics in this conversation. We did in our previous one. Yeah, we did last
link |
01:28:09.360
time. Okay, but basically, you could always think of a quantum
link |
01:28:14.800
computation as like a branching tree of possibilities, where each possible path that you could take
link |
01:28:23.920
through, you know, the space has a complex number attached to it called an amplitude. Okay,
link |
01:28:30.080
and now the rule is, you know, when you make a measurement at the end, well, you see a random
link |
01:28:35.360
answer. Okay, but quantum mechanics is all about calculating the probability that you're going to
link |
01:28:40.960
see one potential answer versus another one, right? And the rule for calculating the probability
link |
01:28:48.000
that you'll see some answer is that you have to add up the amplitudes for all of the paths that
link |
01:28:53.840
could have led to that answer. And, you know, that's a complex number, so, you know, how could
link |
01:29:00.000
that be a probability? Well, you take the squared absolute value of the result, and that gives you a
link |
01:29:05.040
number between zero and one. Okay, so I just, I just summarized quantum mechanics in like 30
link |
01:29:11.440
seconds. Okay, but, but now, you know, what this already tells us is that anything I can do with
link |
01:29:18.080
a quantum computer, I could simulate with a classical computer if I only have exponentially
link |
01:29:23.840
more time. Okay, and why is that? Because if I have exponential time, I could just write down this
link |
01:29:30.480
entire branching tree and just explicitly calculate each of these amplitudes, right? You know, that
link |
01:29:36.960
will be very inefficient, but it will work, right? It's enough to show that quantum computers could
link |
01:29:42.560
not solve the halting problem, or, you know, they could never do anything that is literally uncomputable
link |
01:29:48.800
in Turing sense. Okay, but now, as I said, there's even a stronger result, which says that BQP is
link |
01:29:55.680
contained in P space. The way that we prove that is that we say, if all I want is to calculate the
link |
01:30:03.520
probability of some particular output happening, you know, which is all I need to simulate a
link |
01:30:08.800
quantum computer, really, then I don't need to write down the entire quantum state, which is
link |
01:30:13.920
an exponentially large object. All I need to do is just calculate what is the amplitude for that
link |
01:30:20.720
final state. And to do that, I just have to sum up all the amplitudes that lead to that state.
link |
01:30:27.840
Okay, so that's an exponentially large sum, but I can calculate it just reusing the same memory
link |
01:30:33.920
over and over for each term in the sum. Hence the P in P space. Hence the P space, yeah.
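A minimal Python sketch of that path-sum idea, using two Hadamard gates on one qubit, where the paths to one outcome cancel:

    from itertools import product
    from math import sqrt

    def h_amp(b_in, b_out):
        # Amplitude for one Hadamard gate: -1/sqrt(2) only for 1 -> 1.
        return (-1.0 if b_in == b_out == 1 else 1.0) / sqrt(2)

    def amplitude(start, end, n_gates=2):
        # Sum over every path (every choice of intermediate bits), reusing
        # the same few variables per path: polynomial space, exponential time.
        total = 0.0
        for mids in product((0, 1), repeat=n_gates - 1):
            path = (start,) + mids + (end,)
            a = 1.0
            for b_in, b_out in zip(path, path[1:]):
                a *= h_amp(b_in, b_out)
            total += a
        return total

    print(abs(amplitude(0, 0)) ** 2)  # ~1.0: the two paths reinforce
    print(abs(amplitude(0, 1)) ** 2)  # 0.0: the two paths cancel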
link |
01:30:39.680
So what, out of that whole complexity zoo, and it could be BQP, what do you find is the most,
link |
01:30:48.160
the class that captured your heart the most? The most beautiful class, it's just, yeah.
link |
01:30:53.680
I use, as my email address, bqpqpoly at gmail.com. Yes, because bqp slash qpoly, well, you know,
link |
01:31:04.400
amazingly no one had taken it. Amazing. But, you know, but this is a class that I was involved in
link |
01:31:11.040
sort of defining, proving the first theorems about, in 2003 or so. So it was kind of close to my heart.
link |
01:31:18.320
But this is, like, if we extend BQP, which is the class of everything we can do efficiently
link |
01:31:24.560
with a quantum computer, to allow quantum advice, which means imagine that you had some special
link |
01:31:32.000
initial state that could somehow help you do computation. And maybe such a state would be
link |
01:31:39.440
exponentially hard to prepare. But, you know, maybe somehow these states were formed in the
link |
01:31:45.600
Big Bang or something, and they've just been sitting around ever since, right? If you found one,
link |
01:31:50.000
and this state could be, like, ultra-powerful; there are no limits on how powerful it could be,
link |
01:31:56.240
except that this state doesn't know in advance which input you've got, right? It only knows the
link |
01:32:01.600
size of your input, you know, and that's BQP slash qpoly. So that's one that I just
link |
01:32:07.680
personally happen to love, okay? But, you know, if you're asking like, what's the, you know,
link |
01:32:13.600
there's a class that I think is way more beautiful, you know, or fundamental,
link |
01:32:19.600
than a lot of people even within this field realize. That class is called SZK,
link |
01:32:26.320
or statistical zero knowledge. And, you know, there's a very, very easy way to define this
link |
01:32:32.400
class, which is to say, suppose that I have two algorithms that each sample from probability
link |
01:32:38.880
distributions, right? So each one just outputs random samples according to, you know, possibly
link |
01:32:45.360
different distributions. And now the question I ask is, you know, let's say distributions over
link |
01:32:50.960
strings of n bits, you know, so over an exponentially large space. Now I ask, are these two distributions
link |
01:32:57.760
close or far as probability distributions? Okay, any problem that can be reduced to that,
link |
01:33:04.000
you know, that can be put into that form is an SZK problem. And the way that this class was
link |
01:33:10.320
originally discovered was completely different from that. And it was kind of more complicated.
link |
01:33:14.960
It was discovered as the class of all of the problems that have a certain kind of what's
link |
01:33:20.640
called a zero-knowledge proof. Zero-knowledge proofs are one of the central ideas in cryptography.
link |
01:33:27.680
You know, Shafi Goldwasser and Silvio Micali won the Turing Award for, you know, inventing them.
link |
01:33:33.200
And they're at the core of even some cryptocurrencies that, you know, people
link |
01:33:38.240
use nowadays. But zero-knowledge proofs are ways of proving to someone that something is
link |
01:33:45.520
true, like, you know, that there is a solution to this, you know, optimization problem or that
link |
01:33:53.280
these two graphs are isomorphic to each other or something. But without revealing why it's true,
link |
01:33:59.520
without revealing anything about why it's true. Okay, SZK is all of the problems for which there
link |
01:34:06.720
is such a proof that doesn't rely on any cryptography. Okay. And if you wonder, like, how could such a
link |
01:34:13.840
thing possibly exist, right? Well, imagine that I had two graphs, and I wanted to convince you
link |
01:34:21.040
that these two graphs are not isomorphic, meaning, you know, I cannot permute one of them so that
link |
01:34:26.560
it's the same as the other one, right? You know, that might be a very hard statement to prove,
link |
01:34:31.120
right? You know, you might have to do a very exhaustive enumeration of, you know,
link |
01:34:35.840
all the different permutations before you were convinced that it was true. But what if there
link |
01:34:40.320
were some all knowing wizard that said to you, look, I'll tell you what, just pick one of the
link |
01:34:45.920
graphs randomly, then randomly permute it, then send it to me, and I will tell you which graph
link |
01:34:52.160
you started with. Okay, and I will do that every single time. Right? Got it. Okay,
link |
01:34:59.280
got it. And let's say that that wizard did that 100 times, and it was right every
link |
01:35:04.880
time, right? Now, if the graphs were isomorphic, then, you know, it would have been flipping a coin
link |
01:35:10.080
each time, right? It would have had only a one in two to the 100 power chance of, you know,
link |
01:35:15.600
of guessing right each time. But, you know, so, so if it's right every time, then now you're
link |
01:35:20.640
statistically convinced that these graphs are not isomorphic, even though you've learned nothing
link |
01:35:25.760
new about why they aren't. So fascinating.
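A minimal Python simulation of that protocol; the "wizard" here cheats by brute force, which is fine for illustration, since the point is what the verifier learns:

    import random
    from itertools import permutations

    def permute(graph, perm):
        return frozenset(frozenset((perm[u], perm[v])) for u, v in graph)

    def wizard(g0, scrambled, n):
        # All-knowing prover, simulated by brute force over all relabelings.
        for p in permutations(range(n)):
            if permute(g0, p) == scrambled:
                return 0
        return 1

    # A path and a triangle-plus-isolated-vertex: NOT isomorphic.
    g0 = [(0, 1), (1, 2), (2, 3)]
    g1 = [(0, 1), (1, 2), (2, 0)]

    for _ in range(100):
        choice = random.randrange(2)          # verifier's secret coin flip
        relabel = list(range(4))
        random.shuffle(relabel)
        scrambled = permute([g0, g1][choice], relabel)
        assert wizard(g0, scrambled, 4) == choice
    print("right 100 out of 100: statistically convinced, nothing else revealed")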
link |
01:35:31.520
So yeah, SZK is all of the problems that have protocols like that one. But it has this beautiful other characterization. It's shown up again and
link |
01:35:37.600
again in my own work and, you know, a lot of people's work. And I think that it really is one
link |
01:35:42.800
of the most fundamental classes. It's just that people didn't realize that when it was first
link |
01:35:46.960
discovered. So we're living in the middle of a pandemic currently. Yeah. How has your life
link |
01:35:54.080
been changed? Or, better to ask, how has your perspective of the world changed with this
link |
01:36:00.400
world-changing event of a pandemic overtaking the entire world? Yeah, well, I mean,
link |
01:36:05.200
all of our lives have changed, you know, like, I guess, as with no other event since I was born,
link |
01:36:11.200
you know, you would have to go back to World War Two for something I think of this magnitude,
link |
01:36:15.920
you know, on, you know, the way that we live our lives. As for how it has changed my worldview,
link |
01:36:22.320
I think that the failure of institutions, you know, like the CDC, like, you know,
link |
01:36:30.240
other institutions that we sort of thought were trustworthy, like a lot of the media
link |
01:36:36.160
was staggering, was absolutely breathtaking. It is something that I would not have predicted.
link |
01:36:43.200
Right. I think I wrote on my blog that, you know, it's fascinating to, like,
link |
01:36:50.400
rewatch the movie Contagion from a decade ago, right, that correctly foresaw so many aspects of,
link |
01:36:58.080
you know, what was going on: an airborne, you know, virus originates in China,
link |
01:37:04.320
spreads to, you know, much of the world, you know, shuts everything down until a vaccine can be
link |
01:37:09.680
developed. You know, everyone has to stay at home. It gets, you know,
link |
01:37:15.200
an enormous number of things right. Okay. But the one thing that they could not imagine,
link |
01:37:20.320
you know, is that like in this movie, everyone from the government is like hyper competent,
link |
01:37:25.840
hyper, you know, dedicated to the public good, right? And the best of the best, you know, yeah,
link |
01:37:30.640
they're the best of the best. And there are these conspiracy
link |
01:37:35.200
theorists, right, who think, you know, this is all fake news, there's not
link |
01:37:40.640
really a pandemic. And those are some random people on the internet, whom the hyper-competent
link |
01:37:45.520
government people have to, you know, oppose, right? You know, in trying to envision the
link |
01:37:50.720
worst thing that could happen, there was a failure of imagination:
link |
01:37:55.920
the moviemakers did not imagine that the conspiracy theorists and, you know, the
link |
01:38:02.000
incompetents and the nutcases would have captured our institutions and be the ones actually running
link |
01:38:07.600
things. So, you know, I love competence in all walks of life. It gives me so much energy. I'm
link |
01:38:14.480
so excited when people do an amazing job. And, like you, well, maybe you can clarify, but I had, maybe
link |
01:38:20.800
not an intuition, but a hope, that government at its best could be ultra-competent. So, first of all,
link |
01:38:28.160
two questions: how do you explain the lack of competence? And the other, maybe on the positive
link |
01:38:32.960
side, how can we build a more competent government? Well, there's an election in two months. I mean,
link |
01:38:39.680
you know, you have faith that the election... I, you know, it's not going to fix everything. But,
link |
01:38:45.120
you know, it's like, I feel like there is a ship that is sinking, and you could at least stop the
link |
01:38:49.200
sinking. But, you know, I think that there are much, much deeper problems. I mean,
link |
01:38:54.560
I think that, you know, it is plausible to me that, you know, a lot of the failures,
link |
01:39:01.760
you know, with the CDC, with some of the other health agencies, even, you know, you know,
link |
01:39:07.120
predate Trump, you know, predate the right-wing populism that has sort of taken over
link |
01:39:13.040
much of the world now. And, you know,
link |
01:39:21.280
I'm actually, you know, I've been strongly in favor of, you know, rushing vaccines, of,
link |
01:39:29.360
you know, I thought that we could have done, you know, human challenge trials,
link |
01:39:34.080
you know, which were not done, right? We could have had, you know, volunteers,
link |
01:39:39.120
you know, actually get vaccines and get, you know, exposed to COVID.
link |
01:39:46.240
So, you know, innovative ways of accelerating what we've done previously over a long
link |
01:39:50.240
time. I thought that, you know, each month that a vaccine is closer
link |
01:39:55.200
is worth like trillions of dollars, and of course lives, you know, at least, you
link |
01:40:00.640
know, hundreds of thousands of lives. Are you surprised that it's taking this long? We still
link |
01:40:04.960
don't have a plan. There's still not a feeling like anyone is actually doing anything in terms of
link |
01:40:10.880
alleviating it, like any kind of plan. So there's a bunch of stuff: there's the vaccine, but you could also do
link |
01:40:16.080
a testing infrastructure where everybody's tested nonstop with contact tracing, all that kind of
link |
01:40:21.280
stuff. Well, I mean, I'm as surprised as almost everyone else. I mean, this is a historic failure.
link |
01:40:26.800
It is one of the biggest failures in the 240-year history of the United States, right? And we should
link |
01:40:33.360
be, you know, crystal clear about that. And, you know, one thing that I think has been missing,
link |
01:40:38.960
you know, even from the more competent side, is sort of the World War II
link |
01:40:45.840
mentality, right? The mentality of, you know,
link |
01:40:52.960
if we can, by breaking a whole bunch of rules, get a vaccine in, you know,
link |
01:40:59.120
even half the amount of time we thought, then let's just do that, because, you know,
link |
01:41:04.480
we have to weigh all of the moral qualms that we have about
link |
01:41:10.480
doing that against the moral qualms of not doing it. And one key little aspect of that that's deeply
link |
01:41:16.720
important to me, and we'll go to that topic next, is that the World War II mentality wasn't just about,
link |
01:41:22.960
you know, breaking all the rules to get the job done. There was a togetherness to it.
link |
01:41:26.960
So if I were president right now, it seems quite elementary to unite the country,
link |
01:41:35.840
because we're facing a crisis. It's easy to make the virus the enemy. And it's very surprising to
link |
01:41:41.840
me that the division has increased as opposed to decreased. That's heart
link |
01:41:48.480
breaking. Yeah. Well, look, I mean, it's been said by others that this is the first time in the
link |
01:41:52.400
country's history that we have a president who does not even pretend to, you know, want to unite
link |
01:41:57.920
the country, right? Yeah. I mean, Lincoln, who fought a civil war,
link |
01:42:04.400
said he wanted to unite the country, right? And I do worry enormously
link |
01:42:11.760
about what happens if the results of this election are contested. You know, will there
link |
01:42:17.920
be violence as a result of that? And will we have a clear path of succession? And, you know,
link |
01:42:23.440
look, I mean, we're going to find out the answers to all of this in two
link |
01:42:27.840
months. And if none of that happens, maybe I'll look foolish, but I am willing to go on the record
link |
01:42:32.640
and say, I am terrified about that. We have been reading The Rise and Fall of the Third Reich.
link |
01:42:39.360
So if I can, this is like one little voice just to put out there: I think November
link |
01:42:46.400
will be a really critical month for people to breathe and put love out there. Do not
link |
01:42:53.920
put out anger. In that context, no matter who wins, no matter what is said, anger
link |
01:42:59.760
may destroy our country, may destroy the world, because of the power of
link |
01:43:04.960
the country. So it's really important to be patient, loving, empathetic. Like one of the
link |
01:43:10.640
things that troubles me is that even people on the left are unable to have love and respect
link |
01:43:16.960
for people who voted for Trump. They can't imagine that there are good people who could vote for the
link |
01:43:21.840
opposite side. And that's... Oh, I know there are, because I know some of them, right? I mean,
link |
01:43:26.320
you know, it still maybe baffles me, but, you know, I know such people.
link |
01:43:31.840
Let me ask you this. It's also heartbreaking to me on the topic of cancel culture. So in the
link |
01:43:37.120
machine learning community, I've seen it a little bit: there's aggressive attacking of people
link |
01:43:44.800
who are trying to have a nuanced conversation about things. And it's troubling because it feels like
link |
01:43:52.400
nuanced conversation is the only way to talk about difficult topics. And when there's a thought
link |
01:43:58.240
police and speech police on any nuanced conversation, everybody has to, like in Animal Farm,
link |
01:44:05.600
chant that racism is bad and sexism is bad, which are things that everybody believes.
link |
01:44:11.520
And they can't possibly say anything nuanced. It feels like it goes against any kind of progress
link |
01:44:17.520
from my kind of shallow perspective. But you've written a little bit about cancel culture. Do
link |
01:44:22.560
you have thoughts that are interesting to say about this? Well, I mean, to say that I am opposed to,
link |
01:44:28.000
you know, this trend of cancellations or of, you know, shouting people down rather than engaging
link |
01:44:34.640
them, that would be a massive understatement, right? And I feel like, you know, I have put my
link |
01:44:40.320
money where my mouth is, you know, not as much as some people have, but, you know, I've tried to do
link |
01:44:46.160
something. I mean, I have defended, you know, some unpopular people and unpopular, you know,
link |
01:44:52.240
ideas on my blog. I've tried to defend norms of open discourse, of, you know,
link |
01:45:02.160
reasoning with our opponents, even when I've been shouted down for that on social media,
link |
01:45:07.760
called a racist, called a sexist, all of those things. Which, by the way, I should
link |
01:45:12.160
say: I would be perfectly happy, if we had time, to state,
link |
01:45:17.040
you know, 10,000 times, my hatred of racism, of sexism, of homophobia,
link |
01:45:24.880
right? But what I don't want to do is to cede to some particular political faction the right to
link |
01:45:32.800
define exactly what is meant by those terms, to say, well, then you have to agree with all of these
link |
01:45:38.720
other extremely contentious positions, or else you are a misogynist, or else you are a racist,
link |
01:45:45.440
right? I say, well, no, don't I, or, you know, don't people
link |
01:45:52.560
like me also get a say in the discussion about, you know, what is racism, about what is going to
link |
01:45:58.320
be the most effective way to combat racism, right? And, you know, this cancellation mentality,
link |
01:46:05.840
I think, is spectacularly ineffective at its own professed goal of, you know, combating racism and
link |
01:46:12.400
sexism. What's a positive way out? So, I don't know if you see what I do on Twitter,
link |
01:46:19.440
but on Twitter, and in my whole life, actually, it's who I am to the core:
link |
01:46:25.520
I really focus on the positive, and I try to put love out there in the world. And still,
link |
01:46:32.720
I get attacked. And I look at that and I wonder, like, you know, I haven't
link |
01:46:38.880
actually said anything difficult and nuanced. You talk about somebody like Steven Pinker,
link |
01:46:44.960
who, I actually don't know the full range of things that he's attacked for, but he tries to say
link |
01:46:52.320
difficult things, he tries to be thoughtful about difficult topics. He does. And obviously he
link |
01:46:57.440
just gets slaughtered by... Well, I mean, yes, but it's also amazing how well
link |
01:47:04.880
Steve has withstood it. I mean, he survived an attempt to cancel him just a couple of months
link |
01:47:10.080
ago, right? Psychologically, he survives it too, which worries me because I don't think I can.
link |
01:47:15.360
Yeah, I've gotten to know Steve a bit. He is incredibly unperturbed by this stuff.
link |
01:47:20.960
And I admire that and I envy it. I wish that I could be like that. I mean, my impulse when I'm
link |
01:47:26.400
getting attacked is I just want to engage every single, like, anonymous person on Twitter and Reddit
link |
01:47:32.960
who is saying mean stuff about me. And I want to say, well, look, can we just talk this over for
link |
01:47:38.000
an hour? And then, you know, you'll see that I'm not that bad. And, you know, sometimes that even
link |
01:47:42.720
works. The problem is, then there are the 20,000 other ones, right? But
link |
01:47:48.960
psychologically, does that wear on you? It does. It does. But yeah, I mean, in terms of what is the
link |
01:47:54.320
solution, I mean, I wish I knew, right? And so, you know, in a certain way, these problems are
link |
01:47:59.520
maybe harder than P versus NP, right? But I think that part of it has to be that,
link |
01:48:07.200
you know, there's a lot of sort of silent support for what I'll call the open
link |
01:48:13.600
discourse side, the reasonable, enlightenment side. And I think that that support
link |
01:48:19.280
has to become less silent, right? I think that a lot of people sort of agree
link |
01:48:25.840
that a lot of these cancellations and attacks are ridiculous, but are just afraid to say so,
link |
01:48:32.400
right? Or else they'll get shouted down as well, right? That's just the standard witch hunt dynamic,
link |
01:48:38.000
which, of course, this faction understands and exploits to its great
link |
01:48:43.040
advantage. But if more people just said, you know, we're not going to
link |
01:48:49.600
stand for this, right? Like, guess what? We're against racism, too. But,
link |
01:48:56.720
what you're doing is ridiculous, right? And the hard part is,
link |
01:49:02.080
like, it takes a lot of mental energy. It takes a lot of time, you know, even if you feel like
link |
01:49:07.120
you're not going to be canceled or, you know, you're staying on the safe side, like, it takes a lot
link |
01:49:11.760
of time to phrase things in exactly the right way and to respond to everything
link |
01:49:18.720
people say. But I think that the more people speak up, from
link |
01:49:25.360
all political persuasions, from all walks of life, then, you know,
link |
01:49:30.640
the easier it is to move forward. Since we've been talking about love: last time, I
link |
01:49:38.160
talked to you about the meaning of life a little bit, but here, it's a weird question to ask
link |
01:49:43.840
a computer scientist, but has love for other human beings, for things, for the world around you
link |
01:49:51.520
played an important role in your life? You know, it's easy for a world-class
link |
01:49:59.440
computer scientist, you could even call yourself, like, a physicist, to be lost in the
link |
01:50:06.160
books. Has the connection to other humans, love for other humans, played an important role?
link |
01:50:11.040
Well, I love my kids. I love my wife. I love my parents. You know, I am probably not
link |
01:50:23.280
different from most people in loving their families, and in that being very important
link |
01:50:29.600
in my life. Now, I should remind you that, you know, I am a theoretical computer scientist.
link |
01:50:36.320
If you're looking for deep insight about the nature of love, you're probably looking in the
link |
01:50:40.640
wrong place to ask me. But sure, it's been important. But is there something from
link |
01:50:48.000
a computer science perspective to be said about love? Or is that even beyond
link |
01:50:53.600
the realm of consciousness and all that? There was this great
link |
01:50:58.960
cartoon, I think one of the classic XKCDs, where it shows a heart and it's like,
link |
01:51:05.120
you know, squaring the heart, taking the Fourier transform of the heart, integrating the
link |
01:51:10.720
heart, each thing. And then it says, my normal approach is useless
link |
01:51:16.640
here. I'm so glad I asked this question. I think there's no better way to end this. I hope we
link |
01:51:23.920
get a chance to talk again. This was an amazing, cool experiment, doing it outside. And I'm really
link |
01:51:28.160
glad you made it out. Yeah. Well, I appreciate it a lot. It's been a pleasure. And I'm glad you
link |
01:51:32.480
were able to come out to Austin. Thanks. Thanks for listening to this conversation with Scott
link |
01:51:37.600
Aaronson. And thank you to our sponsors: Eight Sleep, SimpliSafe, ExpressVPN, and BetterHelp. Please
link |
01:51:46.080
check out these sponsors in the description to get a discount and to support this podcast.
link |
01:51:52.560
If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts,
link |
01:51:57.200
follow on Spotify, support on Patreon, or connect with me on Twitter at Lex Fridman.
link |
01:52:03.600
And now let me leave you with some words from Scott Aaronson that I also gave to you in the
link |
01:52:08.320
introduction, which is: if you always win, then you're probably doing something wrong. Thank you
link |
01:52:15.760
for listening and for putting up with the intro and outro in this strange room in the middle of
link |
01:52:21.600
nowhere. And I very much hope to see you next time in many more ways than one.