
Scott Aaronson: Computational Complexity and Consciousness | Lex Fridman Podcast #130



link |
00:00:00.000
The following is a conversation with Scott Aaronson, his second time on the podcast.
link |
00:00:04.960
He is a professor at UT Austin, director of the Quantum Information Center,
link |
00:00:10.320
and previously a professor at MIT. Last time we talked about quantum computing. This time
link |
00:00:17.680
we talk about computation complexity, consciousness, and theories of everything.
link |
00:00:23.280
I'm recording this intro, as you may be able to tell, in a very strange room in the middle of the
link |
00:00:31.280
night. I'm not really sure how I got here or how I'm going to get out, but a Hunter S. Thompson
link |
00:00:39.280
saying I think applies to today and the last few days and actually the last couple of weeks.
link |
00:00:46.080
Life should not be a journey to the grave with the intention of arriving safely in a pretty and well
link |
00:00:51.440
preserved body, but rather to skid in broadside in a cloud of smoke, thoroughly used up, totally
link |
00:00:59.440
worn out, and loudly proclaiming, wow, what a ride. So I figured whatever I'm up to here,
link |
00:01:08.880
and yes, lots of wine is involved, I'm going to have to improvise, have to improvise,
link |
00:01:14.480
have to improvise, hence this recording. Okay, quick mention of each sponsor,
link |
00:01:20.320
followed by some thoughts related to the episode. First sponsor is SimpliSafe, a home security
link |
00:01:25.520
company I use to monitor and protect my apartment, though of course I'm always prepared with a fallback
link |
00:01:32.880
plan, as a man in this world must always be. Second sponsor is Eight Sleep, a mattress that cools
link |
00:01:43.520
itself, measures heart rate variability, has an app, and has given me yet another reason to look
link |
00:01:50.160
forward to sleep, including the all important power nap. Third sponsor is ExpressVPN, the VPN
link |
00:01:57.920
I've used for many years to protect my privacy on the internet. Finally, the fourth sponsor is Better
link |
00:02:05.040
Help, online therapy when you want to face your demons with a licensed professional, not just
link |
00:02:11.440
by doing David Goggins-like physical challenges like I seem to do on occasion. Please check out
link |
00:02:17.280
these sponsors in the description to get a discount and to support the podcast.
link |
00:02:22.800
As a side note, let me say that this is the second time I've recorded a conversation outdoors.
link |
00:02:28.400
The first one was with Stephen Wolfram when it was actually sunny out; in this case it was raining,
link |
00:02:34.160
which is why I found a covered outdoor patio. But I learned a valuable lesson, which is that
link |
00:02:40.640
raindrops can be quite loud on the hard metal surface of a patio cover. I did my best with
link |
00:02:47.120
the audio, I hope it still sounds okay to you. I'm learning, always improving. In fact, as Scott says,
link |
00:02:55.440
if you always win, then you're probably doing something wrong. To be honest, I get pretty upset
link |
00:03:00.720
with myself when I fail, small or big, but I've learned that this feeling is priceless. It can be
link |
00:03:08.240
fuel, when channeled into concrete plans of how to improve. So if you enjoy this thing, subscribe
link |
00:03:16.080
on YouTube, review it with five stars on Apple Podcasts, follow on Spotify, support on Patreon,
link |
00:03:22.480
or connect with me on Twitter at Lex Fridman. And now, here's my conversation with Scott Aaronson.
link |
00:03:30.320
Let's start with the most absurd question, but I've read you write some fascinating stuff about
link |
00:03:34.960
it, so let's go there. Are we living in a simulation? What difference does it make,
link |
00:03:40.720
Lex? I mean, I'm serious. What difference? Because if we are living in a simulation,
link |
00:03:46.640
it raises the question, how real does something have to be in simulation for it to be sufficiently
link |
00:03:52.640
immersive for us humans? But I mean, even in principle, how could we ever know if we were in
link |
00:03:57.760
one, right? A perfect simulation, by definition, is something that's indistinguishable from the
link |
00:04:02.640
real thing. Well, we didn't say anything about perfect. No, no, that's right. Well, if it was
link |
00:04:07.440
an imperfect simulation, if we could hack it, find a bug in it, then that would be one thing,
link |
00:04:13.040
right? If this was like The Matrix and there was a way for me to do flying kung fu moves or
link |
00:04:19.840
something by hacking the simulation, well then we would have to cross that bridge when we came to
link |
00:04:24.400
it, wouldn't we? At that point, it's hard to see the difference between that and just what people
link |
00:04:33.360
would ordinarily refer to as a world with miracles. What about from a different perspective, thinking
link |
00:04:39.440
about the universe as a computation, like a program running on a computer? That's kind of
link |
00:04:44.560
a neighboring concept. It is. It is an interesting and reasonably well defined question to ask,
link |
00:04:50.480
is the world computable? Does the world satisfy what we would call in CS the Church-Turing
link |
00:04:57.760
thesis? That is, could we take any physical system and simulate it to any desired precision by a
link |
00:05:07.040
Turing machine, given the appropriate input data, right? And so far, I think the indications are
link |
00:05:13.920
pretty strong that our world does seem to satisfy the Church-Turing thesis. At least if it doesn't,
link |
00:05:20.240
then we haven't yet discovered why not. But now, does that mean that our universe is a simulation?
link |
00:05:27.360
Well, that word seems to suggest that there is some other larger universe in which it is running.
link |
00:05:34.800
And the problem there is that if the simulation is perfect, then we're never going to be able to get
link |
00:05:40.880
any direct evidence about that other universe. We will only be able to see the effects of the
link |
00:05:47.760
computation that is running in this universe. Well, let's imagine an analogy. Let's imagine
link |
00:05:53.680
a PC, a personal computer, a computer. Is it possible with the advent of artificial intelligence
link |
00:06:01.280
for the computer to look outside of itself to see, to understand its creator? I mean,
link |
00:06:08.880
that's a simple, is that a ridiculous analogy? Well, I mean, with the computers that we actually
link |
00:06:14.800
have, I mean, first of all, we all know that humans have done an imperfect job of enforcing
link |
00:06:23.520
the abstraction boundaries of computers, right? Like you may try to confine some program to a
link |
00:06:29.840
playpen, but as soon as there's one memory allocation error in the C program, then the
link |
00:06:37.680
program has gotten out of that playpen and it can do whatever it wants, right? This is how most hacks
link |
00:06:43.040
work, you know, viruses and worms and exploits. And, you know, you would have to imagine that an
link |
00:06:49.680
AI would be able to discover something like that. Now, you know, of course, if we could actually
link |
00:06:55.360
discover some exploit of reality itself, then, you know, then this whole, I mean, then in some
link |
00:07:02.960
sense we wouldn't have to philosophize about this, right? This would no longer be a metaphysical
link |
00:07:08.480
conversation. But the question is, what would that hack look like? Yeah, well, I have no idea. I mean,
link |
00:07:18.400
Peter Shor, you know, the very famous person in quantum computing, of course, has joked that
link |
00:07:25.760
maybe the reason why we haven't yet, you know, integrated general relativity and quantum mechanics
link |
00:07:31.440
is that, you know, the part of the universe that depends on both of them was actually left
link |
00:07:36.160
unspecified. And if we ever tried to do an experiment involving the singularity of a black
link |
00:07:42.640
hole or something like that, then, you know, the universe would just generate an overflow error or
link |
00:07:47.840
something, right? Yeah, we would just crash the universe. Now, you know, the universe, you know,
link |
00:07:55.440
has seemed to hold up pretty well for, you know, 14 billion years, right? So, you know, my
link |
00:08:03.120
Occam's razor kind of guess has to be that, you know, it will continue to hold up, you know,
link |
00:08:09.760
that the fact that we don't know the laws of physics governing some phenomenon is not a strong
link |
00:08:15.520
sign that probing that phenomenon is going to crash the universe, right? But, you know, of course,
link |
00:08:21.600
I could be wrong. But do you think on the physics side of things, you know, there's been recently a
link |
00:08:28.000
few folks, Eric Weinstein and Stephen Wolfram, that came out with theories of everything. I think
link |
00:08:33.520
there's a history of physicists dreaming and working on the unification of all the laws of
link |
00:08:39.600
physics. Do you think it's possible that once we understand more physics, not necessarily the
link |
00:08:46.320
unification of the laws, but just understand physics more deeply at the fundamental level,
link |
00:08:50.480
we'll be able to start, you know, I mean, part of this is humorous, but looking to see if there's
link |
00:08:58.000
any bugs in the universe that could be exploited for, you know, traveling at not just speed of
link |
00:09:05.280
light, but just traveling faster than our current spaceships can travel, all that kind of stuff.
link |
00:09:10.240
Well, I mean, to travel faster than our current spaceships could travel, you wouldn't need to
link |
00:09:15.440
find any bug in the universe, right? The known laws of physics, you know, let us go much faster
link |
00:09:20.880
up to the speed of light, right? And, you know, when people want to go faster than the speed of
link |
00:09:25.680
light, well, we actually know something about what that would entail, namely that, you know,
link |
00:09:30.800
according to relativity, that seems to entail communication backwards in time. Okay, so then
link |
00:09:36.800
you have to worry about closed timelike curves and all of that stuff. So, you know, in some sense,
link |
00:09:41.600
we sort of know the price that you have to pay for these things, right?
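For reference, here is the standard special-relativity argument behind that point, a textbook calculation rather than anything worked through in the conversation: a faster-than-light signal in one frame is received before it is sent in another.

```latex
% The "tachyonic antitelephone" argument: send a signal at speed u > c in
% frame S, from event (t, x) = (0, 0) to event (T, uT).
% In a frame S' moving at velocity v (|v| < c), the Lorentz transformation gives
\Delta t' = \gamma\left(T - \frac{v\,(uT)}{c^{2}}\right)
          = \gamma\,T\left(1 - \frac{uv}{c^{2}}\right),
\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
% Since u > c, any observer with c^2/u < v < c sees \Delta t' < 0: the signal
% arrives before it was sent. That is the precise sense in which going faster
% than light entails communication backwards in time.
```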
link |
00:09:45.920
But under the current understanding of physics.
link |
00:09:48.400
That's right. That's right. We can't, you know, say that they're impossible, but we, you know,
link |
00:09:53.040
we know that sort of a lot else in physics breaks, right? So, now regarding Eric Weinstein
link |
00:10:01.200
and Stephen Wolfram, like, I wouldn't say that either of them has a theory of everything. I
link |
00:10:06.240
would say that they have ideas that they hope, you know, could someday lead to a theory of everything.
link |
00:10:11.840
Is that a worthy pursuit?
link |
00:10:13.120
Well, I mean, certainly, let's say by theory of everything, you know, we don't literally mean a
link |
00:10:18.640
theory of cats and of baseball and, you know, but we just mean it in the more limited sense of
link |
00:10:24.800
everything, a fundamental theory of physics, right? Of all of the fundamental interactions of
link |
00:10:31.600
physics, of course, such a theory, even after we had it, you know, would leave the entire question
link |
00:10:38.800
of all the emergent behavior, right? You know, to be explored. So, it's only everything for a
link |
00:10:45.840
specific definition of everything. Okay, but in that sense, I would say, of course, that's worth
link |
00:10:50.320
pursuing. I mean, that is the entire program of fundamental physics, right? All of my friends who
link |
00:10:56.240
do quantum gravity, who do string theory, who do anything like that, that is what's motivating them.
link |
00:11:02.160
Yeah, it's funny, though, but, I mean, Eric Weinstein talks about this. It is, I don't know
link |
00:11:06.960
much about the physics world, but I know about the AI world, and it is a little bit
link |
00:11:11.920
taboo to talk about AGI, for example, on the AI side. So, really, to talk about the big dream of
link |
00:11:22.880
the community, I would say, because it seems so far away, it's almost taboo to bring it up, because,
link |
00:11:29.760
you know, it's seen as the kind of people that dream about creating a truly superhuman level
link |
00:11:34.320
intelligence. That's really far out there, people, because we're not even close to that. And it feels
link |
00:11:40.080
like the same thing is true for the physics community. I mean, Stephen Hawking certainly
link |
00:11:45.440
talked constantly about theory of everything, right? You know, I mean, people, you know,
link |
00:11:51.920
use those terms who were, you know, some of the most respected people in the whole world of
link |
00:11:57.760
physics, right? But, I mean, I think that the distinction that I would make is that people
link |
00:12:03.040
might react badly if you use the term in a way that suggests that you, you know, thinking about
link |
00:12:09.760
it for five minutes, have come up with this major new insight about it, right? It's difficult. Stephen
link |
00:12:16.320
Hawking is not a great example, because I think you can do whatever the heck you want when you
link |
00:12:23.200
get to that level. And I certainly see, like, senior faculty, you know, that, you know, at that
link |
00:12:29.280
point, that's one of the nice things about getting older is you stop giving a damn. But
link |
00:12:35.760
the community as a whole, they tend to roll their eyes very quickly at stuff that's outside the
link |
00:12:40.560
quote unquote mainstream. Well, let me put it this way. I mean, if you asked, you know,
link |
00:12:44.720
Ed Witten, let's say, who is, you know, you might consider the leader of the string community,
link |
00:12:49.680
and thus, you know, very, very mainstream, in a certain sense, but he would have no hesitation
link |
00:12:54.960
in saying, you know, of course, they're looking for a, you know, a unified
link |
00:13:01.840
description of nature, of general relativity, of quantum mechanics, of all the
link |
00:13:07.200
fundamental interactions of nature, right? Now, you know, whether people would call that a theory
link |
00:13:13.280
of everything, whether they would use that term, that might vary. You know, Lenny Susskind would
link |
00:13:18.480
definitely have no problem telling you that, you know, that's what we want, right?
link |
00:13:21.920
For me, who loves human beings and psychology,
link |
00:13:25.760
it's kind of ridiculous to say a theory that unifies the laws of physics gets you to understand
link |
00:13:33.520
everything. I would say you're not even close to understanding everything.
link |
00:13:36.640
Yeah, right. I mean, the word everything is a little ambiguous here. And then people will get
link |
00:13:43.200
into debates about, you know, reductionism versus emergentism and blah, blah, blah. And so in not
link |
00:13:50.480
wanting to say theory of everything, people might just be trying to short circuit that debate and
link |
00:13:55.600
say, you know, look, you know, yes, we want a fundamental theory of, you know, the particles
link |
00:14:01.040
and interactions of nature.
link |
00:14:02.320
Let me bring up the next topic that people don't want to mention, although they're getting
link |
00:14:05.680
more comfortable with it, is consciousness. You mentioned that you have a talk on consciousness
link |
00:14:10.160
that I watched five minutes of, but the internet connection was really bad.
link |
00:14:13.920
Was this my talk about, you know, refuting the integrated information theory?
link |
00:14:18.560
Yes.
link |
00:14:18.800
Which was a particular account of consciousness that, yeah, I think one can just show it doesn't
link |
00:14:22.960
work. Much harder to say what does work.
link |
00:14:25.520
Let me ask, maybe it'd be nice to comment on, you talk about also like the semi hard problem
link |
00:14:34.240
of consciousness or like almost hard problem or kind of hard.
link |
00:14:36.720
Pretty hard problem, I think I call it.
link |
00:14:38.560
So maybe can you talk about that, the idea of, the approach to modeling consciousness and
link |
00:14:47.200
why you don't find it convincing? What is it, first of all?
link |
00:14:49.680
Okay, well, so what I called the pretty hard problem of consciousness, this is my term,
link |
00:14:55.920
although many other people have said something equivalent to this, okay? But it's just, you know,
link |
00:15:02.400
the problem of, you know, giving an account of just which physical systems are conscious and
link |
00:15:09.840
which are not. Or, you know, if there are degrees of consciousness, then quantifying how conscious
link |
00:15:15.840
a given system is.
link |
00:15:16.960
Oh, awesome. So that's the pretty hard problem.
link |
00:15:19.200
Yeah, that's what I mean.
link |
00:15:20.240
That's it. I'm adopting it. I love it. It's got a good ring to it.
link |
00:15:23.520
And so, you know, the infamous hard problem of consciousness is to explain how something
link |
00:15:29.440
like consciousness could arise at all, you know, in a material universe, right? Or, you know,
link |
00:15:34.560
why does it ever feel like anything to experience anything, right? And, you know, so I'm trying to
link |
00:15:40.880
distinguish from that problem, right? And say, you know, no, okay, I would merely settle for an
link |
00:15:46.880
account that could say, you know, is a fetus conscious? You know, if so, at which trimester?
link |
00:15:52.560
You know, is a dog conscious? You know, what about a frog, right?
link |
00:15:58.160
Or even as a precondition, you take that both these things are conscious,
link |
00:16:02.080
tell me which is more conscious.
link |
00:16:03.680
Yeah, for example, yes. Yeah, yeah. I mean, if consciousness is some multidimensional vector,
link |
00:16:09.360
well, just tell me in which respects these things are conscious and in which respect they aren't,
link |
00:16:14.320
right? And, you know, and have some principled way to do it where you're not, you know,
link |
00:16:19.040
carving out exceptions for things that you like or don't like, but could somehow take a description
link |
00:16:24.800
of an arbitrary physical system, and then just based on the physical properties of that system,
link |
00:16:32.080
or the informational properties, or how it's connected, or something like that,
link |
00:16:36.800
just in principle, calculate, you know, its degree of consciousness, right? I mean, this,
link |
00:16:42.240
this would be the kind of thing that we would need, you know, if we wanted to address questions,
link |
00:16:47.280
like, you know, what does it take for a machine to be conscious, right? Or when are, you know,
link |
00:16:52.480
when should we regard AIs as being conscious? So now this IIT, this integrated information theory,
link |
00:17:01.920
which has been put forward by Giulio Tononi and a bunch of his
link |
00:17:09.680
collaborators over the last decade or two, this is noteworthy, I guess, as a direct attempt to
link |
00:17:17.920
answer that question, to, you know, address the pretty hard problem,
link |
00:17:22.640
right? And they give a criterion that's just based on how a system is connected.
link |
00:17:29.840
So it's up to you to sort of abstract the system, like a brain or a microchip, as a collection of
link |
00:17:36.640
components that are connected to each other by some pattern of connections, you know,
link |
00:17:41.600
and to specify how the components can influence each other, you know, like where the inputs go,
link |
00:17:48.000
you know, where they affect the outputs. But then once you've specified that,
link |
00:17:51.920
then they give this quantity that they call phi, you know, the Greek letter phi.
link |
00:17:56.800
And the definition of phi has actually changed over time. It changes from one paper to another,
link |
00:18:02.880
but in all of the variations, it involves something about what we in computer science
link |
00:18:08.560
would call graph expansion. So basically what this means is that they want, in order to get a
link |
00:18:14.800
large value of phi, it should not be possible to take your system and partition it into two
link |
00:18:22.080
components that are only weakly connected to each other. Okay. So whenever we take our system and
link |
00:18:28.800
sort of try to split it up into two, then there should be lots and lots of connections going
link |
00:18:33.520
between the two components. Okay. Well, I understand what that means on a graph.
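As a toy illustration of the expansion idea just described, the sketch below computes a graph's "bisection width": the fewest connections you must cut to split it into two halves. This is only a stand-in for the intuition; Tononi's actual phi is defined via informational relationships between the parts, not raw edge counts.

```python
# Toy version of the "integration" intuition behind phi: a system counts as
# integrated if every way of splitting it into two halves cuts many edges.
# This is NOT the real IIT phi, just a brute-force bisection width.
from itertools import combinations

def bisection_width(n_nodes, edges):
    """Minimum cut over all balanced bipartitions of the nodes.
    Exponential time; only meant for tiny example graphs."""
    half = n_nodes // 2
    best = float("inf")
    for side_a in combinations(range(n_nodes), half):
        in_a = set(side_a)
        cut = sum(1 for u, v in edges if (u in in_a) != (v in in_a))
        best = min(best, cut)
    return best

def grid_edges(n):
    """Nearest-neighbor edges of an n x n grid, like a regular grid of gates."""
    idx = lambda r, c: r * n + c
    edges = []
    for r in range(n):
        for c in range(n):
            if c + 1 < n:
                edges.append((idx(r, c), idx(r, c + 1)))
            if r + 1 < n:
                edges.append((idx(r, c), idx(r + 1, c)))
    return edges

# For an n x n grid the width scales like n, the square root of the number of
# components, so this crude "integration" measure grows without bound as the
# grid grows, even though the grid computes nothing intelligent.
for n in (2, 3, 4):
    print(n, bisection_width(n * n, grid_edges(n)))
```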
link |
00:18:37.200
Do they formalize what, how to construct such a graph or data structure, whatever,
link |
00:18:44.160
or is this one of the criticism I've heard you kind of say is that a lot of the very interesting
link |
00:18:50.560
specifics are usually communicated through like natural language, like through words.
link |
00:18:56.880
So it's like the details aren't always clear. Well, it's true. I mean, they have nothing even
link |
00:19:02.560
resembling a derivation of this phi. Okay. So what they do is they state a whole bunch of postulates,
link |
00:19:09.920
you know, axioms that they think that consciousness should satisfy. And then there's some verbal
link |
00:19:15.440
discussion. And then at some point, phi appears. Right. And this was the first
link |
00:19:20.960
thing that really made the hair stand on my neck, to be honest, because they are acting as if there
link |
00:19:26.400
is a derivation. They're acting as if, you know, you're supposed to think that this is a derivation
link |
00:19:31.360
and there's nothing even remotely resembling a derivation. They just pull the phi out of a hat
link |
00:19:36.800
completely. Is one of the key criticisms, to you, that details are missing, or is there something
link |
00:19:41.200
more fundamental? That's not even the key criticism. That's just a side point.
link |
00:19:45.040
Okay. The core of it is that, you know, they want to say that a system
link |
00:19:50.560
is more conscious the larger its value of phi. And I think that that is obvious nonsense. Okay. As
link |
00:19:57.040
soon as you think about it for like a minute, as soon as you think about it in terms of, could I
link |
00:20:02.160
construct a system that had an enormous value of phi, like, you know, even larger than the brain
link |
00:20:08.240
has, but that is just implementing an error correcting code, you know, doing nothing that we
link |
00:20:13.920
would associate with, you know, intelligence or consciousness or any of it. The answer is yes,
link |
00:20:20.080
it is easy to do that. Right. And so I wrote blog posts, just making this point that, yeah, it's
link |
00:20:25.360
easy to do that. Now, you know, Tononi's response to that was actually kind of incredible, right?
link |
00:20:31.200
I mean, I admired it in a way because instead of disputing any of it, he just bit the
link |
00:20:36.800
bullet, you know, in one of the most audacious bullet
link |
00:20:42.560
bitings I've ever seen in my career. Okay. He said, okay, then fine. You know, this system that
link |
00:20:49.520
just applies this error-correcting code is conscious, you know, and if it has a much larger
link |
00:20:54.400
value of phi than you or me, it's much more conscious than you and me. You know,
link |
00:20:59.600
we just have to accept what the theory says because, you know, science is not about confirming
link |
00:21:04.720
our intuitions. It's about challenging them. And, you know, my theory predicts that
link |
00:21:10.080
this thing is conscious, or, you know, super duper conscious. And how are you going to prove
link |
00:21:15.040
me wrong? So the way I would argue against your blog posts is I would say, yes, sure. You're
link |
00:21:21.840
right in general, but for naturally arising systems developed through the process of evolution on
link |
00:21:28.320
earth, this rule of the larger phi being associated with more consciousness
link |
00:21:33.760
is correct. Yeah. So that's not what he said at all. Right. Right. Because he wants this to be
link |
00:21:38.640
completely general. So we can apply it to even computers. Yeah. I mean, the whole
link |
00:21:43.040
interest of the theory is, you know, the hope that it could be completely general, applying to aliens,
link |
00:21:48.880
to computers, to animals, to coma patients, to any of it. Right. And so he just said, well,
link |
00:21:59.040
you know, Scott is relying on his intuition, but, you know, I'm relying on this theory and,
link |
00:22:04.800
you know, to me it was almost like, you know, are we being serious here? Like,
link |
00:22:10.960
okay, yes, in science we try to learn highly nonintuitive things,
link |
00:22:16.640
but what we do is we first test the theory on cases where we already know the answer. Right.
link |
00:22:22.880
Like if someone had a new theory of temperature, right, then, you know, maybe we
link |
00:22:27.520
could check that it says that boiling water is hotter than ice. And then if it says that the sun
link |
00:22:33.440
is hotter than anything, you know, you've ever experienced, then maybe we trust that
link |
00:22:38.720
extrapolation. Right. But this theory, like, if, you know, it's now saying that, you
link |
00:22:46.320
know, a gigantic, like, regular grid of exclusive-OR gates can be way more conscious than
link |
00:22:53.680
you know, a person or than any animal can be, you know, even if it, you know,
link |
00:22:59.680
is so uniform that it might as well just be a blank wall. Right. And so now the
link |
00:23:06.160
point is, if this theory is sort of getting wrong the question, is a blank wall, you know,
link |
00:23:11.200
more conscious than a person, then I would say, what is there for it to get right?
link |
00:23:15.920
So your sense is a blank wall is not more conscious than a human being.
link |
00:23:22.240
Yeah. I mean, you could say that I am taking that as one of my axioms.
link |
00:23:27.360
I'm saying that if a theory of consciousness is getting that wrong,
link |
00:23:33.440
then whatever it is talking about at that point, I'm not going to call it consciousness.
link |
00:23:39.760
I'm going to use a different word.
link |
00:23:40.720
You have to use a different word. I mean, it's also possible, just like with intelligence,
link |
00:23:45.120
that we humans conveniently define these very difficult-to-understand concepts
link |
00:23:49.200
in a very human-centric way. Just like the Turing test really seems to define intelligence as a
link |
00:23:55.200
thing that's humanlike. Right. But I would say that with any concept, you know,
link |
00:24:01.040
we first need to define it. Right. And a definition is
link |
00:24:07.440
only a good definition if it matches what we thought we were talking about prior to having
link |
00:24:12.640
a definition. Right. And I would say that, you know, phi as a definition of consciousness
link |
00:24:19.120
fails that test. That is my argument. So, okay. So let's take a further step. So you mentioned
link |
00:24:26.160
that the universe might be a Turing machine, so like it might be computation, or simulatable
link |
00:24:31.680
by one anyway. So what's your sense about consciousness? Do you think
link |
00:24:38.240
consciousness is computation that we don't need to go to any place outside of the computable universe
link |
00:24:46.080
to, you know, understand consciousness, to build consciousness, to measure consciousness,
link |
00:24:52.800
all those kinds of things? I don't know. These are what, you know, have been called
link |
00:24:57.840
the vertiginous questions, right? These are the questions where, you know,
link |
00:25:02.480
you get a feeling of vertigo in thinking about them. Right. I mean, I certainly feel like
link |
00:25:08.240
I am conscious in a way that is not reducible to computation, but why should you believe me?
link |
00:25:14.640
Right. I mean, and if you said the same to me, then why should I believe you?
link |
00:25:19.360
But as a computer scientist, I feel like a computer could achieve human-level intelligence,
link |
00:25:27.680
and that's actually a feeling and a hope, not a scientific belief. It's just,
link |
00:25:33.520
we've built up enough intuition, the same kind of intuition you use in your blog.
link |
00:25:37.680
You know, that's what scientists do. I mean, some of it is the scientific method,
link |
00:25:41.040
but some of it is just damn good intuition. I don't have a good intuition about consciousness.
link |
00:25:45.840
Yeah. I'm not sure that anyone does or has in the, you know,
link |
00:25:49.840
2,500 years that these things have been discussed, Lex.
link |
00:25:53.360
But do you think we will? Like, I've gotten a chance to attend,
link |
00:25:57.920
can't wait to hear your opinion on this, but attend the Neuralink event.
link |
00:26:01.920
And one of the dreams there is to, you know, basically push neuroscience forward.
link |
00:26:07.360
And the hope with neuroscience is that we can inspect the machinery from which all this
link |
00:26:14.080
fun stuff emerges and see whether we notice something special, some special sauce from which
link |
00:26:19.920
something like consciousness or cognition emerges. Yeah. Well, it's clear that we've learned an
link |
00:26:24.560
enormous amount about neuroscience. We've learned an enormous amount about computation, you know,
link |
00:26:30.320
about machine learning, about AI, how to get it to work. We've learned an enormous amount about
link |
00:26:36.800
the underpinnings of the physical world, you know, and, you know, from one point of view,
link |
00:26:42.880
that's like an enormous distance that we've traveled along the road to understanding
link |
00:26:47.680
consciousness. From another point of view, you know, the distance still to be traveled on the
link |
00:26:52.000
road, you know, maybe seems no shorter than it was at the beginning. Right? So it's very hard to say.
link |
00:26:58.240
I mean, you know, with these questions, like, in sort of trying to have a theory
link |
00:27:03.120
of consciousness, there's sort of a problem where it feels like it's not just that we don't know
link |
00:27:08.000
how to make progress. It's that it's hard to specify what could even count as progress,
link |
00:27:13.280
right? Because no matter what scientific theory someone proposed, someone else could come along
link |
00:27:18.160
and say, well, you've just talked about the mechanism. You haven't said anything about
link |
00:27:22.560
what breathes fire into the mechanism, what really makes there something that it's like to be it.
link |
00:27:27.920
Right. And that seems like an objection that you could always raise no matter,
link |
00:27:32.000
you know, how much someone elucidated the details of how the brain works.
link |
00:27:35.840
Okay. Let's go to the Turing test and the Loebner Prize. I have this intuition, call me crazy,
link |
00:27:40.880
but, for a machine to pass the Turing test in its full, whatever the spirit of it is,
link |
00:27:48.080
we can talk about how to formulate the perfect Turing test, that machine has to be conscious.
link |
00:27:55.680
I have a very low bar of what consciousness is. I tend to think that
link |
00:28:03.280
the emulation of consciousness is as good as consciousness. So consciousness is just a
link |
00:28:08.640
dance, a social shortcut, like a nice, useful tool, but I tend to connect intelligence
link |
00:28:16.240
and consciousness together. So by that, maybe just to ask, what role does consciousness
link |
00:28:25.840
play, do you think, in passing the Turing test? Well, look, I mean, it's almost tautologically
link |
00:28:29.680
true that if we had a machine that passed the Turing test, then it would be emulating consciousness.
link |
00:28:35.120
Right? So if your position is that, you know, emulation of consciousness is consciousness,
link |
00:28:40.320
then, you know, by definition, any machine that passed the Turing test would be conscious.
link |
00:28:45.840
But, I mean, you could say that, you know, that is just a way to
link |
00:28:50.480
rephrase the original question, you know, is an emulation of consciousness, you know, necessarily
link |
00:28:55.840
conscious. Right. And, you know, here I'm not saying anything new that hasn't been
link |
00:29:01.120
debated ad nauseam in the literature. Okay. But, you know, you could imagine some very hard cases,
link |
00:29:07.360
like imagine a machine that passed the Turing test, but that did so just by an enormous
link |
00:29:13.360
cosmological-sized lookup table that just cached every possible conversation that could be had.
link |
00:29:19.840
The old Chinese room.
link |
00:29:21.040
Well, yeah. But, I mean, the Chinese room actually would be doing
link |
00:29:26.400
some computation, at least in Searle's version. Right. Here, I'm just talking about a table lookup.
link |
00:29:31.520
Okay. Now it's true that for conversations of a reasonable length, this, you know, lookup table
link |
00:29:37.040
would be so enormous that it wouldn't even fit in the observable universe. Okay. But supposing that
link |
00:29:42.000
you could build a big enough lookup table and then just, you know, pass the Turing test just
link |
00:29:48.080
by looking up what the person said. Right. Are you going to regard that as conscious?
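To make the thought experiment concrete, a lookup-table chatbot is trivial to write down; only its size is prohibitive. A minimal sketch, with a few hypothetical table entries standing in for the astronomically many that the real version would need:

```python
# The lookup-table chatbot from the thought experiment: no computation beyond
# retrieval. A complete table would need an entry for every possible
# conversation history, which wouldn't fit in the observable universe;
# these few made-up entries just show the shape of the idea.
TABLE = {
    (): "Hello! How are you?",
    ("Hello! How are you?", "Fine, and you?"): "Great. What's on your mind?",
    ("Hello! How are you?", "Is Mount Everest bigger than a shoebox?"): "Yes, vastly bigger.",
}

def reply(history):
    """Look up the entire conversation so far; no reasoning happens here."""
    return TABLE.get(tuple(history), "...")

history = []
bot = reply(history)           # "Hello! How are you?"
history.append(bot)
history.append("Fine, and you?")
print(reply(history))          # "Great. What's on your mind?"
```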
link |
00:29:52.960
Okay. Let me try to make this formal and then you can shut it down. I think that the emulation of
link |
00:30:00.880
something is that something, if there exists in that system, a black box that's full of mystery.
link |
00:30:07.840
So like, full of mystery to whom?
link |
00:30:11.440
To human spectators.
link |
00:30:13.920
So does that mean that consciousness is relative to the observer? Like,
link |
00:30:17.120
could something be conscious for us, but not conscious for an alien that understood better
link |
00:30:22.160
what was happening inside the black box? Yes. So that if inside the black box is just a lookup
link |
00:30:27.360
table, the alien that saw that would say this is not conscious. To us, another way to phrase the
link |
00:30:33.680
black box is layers of abstraction, which make it very difficult to see the actual underlying
link |
00:30:38.960
functionality of the system. And then we observe just the abstraction. And so it looks like magic
link |
00:30:44.400
to us. But once we understand the inner machinery, it stops being magic. And so, like, the
link |
00:30:51.040
prerequisite is that you can't know how it works, or some part of it, because then there has to be
link |
00:30:57.040
in our human mind, an entry point for the magic. So that's a formal definition of the system.
link |
00:31:05.440
Yeah, well, look, I mean, I explored a view in this essay I wrote called The Ghost in the Quantum
link |
00:31:10.960
Turing Machine seven years ago that is related to that, except that I did not want to have
link |
00:31:17.440
consciousness be relative to the observer, right? Because I think that if consciousness means
link |
00:31:22.400
anything, it is something that is experienced by the entity that is conscious, right? Like,
link |
00:31:27.840
I don't need you to tell me that I'm conscious, nor do you need me to tell you that you are,
link |
00:31:35.600
right? But basically, what I explored there is are there aspects of a system like a brain that just
link |
00:31:47.120
could not be predicted even with arbitrarily advanced future technologies, because of
link |
00:31:52.880
chaos combined with quantum mechanical uncertainty and things like that? I mean, that actually could
link |
00:31:59.120
be a property of the brain, you know, if true, that would distinguish it in a principled way,
link |
00:32:06.000
at least from any currently existing computer. Not from any possible computer, but yeah, yeah.
link |
00:32:11.360
This is a thought experiment. So if I gave you information that the entire history of your life,
link |
00:32:20.000
basically explain away free will with a lookup table, say that this was all predetermined,
link |
00:32:26.000
that everything you experienced has already been predetermined, wouldn't that take away
link |
00:32:29.840
your consciousness? Wouldn't you, yourself, wouldn't the experience of the world change for
link |
00:32:34.640
you in a way that you can't take back? Well, let me put it this way. If you could
link |
00:32:39.600
do like in a Greek tragedy where, you know, you would just write down a prediction for what I'm
link |
00:32:44.960
going to do and then maybe you put the prediction in a sealed box and maybe, you know, you open it
link |
00:32:52.160
later and you show that you knew everything I was going to do or, you know, of course,
link |
00:32:56.480
the even creepier version would be you tell me the prediction and then I try to falsify it,
link |
00:33:01.680
my very effort to falsify it makes it come true, right? Let's even forget that, you know,
link |
00:33:07.920
that version as convenient as it is for fiction writers, right? Let's just do the version where
link |
00:33:13.040
you put the prediction into a sealed envelope, okay? But if you could reliably predict everything
link |
00:33:19.440
that I was going to do, I'm not sure that that would destroy my sense of being conscious,
link |
00:33:24.320
but I think it really would destroy my sense of having free will, you know, and much, much more
link |
00:33:30.320
than any philosophical conversation could possibly do that, right? And so I think it becomes extremely
link |
00:33:37.760
interesting to ask, you know, could such predictions be done, you know, even in principle,
link |
00:33:43.280
is it consistent with the laws of physics to make such predictions, to get enough data about someone
link |
00:33:49.360
that you could actually generate such predictions without having to kill them in the process to,
link |
00:33:53.840
you know, slice their brain up into little slivers or something.
link |
00:33:57.280
I mean, it's theoretically possible, right?
link |
00:33:59.120
Well, I don't know. I mean, it might be possible, but only at the cost of destroying the person,
link |
00:34:04.320
right? I mean, it depends on how low you have to go in sort of the substrate. Like if there was
link |
00:34:11.040
a nice digital abstraction layer, if you could think of each neuron as a kind of transistor
link |
00:34:16.960
computing a digital function, then you could imagine some nanorobots that would go in and
link |
00:34:22.320
would just scan the state of each transistor, you know, of each neuron and then, you know, make a
link |
00:34:28.480
good enough copy, right? But if it was actually important to get down to the molecular or the
link |
00:34:34.240
atomic level, then, you know, eventually you would be up against quantum effects.
link |
00:34:38.720
You would be up against the unclonability of quantum states. So I think it's a question of
link |
00:34:43.760
how good of a replica, how good does the replica have to be before you're going to count it as
link |
00:34:49.760
actually a copy of you or as being able to predict your actions.
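The unclonability mentioned here is the quantum no-cloning theorem; the standard one-step proof that exact copying of an unknown quantum state is impossible (textbook material, not something derived in the conversation) goes like this:

```latex
% No-cloning theorem: no unitary U can copy an arbitrary unknown state, i.e.,
% satisfy U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle for all |\psi\rangle.
% Suppose U clones both basis states,
U(|0\rangle|0\rangle) = |0\rangle|0\rangle, \qquad U(|1\rangle|0\rangle) = |1\rangle|1\rangle.
% Then linearity forces, for |+\rangle = (|0\rangle + |1\rangle)/\sqrt{2},
U(|{+}\rangle|0\rangle) = \frac{|00\rangle + |11\rangle}{\sqrt{2}}
\;\neq\; |{+}\rangle|{+}\rangle = \frac{|00\rangle + |01\rangle + |10\rangle + |11\rangle}{2}.
% So exact copying of unknown quantum states is impossible, which is one
% principled obstruction to a perfect brain scan at the quantum level.
```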
link |
00:34:54.240
That's a totally open question.
link |
00:34:55.760
Yeah. And especially once we say that, well, look, maybe there's no way to,
link |
00:35:02.080
you know, to make a deterministic prediction because, you know, we know that there's noise
link |
00:35:07.440
buffeting the brain around, presumably even quantum mechanical uncertainty,
link |
00:35:12.240
you know, affecting the sodium ion channels, for example, whether they open or they close.
link |
00:35:18.720
You know, there's no reason why over a certain time scale that shouldn't be amplified, just like
link |
00:35:24.880
we imagine happens with the weather or with any other, you know, chaotic system. So if that stuff
link |
00:35:33.680
is important, right, then we would say, well, you know, you're never going to
link |
00:35:43.600
be able to make an accurate enough copy. But now the hard part is, well, what if someone can make
link |
00:35:48.000
a copy that sort of no one else can tell apart from you, right? It says the same kinds of things
link |
00:35:54.320
that you would have said, maybe not exactly the same things because we agree that there's noise,
link |
00:35:59.600
but it says the same kinds of things. And maybe you alone would say, no, I know that that's not
link |
00:36:04.960
me, you know, it doesn't share my, I haven't felt my consciousness leap over to that other
link |
00:36:10.480
thing. I still feel it localized in this version, right? And then why should anyone else believe
link |
00:36:15.600
you? What are your thoughts? I'd be curious, you're a really good person to ask, about
link |
00:36:20.720
Roger Penrose's work on consciousness, saying that, you know,
link |
00:36:26.080
with axons and so on, there might be some biological places where quantum mechanics
link |
00:36:32.240
can come into play and through that create consciousness somehow.
link |
00:36:35.840
Yeah. Okay. Well, of course, you know, I read Penrose's books as a teenager. They had
link |
00:36:42.480
a huge impact on me. Five or six years ago, I had the privilege to actually talk these
link |
00:36:47.840
things over with Penrose, you know, at some length at a conference in Minnesota. And, you know,
link |
00:36:53.440
he is, you know, an amazing personality. I admire the fact that he was
link |
00:36:58.800
even raising such audacious questions at all. But, you know, to answer your
link |
00:37:04.080
question, I think the first thing we need to get clear on is that he is not merely saying that
link |
00:37:09.680
quantum mechanics is relevant to consciousness, right? That would be, you know, that would
link |
00:37:15.040
be tame compared to what he is saying, right? He is saying that, you know, even quantum mechanics
link |
00:37:20.880
is not good enough, right? Because, supposing for example that the brain were a
link |
00:37:25.280
quantum computer, you know, that's still a computer. In fact, a quantum computer can be
link |
00:37:30.640
simulated by an ordinary computer. It might merely need exponentially more time in order to do so,
link |
00:37:36.320
right? So that's simply not good enough for him. Okay. So what he wants is for the brain to be a
link |
00:37:42.400
quantum gravitational computer, or, he wants the brain to be exploiting as yet unknown
link |
00:37:50.960
laws of quantum gravity. Okay. Which would be uncomputable. That's the key point. Okay.
link |
00:37:57.200
Yes. Yes. That would be literally uncomputable. And I've asked him, you know, to clarify this,
link |
00:38:02.720
but uncomputable even if you had an oracle for the halting problem, or, you know,
link |
00:38:09.680
as high up as you want to go in the usual hierarchy of uncomputability;
link |
00:38:15.520
he wants to go beyond all of that. Okay. So, you know, just to be clear,
link |
00:38:20.960
if we're keeping count of how many speculations, you know, there's probably like at least five or
link |
00:38:26.320
six of them, right? There's first of all, that there is some quantum gravity theory that would
link |
00:38:30.960
involve this kind of uncomputability, right? Most people who study quantum gravity would not agree
link |
00:38:36.480
with that. They would say that what we've learned, you know, what little we know about quantum
link |
00:38:41.360
gravity from this AdS/CFT correspondence, for example, has been very much consistent with
link |
00:38:48.160
the broad idea of nature being computable, right? But all right, supposing that he's
link |
00:38:55.600
right about that, then, you know, what most physicists would say is that whatever new
link |
00:39:01.920
phenomena there are in quantum gravity, you know, they might be relevant at the singularities of
link |
00:39:07.920
black holes. They might be relevant at the big bang. They are plainly not relevant to something
link |
00:39:15.600
like the brain, you know, that is operating at ordinary temperatures, you know, with ordinary
link |
00:39:21.920
chemistry and, you know, the physics underlying the brain, they would say that
link |
00:39:28.400
the fundamental physics of the brain we've pretty much
link |
00:39:32.800
completely known for generations now, right? Because, you know, quantum field theory lets us
link |
00:39:39.440
sort of parameterize our ignorance, right? I mean, Sean Carroll has made this case,
link |
00:39:44.720
you know, in great detail, right? That sort of whatever new effects are coming from quantum
link |
00:39:49.760
gravity, you know, they are sort of screened off by quantum field theory, right? And this is,
link |
00:39:55.120
brings us, you know, to the whole idea of effective theories, right?
link |
00:39:59.680
Like, in the standard model of elementary particles, right?
link |
00:40:04.480
We have a quantum field theory that seems totally adequate for all of the terrestrial phenomena,
link |
00:40:12.000
right? The only things that it doesn't, you know, explain are, well, first of all, you know,
link |
00:40:16.880
the details of gravity, if you were to probe it at, you know, extremes of
link |
00:40:23.440
curvature or like incredibly small distances, it doesn't explain dark matter. It doesn't explain
link |
00:40:29.760
black hole singularities, right? But these are all very exotic things, very, you know, far removed
link |
00:40:35.200
from our life on earth, right? So for Penrose to be right, he needs, you know, these phenomena to
link |
00:40:41.600
somehow affect the brain. He needs the brain to contain antennae that are sensitive to this as
link |
00:40:49.280
yet unknown physics, right? And then he needs a modification of quantum mechanics, okay? So he
link |
00:40:55.760
needs quantum mechanics to actually be wrong, okay? He needs, what he wants is what he calls
link |
00:41:02.560
an objective reduction mechanism or an objective collapse. So this is the idea that once quantum
link |
00:41:09.040
states get large enough, then they somehow spontaneously collapse, right? You know,
link |
00:41:17.680
and this is an idea that lots of people have explored. You know, there's something called the
link |
00:41:23.200
GRW proposal that tries to, you know, say something along those lines, you know, and these are
link |
00:41:29.200
theories that actually make testable predictions, right? Which is a nice feature that they have.
link |
00:41:34.320
But, you know, the very fact that they're testable may mean that, you know, in the coming
link |
00:41:39.360
decades, we may well be able to test these theories and show that they're wrong, right? You know, we
link |
00:41:45.200
may be able to test some of Penrose's ideas. If not his ideas about consciousness, then at
link |
00:41:50.800
least his ideas about an objective collapse of quantum states, right? And people have actually,
link |
00:41:56.560
like Dirk Bouwmeester, have actually been working to try to do these experiments. They haven't been
link |
00:42:01.520
able to do it yet to test Penrose's proposal, okay? But Penrose would need more than just
link |
00:42:07.280
an objective collapse of quantum states, which would already be the biggest development in
link |
00:42:11.920
physics for a century since quantum mechanics itself, okay? He would need for consciousness
link |
00:42:18.080
to somehow be able to influence the direction of the collapse so that it wouldn't be completely
link |
00:42:24.240
random, but that, you know, your dispositions would somehow influence the quantum state
link |
00:42:29.440
to collapse more likely this way or that way, okay? Finally, Penrose, you know, says that all
link |
00:42:36.160
of this has to be true because of an argument that he makes based on Gödel's incompleteness theorem,
link |
00:42:42.320
okay? Now, like the overwhelming majority of computer scientists and mathematicians
link |
00:42:49.040
who have thought about this, I don't think that Gödel's incompleteness theorem can do what he
link |
00:42:53.920
needs it to do here, right? I don't think that that argument is sound, okay? But that is, you know,
link |
00:43:00.000
that is sort of the tower that you have to ascend to if you're going to go where Penrose goes.
link |
00:43:04.560
And the intuition he uses with the incompleteness theorem is that basically
link |
00:43:09.440
that there's important stuff that's not computable? Is that where he takes it?
link |
00:43:13.360
It's not just that because, I mean, everyone agrees that there are problems that are uncomputable,
link |
00:43:18.000
right? That's a mathematical theorem, right? But what Penrose wants to say is that, you know,
link |
00:43:26.480
for example, there are statements, you know, given any formal system, you know, for doing math,
link |
00:43:33.920
right? There will be true statements of arithmetic that that formal system, you know,
link |
00:43:39.280
if it's adequate for math at all, if it's consistent and so on, will not be able to prove.
link |
00:43:44.640
A famous example being the statement that that system itself is consistent,
link |
00:43:49.600
right? You know, no good formal system can actually prove its own consistency.
link |
00:43:55.040
That can only be done from a stronger formal system, which then can't prove its own consistency
link |
00:44:00.480
and so on forever, okay? That's Gödel's theorem. But now, why is that relevant to consciousness,
link |
00:44:08.400
right? Well, you know, I mean, the idea that it might have something to do with consciousness
link |
00:44:13.360
is an old one. Gödel himself apparently thought that it did. You know, Lucas thought so, I think,
link |
00:44:22.160
in the 60s. And Penrose is really just, you know, sort of updating what they and others had said.
link |
00:44:29.600
I mean, you know, the idea that Gödel's theorem could have something to do with consciousness was,
link |
00:44:34.000
you know, in 1950, when Alan Turing wrote his article about the Turing test, he already, you
link |
00:44:40.800
know, was writing about that as like an old and well known idea and as a wrong one that he wanted
link |
00:44:47.600
to dispense with. Okay, but the basic problem with this idea is, you know, Penrose wants to say
link |
00:44:54.400
that, and all of his predecessors here want to say, that even though, you know,
link |
00:45:00.480
this given formal system cannot prove its own consistency, we as humans sort of looking at it
link |
00:45:07.680
from the outside can just somehow see its consistency, right? And the, you know, the rejoinder
link |
00:45:15.280
to that, you know, from the very beginning has been, well, can we really? I mean, maybe, you
link |
00:45:21.120
know, maybe he, Penrose can, but, you know, can the rest of us, right? And, you know, I noticed
link |
00:45:28.560
that, you know, I mean, it is perfectly plausible to imagine a computer that, you know,
link |
00:45:36.560
would not be limited to working within a single formal system, right? It could say,
link |
00:45:41.360
I am now going to adopt the hypothesis that my formal system is consistent, right? And I'm now
link |
00:45:47.760
going to see what can be done from that stronger vantage point and so on. And, you know, I'm
link |
00:45:52.400
going to add new axioms to my system. Totally plausible. Gödel's theorem
link |
00:45:58.640
has nothing to say against an AI that could repeatedly add new axioms. All it says is that
link |
00:46:05.440
there is no absolute guarantee that when the AI adds new axioms that it will always be right.
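Stated a bit more formally, in a standard presentation rather than anything spelled out in the conversation, the theorem and the tower of theories an axiom-adding AI could climb look like this:

```latex
% Goedel's second incompleteness theorem: if F is a consistent, recursively
% axiomatizable theory containing enough arithmetic, then
F \nvdash \mathrm{Con}(F).
% Nothing stops an AI (or a mathematician) from climbing the tower
F_{0} = \mathrm{PA}, \qquad F_{n+1} = F_{n} + \mathrm{Con}(F_{n}),
% each theory strictly stronger than the last. The theorem only says there is
% no internal guarantee that each added axiom \mathrm{Con}(F_{n}) is true,
% i.e., that F_{n} really is consistent.
```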
link |
00:46:12.320
Okay. And, you know, that's, of course, the point that Penrose pounces on,
link |
00:46:15.600
but the reply is obvious. And, you know, it's one that Alan Turing made 70 years ago. Namely,
link |
00:46:21.040
we don't have an absolute guarantee that we're right when we add a new axiom. We never have,
link |
00:46:26.400
and plausibly we never will. So on Alan Turing, you took part in the Loebner Prize?
link |
00:46:32.880
Not really. No, I didn't. I mean, there was this kind of ridiculous claim that was made
link |
00:46:39.600
almost a decade ago about a chatbot called Eugene Goostman.
link |
00:46:46.080
I guess you didn't participate as a judge in the Loebner Prize.
link |
00:46:48.640
I didn't.
link |
00:46:49.040
But you participated as a judge in that, I guess it was an exhibition event or something like that,
link |
00:46:54.160
or with Eugene...
link |
00:46:56.400
Eugene Goostman, that was just me writing a blog post because some journalist called me to ask
link |
00:47:01.280
about it.
link |
00:47:01.680
Did you ever chat with him? I thought that...
link |
00:47:03.200
I did chat with Eugene Goostman. I mean, it was available on the web.
link |
00:47:06.320
Oh, interesting. I didn't know that.
link |
00:47:07.600
So yeah. So all that happened was that a bunch of journalists started writing breathless articles
link |
00:47:14.400
about the first chatbot to pass the Turing test. And it was this thing called Eugene Goostman
link |
00:47:21.440
that was supposed to simulate a 13-year-old boy. And apparently someone had done some test where
link |
00:47:29.920
people were less than perfect, let's say, at distinguishing it from a human. And they said,
link |
00:47:36.080
well, if you look at Turing's paper and you look at the percentages that he talked about,
link |
00:47:42.320
then it seemed like we're past that threshold.
link |
00:47:45.600
And I had a different way to look at it instead of the legalistic way, like let's just try the
link |
00:47:53.520
actual thing out and let's see what it can do with questions like, is Mount Everest bigger
link |
00:47:59.760
than a shoebox? Or just like the most obvious questions. And the answer is, well, it just kind
link |
00:48:08.160
of parries you because it doesn't know what you're talking about.
link |
00:48:10.720
So just to clarify exactly in which way they're obvious. They're obvious in the sense that
link |
00:48:17.360
you convert the sentences into the meaning of the objects they represent and then do some basic
link |
00:48:23.600
obvious common sense reasoning with the objects that the sentences represent.
link |
00:48:29.120
Right. It was not able to answer or even intelligently respond to basic common sense
link |
00:48:35.040
questions. But let me say something stronger than that. There was a famous chatbot in the 60s
link |
00:48:39.920
called Eliza that managed to actually fool a lot of people. Or people would pour their hearts out
link |
00:48:48.160
into this Eliza because it simulated a therapist. And most of what it would do is it would just
link |
00:48:54.000
throw back at you whatever you said. And this turned out to be incredibly effective.
link |
00:49:00.720
Maybe therapists know this. This is one of their tricks. But it really had some people convinced.
link |
00:49:10.880
But this thing was just like, I think it was literally just a few hundred lines of Lisp code.
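The reflect-it-back trick is easy to reproduce; here is a minimal Python sketch in that spirit. The patterns are illustrative only, not Weizenbaum's actual script, which was a much richer keyword-ranked program:

```python
# A minimal ELIZA-style therapist: pattern match, then throw the user's own
# words back at them with the pronouns flipped.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

def reflect(text):
    """Flip first/second person so the reply mirrors the user."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {}."),
]

def respond(line):
    for pattern, template in RULES:
        m = pattern.match(line.strip())
        if m:
            return template.format(reflect(m.group(1)))
    return "Please, go on."  # default: just prompt for more

print(respond("I feel alone these days"))  # Why do you feel alone these days?
```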
link |
00:49:17.120
Not only was it not intelligent, it wasn't especially sophisticated. It was
link |
00:49:22.480
like a simple little hobbyist program. And Eugene Goostman, from what I could see,
link |
00:49:27.840
was not a significant advance compared to Eliza. And that was really the point I was making.
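To make the trick concrete, here is a minimal Python sketch of the kind of pattern-and-reflection program Scott is describing. The rules and pronoun table below are invented for illustration; Weizenbaum's original used a much larger script of ranked decomposition and reassembly rules.

```python
import re

# Hypothetical pronoun swaps and rules, invented for this sketch; the
# real program used a much larger script of ranked rewrite rules.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(text):
    # Swap first- and second-person words so the reply mirrors the speaker.
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more about {0}."),
]

def respond(utterance):
    # Throw the user's own words back at them, lightly rephrased.
    for pattern, template in RULES:
        match = pattern.match(utterance.strip().rstrip("."))
        if match:
            return template.format(reflect(match.group(1)))

print(respond("I feel ignored by my family"))
# -> Why do you feel ignored by your family?
```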
link |
00:49:38.560
In some sense, you didn't need a computer science professor to sort of say this. Anyone who was
link |
00:49:45.520
looking at it and who just had an ounce of sense could have said the same thing.
link |
00:49:50.560
But because these journalists were calling me, the first thing I said was,
link |
00:49:58.320
well, I'm a quantum computing person. I'm not an AI person. You shouldn't ask me. Then they said,
link |
00:50:04.640
look, you can go here and you can try it out. I said, all right. All right. So I'll try it out.
link |
00:50:10.800
This whole discussion, it got a whole lot more interesting in just the last few months.
link |
00:50:15.600
Yeah. I'd love to hear your thoughts about GPT-3. In the last few months, the world has now seen
link |
00:50:24.400
a chat engine, or a text engine, I should say, called GPT-3. I think it still does not pass
link |
00:50:33.920
a Turing test. There are no real claims that it passes the Turing test. This comes out of the
link |
00:50:40.880
group at OpenAI, and they've been relatively careful in what they've claimed about the system.
link |
00:50:47.280
But I think as clearly as Eugene Goostman was not an advance over Eliza, it is equally clear that
link |
00:50:56.960
this is a major advance over Eliza or really over anything that the world has seen before.
link |
00:51:03.040
This is a text engine that can come up with kind of on topic, reasonable sounding completions to
link |
00:51:12.480
just about anything that you ask. You can ask it to write a poem about topic X in the style of poet
link |
00:51:20.240
Y and it will have a go at that. And it will do not a great job, not an amazing job, but a passable
link |
00:51:29.040
job. Definitely as good as, in many cases, I would say better than I would have done.
link |
00:51:37.600
You can ask it to write an essay, like a student essay, about pretty much any topic and it will
link |
00:51:43.760
get something that I am pretty sure would get at least a B minus in most high school or
link |
00:51:50.080
even college classes. And in some sense, the way that it did this, the way that it achieves this,
link |
00:51:56.320
Scott Alexander of the much mourned blog, Slate Star Codex, had a wonderful way of putting it.
link |
00:52:03.760
He said that they basically just ground up the entire internet into a slurry.
link |
00:52:10.400
And to tell you the truth, I had wondered for a while why nobody had tried that. Why not write
link |
00:52:16.640
a chat bot by just doing deep learning over a corpus consisting of the entire web? And so
link |
00:52:24.880
now they finally have done that. And the results are very impressive. People
link |
00:52:35.280
can argue about whether this is truly a step toward general AI or not, but this is an amazing
link |
00:52:41.440
capability that we didn't have a few years ago. A few years ago, if you had told me that we would
link |
00:52:50.720
have it now, that would have surprised me. And I think that anyone who denies that is just not
link |
00:52:55.840
engaging with what's there. So their model, it takes a large part of the internet and compresses
link |
00:53:02.720
it in a small number of parameters relative to the size of the internet and is able to, without
link |
00:53:10.480
fine tuning, do a basic kind of a querying mechanism, just like you described where you
link |
00:53:16.880
specify a kind of poet and then you want to write a poem. And it somehow is able to do basically a
link |
00:53:21.520
lookup on the internet of relevant things. How else do you explain it?
link |
00:53:27.440
Well, okay. The training involved massive amounts of data from the internet and actually took
link |
00:53:34.080
lots and lots of computer power, lots of electricity. There are some very prosaic
link |
00:53:40.000
reasons why this wasn't done earlier. But it cost some tens of millions of dollars, I think.
link |
00:53:46.720
Less, but approximately like a few million dollars.
link |
00:53:49.440
Oh, okay. Oh, really? Okay.
link |
00:53:51.360
It's more like four or five.
link |
00:53:53.600
Oh, all right. All right. Thank you. I mean, as they scale it up, it will...
link |
00:53:57.440
It'll cost, but then the hope is cost comes down and all that kind of stuff.
link |
00:54:02.320
But basically, it is a neural net or what's now called a deep net,
link |
00:54:09.040
but they're basically the same thing. So it's a form of algorithm that people
link |
00:54:15.520
have known about for decades. But it is constantly trying to solve the problem,
link |
00:54:21.920
predict the next word. So it's just trying to predict what comes next. It's not trying to
link |
00:54:30.080
decide what it should say, what ought to be true. It's trying to predict what someone who had said
link |
00:54:37.120
all of the words up to the preceding one would say next.
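As an illustration of that training objective, here is a toy next-word predictor in Python. The bigram counting below is a deliberately crude stand-in; GPT-3 is a transformer trained by gradient descent, but the objective, predict the next token given the ones before it, is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "the entire internet"; purely illustrative.
corpus = "the cat sat on the mat and the cat slept".split()

# Count next-word frequencies: the crudest possible version of the
# objective, "predict what word comes next given the words so far".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the likeliest continuation under the toy model.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' ('cat' follows 'the' twice, 'mat' once)
```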
link |
00:54:40.720
Although to push back on that, that's how it's trained.
link |
00:54:43.440
That's right. No, of course.
link |
00:54:45.280
It's arguable that our very cognition could be a mechanism that simple.
link |
00:54:50.480
Oh, of course. Of course. I never said that it wasn't.
link |
00:54:52.960
Right. But...
link |
00:54:54.960
Yeah. I mean, and sometimes that is... If there is a deep philosophical question that's
link |
00:55:00.400
raised by GPT-3, then that is it, right? Are we doing anything other than this predictive
link |
00:55:06.320
processing, just constantly trying to fill in a blank of what would come next
link |
00:55:12.000
after what we just said up to this point? Is that what I'm doing right now?
link |
00:55:16.560
It's impossible to say. So the intuition that a lot of people have is, well, look,
link |
00:55:20.480
this thing is not going to be able to reason, hence the Mount Everest question.
link |
00:55:24.800
Do you think it's possible that GPT-5, 6, and 7 would be able to, with this exact same process,
link |
00:55:31.600
begin to do something that looks like... Is indistinguishable to us humans from reasoning?
link |
00:55:38.720
I mean, the truth is that we don't really know what the limits are, right?
link |
00:55:42.960
Right. Exactly.
link |
00:55:44.000
Because what we've seen so far is that GPT-3 was basically the same thing as GPT-2,
link |
00:55:51.120
but just with a much larger network, more training time, bigger training corpus,
link |
00:55:59.360
right? And it was very noticeably better than its immediate predecessor.
link |
00:56:05.680
So we don't know where you hit the ceiling here, right? I mean, that's the amazing part and maybe
link |
00:56:12.320
also the scary part, right? Now, my guess would be that at some point, there has to be diminishing
link |
00:56:19.840
returns. It can't be that simple, can it? Right? But I wish that I had more to base that guess on.
link |
00:56:27.520
Right. Yeah. I mean, some people say that there will be a limitation on the...
link |
00:56:31.360
We're going to hit a limit on the amount of data that's on the internet.
link |
00:56:34.640
Yes. Yeah. So sure. So there's certainly that limit. I mean, there's also...
link |
00:56:41.600
If you are looking for questions that will stump GPT-3, you can come up with some without...
link |
00:56:48.320
Even getting it to learn how to balance parentheses, right? It doesn't do such a great job,
link |
00:56:55.680
right? And its failures are ironic, right? Like basic arithmetic, right?
link |
00:57:04.000
And you think, isn't that what computers are supposed to be best at? Isn't that where
link |
00:57:08.560
computers already had us beat a century ago? Right? And yet that's where GPT-3 struggles,
link |
00:57:14.880
right? But it's amazing that it's almost like a young child in that way, right? But somehow,
link |
00:57:23.840
because it is just trying to predict what comes next, it doesn't know when it should stop doing
link |
00:57:30.640
that and start doing something very different, like some more exact logical reasoning, right?
link |
00:57:36.240
And so one is naturally led to guess that our brain sort of has some element of predictive
link |
00:57:45.920
processing, but that it's coupled to other mechanisms, right? That it's coupled to,
link |
00:57:50.800
first of all, visual reasoning, which GPT-3 also doesn't have any of, right?
link |
00:57:55.120
Although there's some demonstration that there's a lot of promise there using...
link |
00:57:58.560
Oh yeah, it can complete images. That's right.
link |
00:58:00.880
And using the exact same kind of transformer mechanisms to, like, watch videos on YouTube.
link |
00:58:06.160
And so the same self supervised mechanism to be able to look,
link |
00:58:11.280
it'd be fascinating to think what kind of completions you could do.
link |
00:58:14.240
Oh yeah, no, absolutely. Although, like, if we ask it, you know,
link |
00:58:17.840
a word problem that involves reasoning about the locations of things in space,
link |
00:58:22.400
I don't think it does such a great job on those, right? To take an example. And so
link |
00:58:26.160
the guess would be, well, you know, humans have a lot of predictive processing,
link |
00:58:31.120
a lot of just filling in the blanks, but we also have these other mechanisms that we can
link |
00:58:35.360
couple to, or that we can sort of call as subroutines when we need to.
link |
00:58:39.680
And that maybe, you know, to go further, that one would want to integrate other forms of reasoning.
link |
00:58:46.800
Let me go on to another topic that is amazing, which is complexity.
link |
00:58:52.240
And then start with the most absurdly romantic question of what's the most beautiful idea in
link |
00:59:00.640
computer science or theoretical computer science to you? Like what just early on in your life,
link |
00:59:05.760
or in general, have captivated you and just grabbed you?
link |
00:59:08.560
I think I'm going to have to go with the idea of universality. You know,
link |
00:59:13.280
if you're really asking for the most beautiful. I mean, so universality is the idea that, you know,
link |
00:59:20.160
you put together a few simple operations, like in the case of Boolean logic, that might be the AND
link |
00:59:27.680
gate, the OR gate, the NOT gate, right? And then your first guess is, okay, this is a good start,
link |
00:59:33.520
but obviously, as I want to do more complicated things, I'm going to need more complicated building
link |
00:59:38.960
blocks to express that, right? And that was actually my guess when I first learned what
link |
00:59:44.080
programming was. I mean, when I was, you know, an adolescent and someone showed me Apple BASIC,
link |
00:59:50.800
and then, you know, GW-BASIC, if anyone listening remembers that. Okay. But, you know,
link |
00:59:57.920
I thought, okay, well, now, you know, I mean, I felt like this was a revelation. You know,
link |
01:00:03.760
it's like finding out where babies come from. It's like that level of, you know, why didn't
link |
01:00:08.000
anyone tell me this before, right? But I thought, okay, this is just the beginning. Now I know how
link |
01:00:12.800
to write a BASIC program, but, you know, to really write an interesting program, like, you know,
link |
01:00:18.640
a video game, which had always been my dream as a kid to, you know, create my own Nintendo games,
link |
01:00:24.400
right? You know, but, you know, obviously I'm going to need to learn some way more complicated
link |
01:00:29.360
form of programming than that. Okay. But, you know, eventually I learned this incredible idea
link |
01:00:35.440
of universality. And that says that, no, you throw in a few rules and then you already have
link |
01:00:42.400
enough to express everything. Okay. So for example, the AND, the OR and the NOT gate can all,
link |
01:00:48.960
or in fact, even just the AND and the NOT gate, or even just the NAND gate, for example,
link |
01:00:55.040
is already enough to express any Boolean function on any number of bits. You just have to string
link |
01:01:00.480
together enough of them. You can build a universe with NAND gates. You can build the universe out of
link |
01:01:04.800
NAND gates. Yeah. You know, the simple instructions of BASIC are already enough, at least in principle,
link |
01:01:12.640
you know, if we ignore details like how much memory can be accessed and stuff like that,
link |
01:01:17.840
that is enough to express what could be expressed by any programming language whatsoever.
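Returning to the gate example for a moment, here is a small Python sketch of that universality claim: AND, OR, and NOT each rebuilt from NAND alone. The constructions are the standard ones; the function names are just for this sketch.

```python
def nand(a, b):
    # The single primitive gate.
    return not (a and b)

# The standard constructions of the other gates from NAND alone:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

# Exhaustively check the constructions against Python's built-ins.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
print("AND and OR recovered from NAND alone")
```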
link |
01:01:22.800
And the way to prove that is very simple. We simply need to show that in BASIC or whatever,
link |
01:01:28.240
we could write an interpreter or a compiler for whatever other programming language we care about,
link |
01:01:35.040
like C or Java or whatever. And as soon as we had done that, then ipso facto, anything that's
link |
01:01:41.360
expressible in C or Java is also expressible in BASIC. Okay. And so this idea of universality,
link |
01:01:49.520
you know, goes back at least to Alan Turing in the 1930s when, you know, he
link |
01:01:54.720
wrote down this incredibly simple pared down model of a computer, the Turing machine, right,
link |
01:02:01.040
which, you know, he pared down the instruction set to just read a symbol, you know, write a symbol,
link |
01:02:08.800
move to the left, move to the right, halt, change your internal state, right? That's it. Okay.
link |
01:02:15.440
And then he proved that, you know, this could simulate all kinds of other things, you know,
link |
01:02:22.160
and so in fact, today we would say, well, we would call it a Turing universal model of computation
link |
01:02:28.560
that is, you know, it has just the same expressive power that BASIC or Java or C++ or any
link |
01:02:37.680
of those other languages have because anything in those other languages could be compiled down
link |
01:02:43.600
to a Turing machine. Now, Turing also proved a different related thing, which is that there is
link |
01:02:48.880
a single Turing machine that can simulate any other Turing machine if you just describe that
link |
01:02:57.360
other machine on its tape, right? And likewise, there is a single Turing machine that will run
link |
01:03:03.120
any C program, you know, if you just put it on its tape. That's a second meaning of universality.
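Here is a minimal sketch of that second meaning of universality: one simulator that runs any Turing machine handed to it as data. The machine description format and the example "flipper" machine are my own toy choices, not Turing's notation.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=100):
    """One simulator that runs *any* machine handed to it as data.
    `rules` maps (state, symbol) -> (new_state, written_symbol, move)."""
    cells, head = dict(enumerate(tape)), 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += {"L": -1, "R": +1}[move]
    return "".join(cells[i] for i in sorted(cells))

# An example machine (a toy of my own): flip every bit, then halt.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(flipper, "10110"))  # -> 01001_
```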
link |
01:03:08.960
First of all, he could only visualize it, and that was in the 30s.
link |
01:03:12.320
Yeah, the 30s. That's right.
link |
01:03:13.600
That's before computers really existed. I mean, I don't know, I wonder what that felt like,
link |
01:03:21.120
you know, learning that there's no Santa Claus or something. Because I don't know if that's
link |
01:03:27.760
empowering or paralyzing because it doesn't give you any, it's like you can't write a software
link |
01:03:34.800
engineering book and make that the first chapter and say we're done.
link |
01:03:38.320
Well, I mean, right. I mean, in one sense, it was this enormous flattening of the universe.
link |
01:03:44.320
Yes.
link |
01:03:44.800
I had imagined that there was going to be some infinite hierarchy of more and more powerful
link |
01:03:50.320
programming languages, you know, and then I kicked myself for having such a stupid idea.
link |
01:03:55.440
But apparently, Gödel had had the same conjecture in the 30s.
link |
01:03:58.800
Oh, good. You're in good company.
link |
01:04:00.880
Yeah. And then Gödel read Turing's paper and he kicked himself and he said, yeah, I was completely
link |
01:04:10.000
wrong about that. But I had thought that maybe where I can contribute will be to invent a new
link |
01:04:17.760
more powerful programming language that lets you express things that could never be expressed in
link |
01:04:22.800
BASIC. And how would you do that? Obviously, you couldn't do it in BASIC itself. But there
link |
01:04:30.640
is this incredible flattening that happens once you learn what universality is. But then it's also
link |
01:04:39.200
an opportunity because it means once you know these rules, then the sky is the limit, right?
link |
01:04:44.720
Then you have kind of the same weapons at your disposal that the world's greatest programmer has.
link |
01:04:51.440
It's now all just a question of how you wield them.
link |
01:04:54.240
Right. Exactly. So every problem is solvable, but some problems are harder than others.
link |
01:05:00.960
Well, yeah, there's the question of how much time, you know, of how hard is it to write a program?
link |
01:05:06.960
And then there's also the questions of what resources does the program need? You know,
link |
01:05:11.280
how much time, how much memory? Those are much more complicated questions. Of course,
link |
01:05:15.360
ones that we're still struggling with today.
link |
01:05:17.360
Exactly. So you've, I don't know if you created the Complexity Zoo or...
link |
01:05:21.200
I did create the Complexity Zoo.
link |
01:05:23.120
What is it? What's complexity?
link |
01:05:24.880
Oh, all right, all right, all right. Complexity theory is the study of sort of the
link |
01:05:29.920
inherent resources needed to solve computational problems, okay? So it's easiest to give an example.
link |
01:05:38.560
Like, let's say we want to add two numbers, right? If I want to add them, you know, if the numbers
link |
01:05:47.040
are twice as long, then it only, it will take me twice as long to add them, but only twice as long,
link |
01:05:52.480
right? It's no worse than that.
link |
01:05:54.480
Or a computer.
link |
01:05:55.440
For a computer or for a person using pencil and paper, for that matter.
link |
01:05:59.120
If you have a good algorithm.
link |
01:06:00.400
Yeah, that's right. I mean, even if you just use the elementary school algorithm of just carrying,
link |
01:06:05.440
you know, then it takes time that is linear in the length of the numbers, right? Now,
link |
01:06:10.640
multiplication, if you use the elementary school algorithm, is harder because you have to multiply
link |
01:06:17.040
each digit of the first number by each digit of the second one. And then deal with all the
link |
01:06:22.000
carries. So that's what we call a quadratic time algorithm, right? If the numbers become twice as
link |
01:06:28.800
long, now you need four times as much time, okay? So now, as it turns out, people discovered much
link |
01:06:38.000
faster ways to multiply numbers using computers. And today we know how to multiply two numbers
link |
01:06:45.040
that are n digits long using a number of steps that's nearly linear in n.
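A tiny sketch of the operation counting behind these claims: schoolbook multiplication on n-digit inputs does n times n single-digit multiplications, so doubling the length quadruples the work. The digit-list representation here is just for illustration.

```python
def schoolbook_multiply(x, y):
    """Multiply two numbers given as digit lists (most significant digit
    first), counting single-digit multiplications along the way."""
    ops, result = 0, 0
    for i, a in enumerate(reversed(x)):
        for j, b in enumerate(reversed(y)):
            result += a * b * 10 ** (i + j)  # carries folded in for brevity
            ops += 1
    return result, ops

for n in (2, 4, 8):
    _, ops = schoolbook_multiply([1] * n, [1] * n)  # 11, 1111, 11111111, ...
    print(f"{n}-digit inputs: {ops} digit multiplications")
# 2 -> 4, 4 -> 16, 8 -> 64: doubling the length quadruples the work,
# whereas schoolbook addition would only double (one pass per column).
```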
link |
01:06:50.960
These are questions you can ask. But now, let's think about a different thing that people, you know, have encountered
link |
01:06:56.160
in elementary school, factoring a number. Okay? Take a number and find its prime factors, right?
link |
01:07:03.040
And here, you know, if I give you a number with ten digits, I ask you for its prime factors.
link |
01:07:08.640
Well, maybe it's even, so you know that two is a factor. You know, maybe it ends in zero,
link |
01:07:13.600
so you know that ten is a factor, right? But, you know, other than a few obvious things like that,
link |
01:07:18.880
you know, if the prime factors are all very large, then it's not clear how you even get started,
link |
01:07:24.320
right? You know, it seems like you have to do an exhaustive search among an enormous number of
link |
01:07:29.360
factors. Now, and as many people might know, for better or worse, the security, you know,
link |
01:07:39.280
of most of the encryption that we currently use to protect the internet is based on the belief,
link |
01:07:45.200
and this is not a theorem, it's a belief, that factoring is an inherently hard problem
link |
01:07:52.000
for our computers. We do know algorithms that are better than just trial division, than just trying
link |
01:07:58.000
all the possible divisors, but they are still basically exponential. And exponential is hard.
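For concreteness, here is trial division, the naive factoring method being contrasted here. For a d-digit number it takes on the order of 10 to the d/2 steps, which is exponential in d, and that is why it is hopeless at cryptographic key sizes.

```python
def trial_division(n):
    """Factor n by trying every divisor up to sqrt(n). For a d-digit
    number this is on the order of 10**(d/2) steps: exponential in d,
    which is why it is hopeless at cryptographic key sizes."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(trial_division(2021))  # -> [43, 47]
```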
link |
01:08:05.840
Yeah, exactly. So the fastest algorithms that anyone has discovered, at least publicly
link |
01:08:11.520
discovered, you know, I'm assuming that the NSA doesn't know something better,
link |
01:08:15.520
okay? But they take time that basically grows exponentially with the cube root of the size of
link |
01:08:21.920
the number that you're factoring, right? So that cube root, that's the part that takes all the
link |
01:08:26.800
cleverness, okay? But there's still an exponential. There's still an exponentiality there. But what
link |
01:08:31.600
that means is that, like, when people use a thousand bit keys for their cryptography,
link |
01:08:37.360
that can probably be broken using the resources of the NSA or the world's other intelligence
link |
01:08:42.800
agencies. You know, people have done analyses that say, you know, with a few hundred million
link |
01:08:47.600
dollars of computer power, they could totally do this. And if you look at the documents that Snowden
link |
01:08:53.120
released, you know, it looks a lot like they are doing that or something like that. It would kind
link |
01:08:59.360
of be surprising if they weren't, okay? But, you know, if that's true, then in some ways that's
link |
01:09:05.520
reassuring. Because if that's the best that they can do, then that would say that they can't break
link |
01:09:10.000
2,000 bit numbers, right? Then 2,000 bit numbers would be beyond what even they could do.
link |
01:09:16.960
They haven't found an efficient algorithm. That's where all the worries and the concerns of quantum
link |
01:09:21.600
computing came in, that there could be some kind of shortcut around that.
link |
01:09:24.400
Right. So complexity theory is a huge part of, let's say, the theoretical core of computer
link |
01:09:31.920
science. You know, it started in the 60s and 70s as, you know, sort of an autonomous field. So it
link |
01:09:39.280
was, you know, already well developed even by the time that
link |
01:09:45.120
I was born, okay? But in 2002, I made a website called the Complexity Zoo, to answer your question,
link |
01:09:54.880
where I just tried to catalog the different complexity classes, which are classes of problems
link |
01:10:01.360
that are solvable with different kinds of resources, okay? So these are kind of, you know,
link |
01:10:06.960
you could think of complexity classes as like being almost to theoretical computer science,
link |
01:10:13.200
like what the elements are to chemistry, right? They're sort of, you know, there are our most
link |
01:10:18.320
basic objects in a certain way. I feel like the elements
link |
01:10:25.120
have a characteristic to them where you can't just add an infinite number.
link |
01:10:29.200
Well, you could, but beyond a certain point, they become unstable, right? Right. So it's like,
link |
01:10:34.960
you know, in theory, you can have atoms with... and look, I mean,
link |
01:10:39.040
a neutron star, you know, is a nucleus with, you know, untold billions of neutrons in it,
link |
01:10:48.880
of hadrons in it, okay? But, you know, for sort of normal atoms, right, probably you can't get
link |
01:10:56.400
much above an atomic weight of 150 or so, or sorry, I mean, beyond 150 or so protons
link |
01:11:04.320
without it, you know, very quickly fissioning. With complexity classes, well, yeah, you can have
link |
01:11:10.240
an infinity of complexity classes, but, you know, maybe there's only a finite number of them that
link |
01:11:16.080
are particularly interesting, right? Just like with anything else, you know, you care about
link |
01:11:21.680
some more than about others. So what kind of interesting classes are there? I mean,
link |
01:11:25.920
you could have just, maybe say, what are the, if you take any kind of computer science class,
link |
01:11:31.040
what are the classes you learn? Good. Let me tell you sort of the biggest ones,
link |
01:11:36.400
the ones that you would learn first. So, you know, first of all, there is P, that's what it's called,
link |
01:11:41.840
okay? It stands for polynomial time. And this is just the class of all of the problems that you
link |
01:11:47.840
could solve with a conventional computer, like your iPhone or your laptop, you know,
link |
01:11:54.240
by a completely deterministic algorithm, right? Using a number of steps that grows only like the
link |
01:12:01.680
size of the input raised to some fixed power, okay? So, if your algorithm is linear time,
link |
01:12:09.280
like, you know, for adding numbers, okay, that problem is in P. If you have an algorithm that's
link |
01:12:14.800
quadratic time, like the elementary school algorithm for multiplying two numbers, that's also
link |
01:12:20.480
in P, even if it was the size of the input to the 10th power or to the 50th power, well, that wouldn't
link |
01:12:26.800
be very good in practice. But, you know, formally, we would still count that, that would still be in
link |
01:12:32.160
P, okay? But if your algorithm takes exponential time, meaning like if every time I add one more
link |
01:12:41.520
data point to your input, if the time needed by the algorithm doubles, if you need time like two
link |
01:12:48.560
to the power of the amount of input data, then that we call an exponential time algorithm, okay?
link |
01:12:56.320
And that is not polynomial, okay? So, P is all of the problems that have some polynomial time
link |
01:13:03.120
algorithm, okay? So, that includes most of what we do with our computers on a day to day basis,
link |
01:13:09.040
you know, all the, you know, sorting, basic arithmetic, you know, whatever is going on in
link |
01:13:14.320
your email reader or in Angry Birds, okay? It's all in P. Then the next super important class
link |
01:13:21.840
is called NP. That stands for nondeterministic polynomial, okay? It does not stand for not
link |
01:13:28.400
polynomial, which is a common confusion. But NP was basically all of the problems
link |
01:13:35.440
where if there is a solution, then it is easy to check the solution if someone shows it to you,
link |
01:13:41.920
okay? So, actually a perfect example of a problem in NP is factoring, the one I told you about
link |
01:13:48.880
before. Like if I gave you a number with thousands of digits and I told you that, you know, I asked
link |
01:13:56.240
you, does this have at least three nontrivial divisors, right? That might be a super hard problem
link |
01:14:05.120
to solve, right? It might take you millions of years using any algorithm that's known, at least
link |
01:14:09.920
running on our existing computers, okay? But if I simply showed you the divisors, I said,
link |
01:14:16.000
here are three divisors of this number, then it would be very easy for you to ask your computer
link |
01:14:22.080
to just check each one and see if it works. Just divide it in, see if there's any remainder,
link |
01:14:27.520
right? And if they all go in, then you've checked that, yes, there were, right? So any problem
link |
01:14:35.040
where, you know, whenever there's a solution, there is a short witness that can be easily,
link |
01:14:40.480
like a polynomial size witness that can be checked in polynomial time, that we call an NP problem,
link |
01:14:48.000
okay? And yeah, so every problem that's in P is also in NP, right? Because, you know, you could
link |
01:14:55.440
always just ignore the witness and just, you know, if a problem is in P, you can just solve it
link |
01:14:59.520
yourself, okay? But now, in some sense, the central, you know, mystery of theoretical computer science
link |
01:15:07.200
is: is every NP problem also in P? So if you can easily check the answer to a computational problem,
link |
01:15:15.200
does that mean that you can also easily find the answer?
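The "easy to check" half of that question can be made concrete with Scott's factoring example: a polynomial-time verifier for a claimed witness. The function below is a sketch; the witness format, a list of claimed divisors, is just one natural choice.

```python
def verify_divisors(n, witness):
    """Polynomial-time verifier for 'n has at least three nontrivial
    divisors': check each claimed divisor with a single division.
    Finding a witness may be hard; checking one is easy."""
    return (
        len(set(witness)) >= 3
        and all(1 < d < n and n % d == 0 for d in witness)
    )

# 30 = 2 * 3 * 5, so 2, 3, and 5 are all nontrivial divisors of 30.
print(verify_divisors(30, [2, 3, 5]))  # -> True
print(verify_divisors(30, [2, 3, 7]))  # -> False (7 does not divide 30)
```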
link |
01:15:18.080
Even though there are all these problems where it appears to be very difficult to find the answer,
link |
01:15:23.600
it's still an open question whether a good algorithm exists.
link |
01:15:26.880
Because no one has proven that there's no way to do it.
link |
01:15:29.680
It's arguably the most, I don't know, the most famous, the most maybe interesting,
link |
01:15:36.560
maybe you disagree with that, problem in theoretical computer science. So what's your
link |
01:15:40.000
The most famous, for sure.
link |
01:15:41.280
P equals NP. If you were to bet all your money, where do you put your money?
link |
01:15:45.280
That's an easy one. P is not equal to NP. I like to say that if we were physicists,
link |
01:15:49.840
we would have just declared that to be a law of nature, you know, just like thermodynamics.
link |
01:15:54.560
That's hilarious.
link |
01:15:55.680
Given ourselves Nobel Prizes for its discovery. Yeah, you know, and look, if later it turned out
link |
01:16:01.280
that we were wrong, we just give ourselves more Nobel Prizes.
link |
01:16:04.560
So harsh, but so true.
link |
01:16:09.280
I mean, no, it's really just because we are mathematicians or descended
link |
01:16:14.720
from mathematicians, you know, we have to call things conjectures that other people
link |
01:16:19.760
would just call empirical facts or discoveries, right?
link |
01:16:23.280
But one shouldn't read more into that difference in language, you know,
link |
01:16:26.960
about the underlying truth.
link |
01:16:28.800
So, okay, so you're a good investor and good spender of money. So then let me ask another
link |
01:16:33.760
way. Is it possible at all? And what would that look like if P indeed equals NP?
link |
01:16:41.680
Well, I do think that it's possible. I mean, in fact, you know, when people really pressed
link |
01:16:45.360
me on my blog for what odds would I put, I put, you know, two or three percent odds.
link |
01:16:50.320
Wow, that's pretty good.
link |
01:16:51.200
That P equals NP. Yeah. Well, because, you know, I mean, you really have to think
link |
01:16:57.200
about, like, if there were 50, you know, mysteries like P versus NP, and if I made a guess about
link |
01:17:04.160
every single one of them, would I expect to be right 50 times? Right? And the truthful
link |
01:17:09.040
answer is no. Okay.
link |
01:17:10.560
Yeah.
link |
01:17:11.040
So, you know, and that's what you really mean in saying that, you know, you have, you know,
link |
01:17:16.560
better than 98% odds for something. Okay. But so, yeah, you know, I mean, there could
link |
01:17:22.640
certainly be surprises. And look, if P equals NP, well, then there would be the further
link |
01:17:27.920
question of, you know, is the algorithm actually efficient in practice? Right? I mean, Don
link |
01:17:33.920
Knuth, who I know that you've interviewed as well, right, he likes to conjecture that
link |
01:17:39.440
P equals NP, but that the algorithm is so inefficient that it doesn't matter anyway.
link |
01:17:44.720
Right?
link |
01:17:45.200
No, I don't know. I've listened to him say that. I don't know whether he says that just
link |
01:17:50.160
because he has an actual reason for thinking it's true or just because it sounds cool.
link |
01:17:54.400
Yeah.
link |
01:17:54.640
Okay. But, you know, that's a logical possibility, right, that the algorithm could be n to the
link |
01:18:00.960
10,000 time, or it could even just be n squared time, but with a leading constant of, it could
link |
01:18:06.880
be a googol times n squared or something like that. And in that case, the fact that P equals
link |
01:18:12.080
NP, well, it would ravage the whole theory of complexity. We would have to rebuild from
link |
01:18:19.840
the ground up. But in practical terms, it might mean very little, right, if the algorithm
link |
01:18:25.680
was too inefficient to run. If the algorithm could actually be run in practice, like if
link |
01:18:31.680
it had small enough constants, or if you could improve it to where it had small enough constants
link |
01:18:38.000
that was efficient in practice, then that would change the world. Okay?
link |
01:18:42.400
You think it would have, like, what kind of impact would it have?
link |
01:18:44.320
Well, okay, I mean, here's an example. I mean, you could, well, okay, just for starters,
link |
01:18:49.600
you could break basically all of the encryption that people use to protect the internet.
link |
01:18:53.600
That's just for starters.
link |
01:18:54.480
You could break Bitcoin and every other cryptocurrency, or, you know,
link |
01:18:58.800
mine as much Bitcoin as you wanted, right? You know, become a super duper billionaire,
link |
01:19:06.480
right? And then plot your next move.
link |
01:19:09.040
Right. That's just for starters. That's a good point.
link |
01:19:11.280
Now, your next move might be something like, you know, you now have, like, a theoretically
link |
01:19:16.960
optimal way to train any neural network, to find parameters for any neural network, right?
link |
01:19:22.240
So you could now say, like, is there any small neural network that generates the entire content
link |
01:19:27.840
of Wikipedia, right? And now the question is not, can you find it? The
link |
01:19:33.280
question has been reduced to, does that exist or not? If it does exist, then the answer would be,
link |
01:19:39.120
yes, you can find it, okay? If you had this algorithm in your hands, okay?
link |
01:19:44.400
You could ask your computer, you know, I mean, P versus NP is one of these seven problems that
link |
01:19:50.000
carries this million dollar prize from the Clay Mathematics Institute. You know, if you solve it,
link |
01:19:54.880
you know, and others are the Riemann hypothesis, the Poincaré conjecture, which was solved,
link |
01:20:00.640
although the solver turned down the prize, right, and four others. But what I like to say,
link |
01:20:06.320
the way that we can see that P versus NP is the biggest of all of these questions
link |
01:20:11.200
is that if you had this fast algorithm, then you could solve all seven of them,
link |
01:20:15.680
okay? You just ask your computer, you know, is there a short proof of the Riemann hypothesis,
link |
01:20:20.880
right? You know, in a language where a machine could verify it,
link |
01:20:25.200
and provided that such a proof exists, then your computer finds it
link |
01:20:28.560
in a short amount of time without having to do a brute force search, okay? So, I mean,
link |
01:20:33.120
those are the stakes of what we're talking about. But I hope that also helps to give your listeners
link |
01:20:38.560
some intuition of why I and most of my colleagues would put our money on P not equaling NP.
link |
01:20:46.080
Is it possible, I apologize, this is a really dumb question, but is it possible
link |
01:20:50.400
that a proof will come out that P equals NP, but an algorithm that makes P equals NP
link |
01:20:59.360
is impossible to find? Is that like crazy? Okay, well, if P equals NP, it would mean
link |
01:21:05.600
that there is such an algorithm. That it exists, yeah.
link |
01:21:09.360
But, you know, it would mean that it exists. Now, you know, in practice, normally the way that we
link |
01:21:17.200
would prove anything like that would be by finding the algorithm. But there is such a thing as a
link |
01:21:23.040
nonconstructive proof that an algorithm exists. You know, this has really only reared its head,
link |
01:21:28.480
I think, a few times in the history of our field, right? But, you know, it is theoretically possible
link |
01:21:35.200
that such a thing could happen. But, you know, there are, even here, there are some amusing
link |
01:21:40.960
observations that one could make. So there is this famous observation of Leonid Levin, who was,
link |
01:21:47.280
you know, one of the original discoverers of NP completeness, right? And he said,
link |
01:21:51.680
well, consider the following algorithm that I guarantee will solve the NP problems efficiently,
link |
01:21:58.960
provided that P equals NP, okay? Here is what it does. It just runs, you know,
link |
01:22:05.200
it enumerates every possible algorithm in a gigantic infinite list, right? In, like,
link |
01:22:11.360
alphabetical order, right? You know, and many of them maybe won't even compile,
link |
01:22:15.840
so we just ignore those, okay? But now, we just, you know, run the first algorithm,
link |
01:22:20.720
then we run the second algorithm, we run the first one a little bit more,
link |
01:22:24.560
then we run the first three algorithms for a while, we run the first four for a while.
link |
01:22:28.720
This is called dovetailing, by the way. This is a known trick in theoretical computer science,
link |
01:22:35.360
okay? But we do it in such a way that, you know, whatever algorithm out there in our list
link |
01:22:42.640
solves, you know, the NP problems efficiently, we will eventually hit that one,
link |
01:22:48.560
right? And now, the key is that whenever we hit that one, you know, by assumption,
link |
01:22:54.160
it has to solve the problem, it has to find the solution, and once it claims to find a solution,
link |
01:22:59.360
then we can check that ourselves, right? Because these are NP problems, then we can check it.
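Here is a schematic of Levin's dovetailing idea in Python. The stream of "programs" and the checker below are toy stand-ins (real universal search enumerates actual machine code), but the interleave-and-verify structure is the point.

```python
from itertools import count

def dovetail(programs, check, max_rounds=50):
    """Interleave ever more programs, one step at a time (Levin-style
    dovetailing). `programs` is an infinite stream of generator factories;
    `check` verifies a claimed solution, which we can afford to do
    ourselves precisely because the problem is in NP."""
    running = []
    stream = iter(programs)
    for _ in range(max_rounds):
        running.append(next(stream)())        # admit the next program
        for i, prog in enumerate(running):
            try:
                candidate = next(prog)        # one more step of program i
            except StopIteration:
                continue
            if candidate is not None and check(candidate):
                return i, candidate           # verified, so we are done
    return None

# Toy stand-ins: "program" i just proposes the number i over and over.
def make_program(i):
    def gen():
        while True:
            yield i
    return gen

programs = (make_program(i) for i in count())
print(dovetail(programs, check=lambda x: x * x == 49))  # -> (7, 7)
```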
link |
01:23:04.720
Now, this is utterly impractical, right? You know, you'd have to do this enormous exhaustive search
link |
01:23:11.200
among all the algorithms, but from a certain theoretical standpoint, that is merely a constant
link |
01:23:16.880
prefactor, right? That's merely a multiplier of your running time. So, there are tricks like that
link |
01:23:22.640
one can do to say that, in some sense, the algorithm would have to be constructive. But,
link |
01:23:27.920
you know, in the human sense, you know, it's conceivable
link |
01:23:33.840
that one could prove such a thing via a nonconstructive method. Is that likely? I don't
link |
01:23:38.960
think so. Not personally. So, that's P and NP, but the complexity zoo is full of wonderful
link |
01:23:46.320
creatures. Well, it's got about 500 of them. 500. So, how do you get, yeah, how do you get more?
link |
01:23:56.160
I mean, just for starters, there is everything that we could do with a conventional computer
link |
01:24:02.560
with a polynomial amount of memory, okay, but possibly an exponential amount of time,
link |
01:24:08.080
because we get to reuse the same memory over and over again. Okay, that is called PSPACE,
link |
01:24:13.360
okay? And that's actually, we think, an even larger class than NP. Okay, well, P is contained
link |
01:24:21.200
in NP, which is contained in PSPACE. And we think that those containments are strict.
link |
01:24:26.640
And the constraint there is on the memory. The memory has to grow
link |
01:24:31.120
polynomially with the size of the problem. That's right. That's right. But in PSPACE,
link |
01:24:35.280
we now have interesting things that were not in NP, like as a famous example, you know,
link |
01:24:41.600
from a given position in chess, you know, does white or black have the win? Let's say,
link |
01:24:46.720
provided that the game lasts only for a reasonable number of moves, okay? Or likewise,
link |
01:24:53.520
for go, okay? And, you know, even for the generalizations of these games to arbitrary
link |
01:24:57.920
size boards, because with an eight by eight board, you could say that's just a constant
link |
01:25:01.760
size problem. You just, you know, in principle, you just solve it in O of one time, right?
link |
01:25:06.480
But so we really mean the generalizations of, you know, games to arbitrary size boards here.
link |
01:25:14.080
Or another thing in PSPACE would be, like, I give you some really hard constraint satisfaction
link |
01:25:21.920
problem, like, you know, a traveling salesperson or, you know, packing boxes into the trunk of
link |
01:25:28.880
your car or something like that. And I ask, not just is there a solution, which would be an NP
link |
01:25:33.920
problem, but I ask how many solutions are there, okay? That, you know, count the number of valid
link |
01:25:41.200
solutions. Those problems lie in a complexity class called sharp P, which is written
link |
01:25:49.120
like a hashtag, #P, okay, and which sits between NP and PSPACE.
link |
01:25:55.760
Then there's all the problems that you can do in exponential time, okay? That's called EXP. So,
link |
01:26:01.760
and by the way, it was proven in the 60s that EXP is larger than P, okay? So we know that much.
link |
01:26:09.840
We know that there are problems that are solvable in exponential time that are not solvable in
link |
01:26:14.960
polynomial time, okay? In fact, we even know, we know that there are problems that are solvable in
link |
01:26:20.880
n cubed time that are not solvable in n squared time. And those don't help us with the
link |
01:26:26.400
question of P versus NP at all. Unfortunately, it seems not, or certainly not yet, right?
link |
01:26:31.920
The techniques that we use to establish those things, they're very, very related to how Turing
link |
01:26:37.680
proved the unsolvability of the halting problem, but they seem to break down when we're comparing
link |
01:26:42.560
two different resources, like time versus space, or like, you know, P versus NP, okay? But, you know,
link |
01:26:50.240
I mean, there's what you can do with a randomized algorithm, right? An algorithm that
link |
01:26:55.840
sometimes, you know, has some probability of making a mistake.
link |
01:27:01.520
That's called BPP, bounded error probabilistic polynomial time. And then, of course, there's
link |
01:27:07.680
one that's very close to my own heart, what you can efficiently do in polynomial time using a
link |
01:27:13.680
quantum computer, okay? And that's called BQP, right? And so, you know, what's understood about
link |
01:27:20.240
it? Okay, so P is contained in BPP, which is contained in BQP, which is contained in PSPACE,
link |
01:27:27.520
okay? And in fact, BQP is contained in something very similar to sharp P. BQP is basically,
link |
01:27:35.120
you know, well, it's contained in like P with the magic power to solve sharp P problems, okay?
link |
01:27:41.680
Why is BQP contained in PSPACE?
link |
01:27:44.960
Oh, that's an excellent question. So there is, well, I mean, one has to prove that, okay? But
link |
01:27:53.040
the proof, you could think of it as using Richard Feynman's picture of quantum mechanics,
link |
01:28:00.960
which is that you can always, you know, we haven't really talked about quantum mechanics in this
link |
01:28:06.640
conversation. We did in our previous one.
link |
01:28:08.480
Yeah, we did last time.
link |
01:28:09.600
But yeah, we did last time, okay? But basically, you could always think of a quantum computation
link |
01:28:16.160
as like a branching tree of possibilities where each possible path that you could take
link |
01:28:24.000
through, you know, the space has a complex number attached to it called an amplitude, okay? And now
link |
01:28:30.960
the rule is, you know, when you make a measurement at the end, well, you see a random answer,
link |
01:28:36.080
okay? But quantum mechanics is all about calculating the probability that you're
link |
01:28:40.720
going to see one potential answer versus another one, right? And the rule for calculating the
link |
01:28:47.120
probability that you'll see some answer is that you have to add up the amplitudes for all of the
link |
01:28:53.120
paths that could have led to that answer. And then, you know, that's a complex number, so that,
link |
01:28:58.560
you know, how could that be a probability? Then you take the squared absolute value of the result.
link |
01:29:04.400
That gives you a number between zero and one, okay? So yeah, I just summarized quantum mechanics
link |
01:29:10.800
in like 30 seconds, okay? But now, you know, what this already tells us is that anything I can do
link |
01:29:17.920
with a quantum computer, I could simulate with a classical computer if I only have exponentially
link |
01:29:23.840
more time, okay? And why is that? Because if I have exponential time, I could just write down this
link |
01:29:30.480
entire branching tree and just explicitly calculate each of these amplitudes, right? You know, that
link |
01:29:36.960
will be very inefficient, but it will work, right? It's enough to show that quantum computers could
link |
01:29:42.560
not solve the halting problem or, you know, they could never do anything that is literally
link |
01:29:47.600
uncomputable in Turing's sense, okay? But now, as I said, there's even a stronger result which says
link |
01:29:54.400
that BQP is contained in PSPACE. The way that we prove that is that we say, if all I want is to
link |
01:30:02.400
calculate the probability of some particular output happening, you know, which is all I need to
link |
01:30:08.240
simulate a quantum computer, really, then I don't need to write down the entire quantum state,
link |
01:30:13.520
which is an exponentially large object. All I need to do is just calculate what is the amplitude for
link |
01:30:20.400
that final state. And to do that, I just have to sum up all the amplitudes that lead to that state.
link |
01:30:27.840
Okay, so that's an exponentially large sum, but I can calculate it just reusing the same memory over
link |
01:30:34.240
and over for each term in the sum. And hence the P in PSPACE? Hence the PSPACE. Yeah.
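A sketch of that Feynman-style path sum: to get the probability of one outcome, sum a complex amplitude over every path leading to it, then take the squared absolute value, reusing only a small amount of memory per term. The two-branch amplitudes below are a toy interferometer of my own choosing, not any particular quantum algorithm.

```python
from itertools import product

# A toy two-step "computation": at each step the state (0 or 1) branches
# in two, and each branch multiplies the running amplitude by a fixed
# factor. These particular amplitudes are my own toy example
# (a balanced interferometer with one sign flip).
STEP_AMPLITUDE = {
    (0, 0): 1 / 2**0.5, (0, 1): 1 / 2**0.5,
    (1, 0): 1 / 2**0.5, (1, 1): -1 / 2**0.5,
}

def outcome_probability(start, outcome, steps=2):
    # Sum the amplitude over every path from `start` to `outcome`, then
    # square the absolute value. Each term needs only O(steps) memory:
    # we never store the exponentially large state vector, which is the
    # idea behind BQP being contained in PSPACE.
    total = 0.0
    for middle in product([0, 1], repeat=steps - 1):
        states = (start,) + middle + (outcome,)
        amp = 1.0
        for a, b in zip(states, states[1:]):
            amp *= STEP_AMPLITUDE[(a, b)]
        total += amp
    return abs(total) ** 2

print(round(outcome_probability(0, 0), 10))  # -> 1.0 (constructive interference)
print(round(outcome_probability(0, 1), 10))  # -> 0.0 (the two paths cancel)
```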
link |
01:30:39.600
So out of that whole complexity zoo, and it could be BQP, what do you find is
link |
01:30:46.000
the class that captured your heart the most, the most beautiful class?
link |
01:30:53.680
I used, as my email address, bqpqpoly at gmail.com. Yes, because BQP slash Qpoly,
link |
01:31:03.680
well, you know, amazingly no one had taken it.
link |
01:31:06.240
Amazing, amazing.
link |
01:31:07.760
But, you know, this is a class that I was involved in sort of defining,
link |
01:31:12.400
proving the first theorems about in 2003 or so. So it was kind of close to my heart.
link |
01:31:18.240
But this is like, if we extended BQP, which is the class of everything we can do efficiently
link |
01:31:24.480
with a quantum computer, to allow quantum advice, which means imagine that you had some
link |
01:31:31.280
special initial state, okay, that could somehow help you do computation. And maybe
link |
01:31:36.640
such a state would be exponentially hard to prepare, okay, but maybe somehow these states
link |
01:31:43.040
were formed in the Big Bang or something, and they've just been sitting around ever since,
link |
01:31:46.880
right? If you found one, then this state could be, like, ultra powerful. There are no limits on how
link |
01:31:53.040
powerful it could be, except that this state doesn't know in advance which input you've got,
link |
01:31:58.880
right? It only knows the size of your input. You know, and that's BQP slash Qpoly. So that's
link |
01:32:05.040
one that I just personally happen to love, okay? But, you know, if you're asking like what's the,
link |
01:32:11.200
you know, there's a class that I think is way more beautiful or fundamental than a lot of people
link |
01:32:18.960
even within this field realize that it is. That class is called SZK, or Statistical Zero Knowledge.
link |
01:32:28.080
And, you know, there's a very, very easy way to define this class, which is to say, suppose that
link |
01:32:32.880
I have two algorithms that each sample from probability distributions, right? So each one
link |
01:32:39.280
just outputs random samples according to, you know, possibly different distributions. And now
link |
01:32:45.680
the question I ask is, you know, let's say distributions over strings of n bits, you know,
link |
01:32:50.800
so over an exponentially large space. Now I ask, are these two distributions
link |
01:32:57.680
close or far as probability distributions? Okay. Any problem that can be reduced to that,
link |
01:33:04.000
you know, that can be put into that form is an SZK problem. And the way that this class was
link |
01:33:10.320
originally discovered was completely different from that and was kind of more complicated. It
link |
01:33:15.040
was discovered as the class of all of the problems that have a certain kind of what's called zero
link |
01:33:21.280
knowledge proof. Zero knowledge proofs are one of the central ideas in cryptography. You know,
link |
01:33:27.920
Shafi Goldwasser and Silvio Micali won the Turing Award for, you know, inventing them.
link |
01:33:33.200
And they're at the core of even some cryptocurrencies that, you know, people use
link |
01:33:38.960
nowadays. But zero knowledge proofs are ways of proving to someone that something is true,
link |
01:33:45.840
like, you know, that there is a solution to this, you know, optimization problem or that these two
link |
01:33:53.440
graphs are isomorphic to each other or something, but without revealing why it's true, without
link |
01:33:59.440
revealing anything about why it's true. Okay. SZK is all of the problems for which there is such a
link |
01:34:06.720
proof that doesn't rely on any cryptography. Okay. And if you wonder, like, how could such a thing
link |
01:34:13.680
possibly exist, right? Well, like, imagine that I had two graphs and I wanted to convince you
link |
01:34:20.560
that these two graphs are not isomorphic, meaning, you know, I cannot permute one of them so that
link |
01:34:26.080
it's the same as the other one, right? You know, that might be a very hard statement to prove,
link |
01:34:30.720
right? I might need, you know, you might have to do a very exhaustive enumeration of, you know,
link |
01:34:35.280
all the different permutations before you were convinced that it was true. But what if there were
link |
01:34:40.000
some all knowing wizard that said to you, look, I'll tell you what, just pick one of the graphs
link |
01:34:45.920
randomly, then randomly permute it, then send it to me and I will tell you which graph you started
link |
01:34:52.320
with. Okay. And I will do that every single time. Right. And let's say that that wizard did that a
link |
01:35:02.720
hundred times and it was right every time. Yeah. Right. Now, if the graphs were isomorphic, then,
link |
01:35:08.240
you know, it would have been flipping a coin each time, right? It would have had only a one in two
link |
01:35:13.120
to the 100 power chance of, you know, guessing right each time. But, you know, if it's
link |
01:35:18.800
right every time, then now you're statistically convinced that these graphs are not isomorphic,
link |
01:35:24.240
even though you've learned nothing new about why they aren't.
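Here is a sketch of that protocol for two tiny graphs. The prover below cheats by brute force over all relabelings, which only an all-knowing wizard (or a toy-sized instance) could afford; the verifier's side, a coin flip and a random permutation, is cheap.

```python
import random
from itertools import permutations

N = 4  # tiny on purpose: the prover below works by brute force

def relabel(p, graph):
    # Apply the vertex permutation p to every edge {u, v}.
    return frozenset(frozenset({p[u], p[v]}) for u, v in map(tuple, graph))

def prover_identifies(scrambled, g0):
    """The 'all-knowing wizard': decide which graph the scramble came
    from by brute force over all N! relabelings of g0 (exponential in
    general, affordable here only because N = 4)."""
    for p in permutations(range(N)):
        if relabel(p, g0) == scrambled:
            return 0
    return 1

# Two non-isomorphic 4-vertex graphs, each with three edges:
path = frozenset(frozenset(e) for e in [(0, 1), (1, 2), (2, 3)])
star = frozenset(frozenset(e) for e in [(0, 1), (0, 2), (0, 3)])

trials, correct = 100, 0
for _ in range(trials):
    secret = random.randrange(2)                  # the verifier's coin flip
    p = list(range(N)); random.shuffle(p)
    scrambled = relabel(p, [path, star][secret])  # random relabeling
    correct += prover_identifies(scrambled, path) == secret
print(f"{correct}/{trials} correct")
# 100/100, possible only because the graphs are not isomorphic; for
# isomorphic graphs the prover would be reduced to a coin flip per round.
```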
link |
01:35:28.720
So fascinating. So yeah, SZK is all of the problems that have protocols like that one, but it has this beautiful other
link |
01:35:35.040
characterization. It's shown up again and again in my, in my own work and, you know, a lot of
link |
01:35:40.160
people's work. And I think that it really is one of the most fundamental classes. It's just that
link |
01:35:45.200
people didn't realize that when it was first discovered. So we're living in the middle of
link |
01:35:49.920
a pandemic currently. Yeah. How has your life been changed, or, better to ask, like, how has your
link |
01:35:56.720
perspective of the world changed with this world changing event of a pandemic overtaking the entire
link |
01:36:03.360
world? Yeah. Well, I mean, all of our lives have changed, you know, like, I guess,
link |
01:36:08.800
as with no other event since I was born, you know, you would have to go back to world war II
link |
01:36:13.760
for something, I think of this magnitude, you know, on, you know, the way that we live our lives
link |
01:36:19.280
as for how it has changed my worldview, I think that the failure of institutions,
link |
01:36:26.240
you know, like the CDC, like, you know, other institutions that we sort of thought
link |
01:36:32.720
were trustworthy, like a lot of the media, was staggering, was absolutely breathtaking.
link |
01:36:40.720
It is something that I would not have predicted. Right. I think I wrote on my blog that, you
link |
01:36:46.960
know, it's fascinating to, like, rewatch the movie Contagion from a decade
link |
01:36:53.680
ago, right. That correctly foresaw so many aspects of, you know, what was going on, you know, an
link |
01:37:00.880
airborne, you know, virus originates in China, spreads to, you know, much of the world, you know,
link |
01:37:06.800
shuts everything down until a vaccine can be developed. You know, everyone has to stay at home,
link |
01:37:12.800
you know, it gets, you know, an enormous number of things right. Okay. But the one thing
link |
01:37:18.480
that they could not imagine, you know, is that like in this movie, everyone from the government
link |
01:37:23.600
is like hyper competent, hyper, you know, dedicated to the public good, right. And you
link |
01:37:30.320
know, yeah, they're the best of the best, you know. And
link |
01:37:33.680
there are these conspiracy theorists, right, who think, you know, this is all fake news.
link |
01:37:39.760
There's not really a pandemic. And those are some random people on the internet who
link |
01:37:44.400
the hyper competent government people have to, you know, oppose, right. You know, in trying
link |
01:37:49.680
to envision the worst thing that could happen, you know, there was a failure of
link |
01:37:55.040
imagination. The movie makers did not imagine that the conspiracy theorists and the, you know,
link |
01:38:01.440
and the incompetents and the nutcases would have captured our institutions and be the ones actually
link |
01:38:07.200
running things. So you had a certain... I love competence in all walks of life. I get
link |
01:38:13.520
so much energy. I'm so excited by people who do an amazing job. And like you, or maybe you can
link |
01:38:19.280
clarify, but I had maybe not intuition, but I hope that government at its best could be ultra
link |
01:38:24.240
competent. What, first of all, two questions, like how do you explain the lack of competence,
link |
01:38:31.200
and the other, maybe on the positive side, how can we build a more competent government?
link |
01:38:36.960
Well, there's an election in two months. I mean, you have a faith that the election,
link |
01:38:41.680
I, you know, it's not going to fix everything, but you know, it's like,
link |
01:38:45.920
I feel like there is a ship that is sinking and you could at least stop the sinking.
link |
01:38:49.760
But, you know, I think that there are much, much deeper problems. I mean, I think that,
link |
01:38:56.960
you know, it is plausible to me that, you know, a lot of the failures, you know, with the CDC,
link |
01:39:03.520
with some of the other health agencies, even, you know, predate Trump, you know, predate the,
link |
01:39:09.840
you know, right wing populism that has sort of taken over much of the world now. And, you know,
link |
01:39:16.400
I think that, you know... I've actually been
link |
01:39:23.760
strongly in favor of, you know, rushing vaccines. You know, I thought that we could have done,
link |
01:39:31.360
you know, human challenge trials, you know, which were not done, right? We could have, you know,
link |
01:39:36.880
like had, you know, volunteers, you know, to actually, you know, get vaccines,
link |
01:39:44.480
get, you know, exposed to COVID. So innovative ways of accelerating what we've done previously
link |
01:39:49.680
over a long time. I thought that, you know, each month that a vaccine is closer is like trillions
link |
01:39:56.000
of dollars. And of course, lives, you know, at least, you know, hundreds
link |
01:40:01.120
of thousands of lives. Are you surprised that it's taking this long? We still don't have a plan.
link |
01:40:05.680
There's still not a feeling like anyone is actually doing anything in terms of alleviating,
link |
01:40:11.840
like any kind of plan. So there's a bunch of stuff, there's vaccine, but you could also do
link |
01:40:16.080
a testing infrastructure where everybody's tested nonstop, with contact tracing, all that kind of thing.
link |
01:40:21.200
Well, I mean, I'm as surprised as almost everyone else. I mean, this is a historic failure. It is
link |
01:40:27.520
one of the biggest failures in the 240 year history of the United States, right? And we should
link |
01:40:33.360
be crystal clear about that. And one thing that I think has been missing,
link |
01:40:38.960
even from the more competent side, is sort of the World War II
link |
01:40:45.840
mentality, right? The mentality that says,
link |
01:40:52.960
if we can, by breaking a whole bunch of rules, get a vaccine in even
link |
01:40:59.920
half the amount of time we thought, then let's just do that, because we have to
link |
01:41:07.200
weigh all of the moral qualms that we have about doing that against the moral qualms of not doing it.
link |
01:41:13.520
And one key little aspect of that that's deeply important to me, and we'll go into that topic
link |
01:41:18.880
next, is that the World War II mentality wasn't just about breaking all the rules to get
link |
01:41:24.320
the job done. There was a togetherness to it. If I were president right now, it seems
link |
01:41:31.600
quite elementary to unite the country, because we're facing a crisis. It's easy to make the
link |
01:41:39.440
virus the enemy. And it's very surprising to me that the division has increased as opposed to
link |
01:41:46.240
decreased. That's heartbreaking. Yeah. Well, look, I mean, it's been said by others that this is the
link |
01:41:51.360
first time in the country's history that we have a president who does not even pretend to, you know,
link |
01:41:57.200
want to unite the country. I mean, Lincoln, who fought a civil war, said he wanted to unite the
link |
01:42:06.080
country. And I do worry enormously about what happens if the results of this election are
link |
01:42:15.120
contested. And will there be violence as a result of that? And will we have a clear path of succession?
link |
01:42:22.320
And, you know, look, we're going to find out the answers to all of this in
link |
01:42:27.120
two months. And if none of that happens, maybe I'll look foolish. But I am willing to go on the
link |
01:42:31.840
record and say, I am terrified about that. Yeah, I've been reading The Rise and Fall of the Third
link |
01:42:37.040
Reich. So if I can, this is like one little voice just to put out there that I think November will
link |
01:42:46.160
be a really critical month for people to breathe and put love out there. Anger in
link |
01:42:55.680
that context, no matter who wins, no matter what is said,
link |
01:43:01.360
may destroy our country, may destroy the world, because of the power of the country. So it's
link |
01:43:05.760
really important to be patient, loving, empathetic. One of the things that troubles me is that
link |
01:43:11.600
even people on the left are unable to have love and respect for people who voted for Trump. They
link |
01:43:17.920
can't imagine that there are good people who could vote for the opposite side. Oh, I know there are
link |
01:43:23.600
because I know some of them, right? I mean, maybe it still baffles me,
link |
01:43:29.040
but I know such people. Let me ask you this. It's also heartbreaking to me,
link |
01:43:34.800
on the topic of cancel culture. In the machine learning community, I've seen it a little bit:
link |
01:43:39.120
there's aggressive attacking of people who are trying to have a nuanced conversation about
link |
01:43:46.800
things. And it's troubling, because it feels like nuanced conversation is the only way to talk about
link |
01:43:55.360
difficult topics. When there's a thought police and a speech police on any nuanced conversation,
link |
01:44:02.320
everybody has to chant, like in Animal Farm, that racism is bad and sexism is bad, which are
link |
01:44:09.280
things that everybody believes, and they can't possibly say anything nuanced. It feels like that
link |
01:44:15.440
goes against any kind of progress, from my kind of shallow perspective. But you've written a little
link |
01:44:20.560
bit about cancel culture. Do you have thoughts there? Well, I mean, to say that I am opposed to
link |
01:44:28.000
this trend of cancellations, or of shouting people down rather than engaging them,
link |
01:44:35.040
that would be a massive understatement, right? And I feel like I have put my money
link |
01:44:40.640
where my mouth is, not as much as some people have, but I've tried to do
link |
01:44:46.160
something. I mean, I have defended some unpopular people and unpopular ideas
link |
01:44:52.960
on my blog. I've tried to defend norms of open discourse, of
link |
01:45:02.160
reasoning with our opponents, even when I've been shouted down for that on social media,
link |
01:45:07.760
called a racist, called a sexist, all of those things. Which, by the way,
link |
01:45:11.840
I should say, I would be perfectly happy, if we had time, to state
link |
01:45:17.680
10,000 times my hatred of racism, of sexism, of homophobia, right?
link |
01:45:25.600
But what I don't want to do is cede to some particular political faction the right to define
link |
01:45:33.600
exactly what is meant by those terms, to say, well, then you have to agree with all of these other
link |
01:45:39.360
extremely contentious positions, or else you are a misogynist, or else you are a racist, right?
link |
01:45:46.000
I say, well, no, don't I, don't people like me, also get a say in the
link |
01:45:54.240
discussion about what racism is, about what is going to be most effective to combat
link |
01:46:00.080
racism, right? And this cancellation mentality, I think, is spectacularly ineffective
link |
01:46:08.480
at its own professed goal of combating racism and sexism.
link |
01:46:13.200
What's a positive way out? So, I don't know if you see what I do on Twitter,
link |
01:46:19.440
but on Twitter, and in my whole life really, it's who I am to the core:
link |
01:46:25.680
I really focus on the positive and I try to put love out there in the world. And still,
link |
01:46:32.720
I get attacked. And I look at that and I wonder,
link |
01:46:36.720
You too? I didn't know.
link |
01:46:38.240
Like, I haven't actually said anything difficult and nuanced. You talk about somebody like
link |
01:46:43.920
Steven Pinker. I actually don't know the full range of things that he's attacked for,
link |
01:46:50.960
but he tries to be thoughtful about difficult topics.
link |
01:46:55.440
He does.
link |
01:46:55.840
And obviously he just gets slaughtered.
link |
01:46:59.920
Well, I mean, yes, but it's also amazing how well Steve has withstood it. I mean,
link |
01:47:06.400
he just survived that attempt to cancel him just a couple of months ago, right?
link |
01:47:10.880
Psychologically, he survives it too, which worries me because I don't think I can.
link |
01:47:15.360
Yeah, I've gotten to know Steve a bit. He is incredibly unperturbed by this stuff.
link |
01:47:20.960
And I admire that and I envy it. I wish that I could be like that. I mean, my impulse when I'm
link |
01:47:26.320
getting attacked is that I just want to engage every single anonymous person on Twitter and Reddit
link |
01:47:32.960
who is saying mean stuff about me. And I want to just say, well, look, can we just talk this over
link |
01:47:37.760
for an hour? And then you'll see that I'm not that bad. And sometimes that even works. The
link |
01:47:43.280
problem is then there's the 20,000 other ones.
link |
01:47:48.080
But psychologically, does that wear on you?
link |
01:47:51.440
It does. It does. But yeah, in terms of what is the solution, I wish I knew,
link |
01:47:56.080
right? In a certain way, these problems are maybe harder than P versus NP, right?
link |
01:48:02.240
But I think that part of it has to be that there's a lot of sort of silent
link |
01:48:10.560
support for what I'll call the open discourse side, the reasonable enlightenment side.
link |
01:48:17.360
And I think that that support has to become less silent, right? I think that a lot of people
link |
01:48:23.120
just sort of agree that a lot of these cancellations and attacks are ridiculous,
link |
01:48:30.560
but are just afraid to say so, or else they'll get shouted down as well, right? That's
link |
01:48:36.000
just the standard witch hunt dynamic, which, of course, this faction understands and exploits to
link |
01:48:42.560
its great advantage. But it would help if more people just said, we're not going to stand for this. Guess
link |
01:48:52.880
what? We're against racism too. But what you're doing is ridiculous, right? And the
link |
01:49:01.440
hard part is that it takes a lot of mental energy. It takes a lot of time. Even if you feel like
link |
01:49:07.120
you're not going to be canceled, or you're staying on the safe side, it takes a lot of time to
link |
01:49:13.840
phrase things in exactly the right way and to respond to everything people say.
link |
01:49:19.200
But I think that the more people speak up, from all political persuasions, from all walks
link |
01:49:29.520
of life, the easier it is to move forward. Since we've been talking about love:
link |
01:49:37.520
last time I talked to you a little bit about the meaning of life, but here, it's a weird question
link |
01:49:43.520
to ask a computer scientist, has love for other human beings, for things, for the world
link |
01:49:50.720
around you, played an important role in your life? It's easy for a world class
link |
01:49:59.440
computer scientist, you could even call yourself a physicist, for everything to be lost in the
link |
01:50:06.160
books. Has the connection to other humans, love for other humans, played an important role?
link |
01:50:11.040
I love my kids. I love my wife. I love my parents. I'm probably not different from most people in
link |
01:50:24.880
loving their family and in that being very important in my life. Now, I should remind you
link |
01:50:32.880
that I am a theoretical computer scientist. If you're looking for deep insight about the nature
link |
01:50:38.880
of love, you're probably asking the wrong person, but sure, it's been important.
link |
01:50:45.920
But is there something from a computer science perspective to be said about love? Or is that even
link |
01:50:53.200
beyond, into the realm of consciousness? There was this great cartoon, I think it
link |
01:50:59.840
was one of the classic XKCDs, where it shows a heart, and it's squaring the heart, taking the
link |
01:51:07.520
Fourier transform of the heart, integrating the heart, all these things, and then it says, my normal
link |
01:51:15.680
approach is useless here. I'm so glad I asked this question. I think there's no better way to
link |
01:51:22.560
end this. I hope we get a chance to talk again. This has been an amazing, cool experiment to do
link |
01:51:26.960
it outside. I'm really glad you made it out. Yeah. Well, I appreciate it a lot. It's been a
link |
01:51:31.040
pleasure and I'm glad you were able to come out to Austin. Thanks. Thanks for listening to this
link |
01:51:36.640
conversation with Scott Aaronson. And thank you to our sponsors, 8sleep, SimpliSafe, ExpressVPN,
link |
01:51:44.480
and BetterHelp. Please check out these sponsors in the description to get a discount and to
link |
01:51:50.160
support this podcast. If you enjoy this thing, subscribe on YouTube, review it with five stars
link |
01:51:56.000
on Apple Podcast, follow on Spotify, support on Patreon, or connect with me on Twitter
link |
01:52:01.680
at Lex Fridman. And now let me leave you with some words from Scott Aaronson that I also gave
link |
01:52:07.840
to you in the introduction, which is, if you always win, then you're probably doing something
link |
01:52:14.240
wrong. Thank you for listening and for putting up with the intro and outro in this strange room in
link |
01:52:21.120
the middle of nowhere. And I very much hope to see you next time in many more ways than one.