
Stephen Wolfram: Cellular Automata, Computation, and Physics | Lex Fridman Podcast #89



link |
00:00:00.000
The following is a conversation with Stephen Wolfram, a computer scientist, mathematician,
link |
00:00:04.480
and theoretical physicist who is the founder and CEO of Wolfram Research, a company behind
link |
00:00:10.480
Mathematica, Wolfram Alpha, Wolfram Language, and the new Wolfram Physics Project.
link |
00:00:16.160
He's the author of several books, including A New Kind of Science, which, on a personal note,
link |
00:00:22.400
was one of the most influential books in my journey in computer science and artificial
link |
00:00:27.520
intelligence. It made me fall in love with the mathematical beauty and power of cellular
link |
00:00:32.640
automata. It is true that perhaps one of the criticisms of Stephen is on a human level,
link |
00:00:39.520
that he has a big ego, which prevents some researchers from fully enjoying the content
link |
00:00:44.720
of his ideas. We talk about this point in this conversation. To me, ego can lead you astray,
link |
00:00:51.280
but can also be a superpower, one that fuels bold, innovative thinking that refuses to surrender to
link |
00:00:58.400
the cautious ways of academic institutions. And here, especially, I ask you to join me in looking
link |
00:01:05.440
past the peculiarities of human nature and opening your mind to the beauty of ideas
link |
00:01:10.880
in Stephen's work and in this conversation. I believe Stephen Wolfram is one of the most
link |
00:01:16.160
original minds of our time, and, at the core, is a kind, curious, and brilliant human being.
link |
00:01:22.960
This conversation was recorded in November 2019, when the Wolfram Physics Project was underway,
link |
00:01:28.800
but not yet ready for public exploration as it is now. We've now agreed to talk again,
link |
00:01:34.320
probably multiple times, in the near future, so this is round one, and stay tuned for round two soon.
link |
00:01:40.240
This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,
link |
00:01:46.000
review it with 5 stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter
link |
00:01:51.120
at Lex Fridman, spelled F R I D M A N. As usual, I'll do a few minutes of ads now,
link |
00:01:57.200
and never any ads in the middle that can break the flow of the conversation.
link |
00:02:00.640
I hope that works for you. It doesn't hurt the listening experience.
link |
00:02:03.680
Quick summary of the ads. Two sponsors, ExpressVPN and Cash App. Please consider supporting the
link |
00:02:10.800
podcast by getting ExpressVPN at expressvpn.com slash lexpod and downloading Cash App and using
link |
00:02:18.160
code Lex Podcast. This show is presented by Cash App, the number one finance app in the App Store.
link |
00:02:25.360
When you get it, use code Lex Podcast. Cash App lets you send money to friends, buy Bitcoin,
link |
00:02:31.120
and invest in the stock market with as little as $1. Since Cash App does fractional share trading,
link |
00:02:36.800
let me mention that the order execution algorithm that works behind the scenes to create the
link |
00:02:41.200
abstraction of fractional orders is an algorithmic marvel. So big props to the Cash App engineers
link |
00:02:47.120
for solving a hard problem that in the end provides an easy interface that takes a step
link |
00:02:51.840
up to the next layer of abstraction over the stock market. This makes trading more accessible
link |
00:02:57.520
for new investors and diversification much easier. So again, if you get Cash App from the App Store,
link |
00:03:03.680
Google Play, and use the code Lex Podcast, you get $10 and Cash App will also donate $10 to FIRST,
link |
00:03:10.880
an organization that is helping to advance robotics and STEM education for young people around the
link |
00:03:15.680
world. This show is presented by ExpressVPN. Get it at expressvpn.com slash lexpod to get a
link |
00:03:25.200
discount and to support this podcast. I've been using ExpressVPN for many years. I love it.
link |
00:03:31.680
It's really easy to use. Press the big power on button and your privacy is protected. And if you
link |
00:03:37.200
like, you can make it look like your location is anywhere else in the world. This has a large
link |
00:03:42.560
number of obvious benefits. Certainly, it allows you to access international versions of streaming
link |
00:03:47.440
websites like the Japanese Netflix or the UK Hulu. ExpressVPN works on any device you can
link |
00:03:54.800
imagine. I use it on Linux. Shout out to Ubuntu. New version coming out soon actually. Windows,
link |
00:04:01.680
Android, but it's available anywhere else too. Once again, get it at expressvpn.com slash lexpod
link |
00:04:08.640
to get a discount and to support this podcast. And now here's my conversation with Stephen Wolfram.
link |
00:04:16.800
You and your son Christopher helped create the alien language in the movie Arrival.
link |
00:04:20.880
So let me ask maybe a bit of a crazy question. But if aliens were to visit us on Earth,
link |
00:04:27.840
do you think we would be able to find a common language?
link |
00:04:32.560
Well, by the time we're saying aliens are visiting us, we've already prejudiced the whole story.
link |
00:04:38.240
Because the concept of an alien actually visiting, so to speak, we already know
link |
00:04:43.760
they're kind of things that make sense to talk about visiting. So we already know they exist in
link |
00:04:49.520
the same kind of physical setup that we do. They're not, you know, it's not just radio
link |
00:04:56.800
signals. It's an actual thing that shows up and so on. So I think in terms of, you know,
link |
00:05:03.200
can one find ways to communicate? Well, the best example we have of this right now is AI.
link |
00:05:10.240
I mean, that's our first sort of example of alien intelligence. And the question is,
link |
00:05:14.720
how well do we communicate with AI? You know, if you were to say, if you were in the middle of a
link |
00:05:19.280
neural net, and you open it up, and it's like, what are you thinking? Can you discuss things
link |
00:05:25.120
with it? It's not easy, but it's not absolutely impossible. So I think, by the time,
link |
00:05:30.880
but given the setup of your question, aliens visiting, I think the answer is, yes, one will
link |
00:05:37.760
be able to find some form of communication, whatever communication means, communication
link |
00:05:41.440
requires notions of purpose and things like this. It's a kind of philosophical quagmire.
link |
00:05:46.880
So if AI is a kind of alien life form, what do you think visiting looks like? So if we look at
link |
00:05:55.200
aliens visiting, and we'll get to discuss computation and the world of computation,
link |
00:06:01.200
but if you were to imagine, you said you already prejudiced something by saying you visit,
link |
00:06:05.680
but how would aliens visit? By visit, there's kind of an implication, and here we're using the
link |
00:06:13.280
imprecision of human language, you know, in a world of the future, and if that's represented in
link |
00:06:18.400
computational language, we might be able to take the concept visit and go look in the documentation
link |
00:06:25.520
basically and find out exactly what does that mean, what properties does it have and so on.
link |
00:06:29.440
But by visit, in ordinary human language, I'm kind of taking it to be there's, you know,
link |
00:06:36.240
something, a physical embodiment that shows up in a spacecraft, since we kind of know that
link |
00:06:42.400
that's necessary. We're not imagining it's just, you know, photons showing up in a radio signal
link |
00:06:49.680
that, you know, photons in some very elaborate pattern. We're imagining it's physical things
link |
00:06:56.080
made of atoms and so on that show up. Can it be photons in a pattern?
link |
00:07:01.120
Well, that's a good question. I mean, whether there is the possibility, you know, what counts as
link |
00:07:06.400
intelligence? Good question. I mean, it's, you know, and I used to think there was sort of a,
link |
00:07:13.280
oh, there'll be, you know, it'll be clear what it means to find extraterrestrial intelligence,
link |
00:07:17.280
et cetera, et cetera, et cetera. I've increasingly realized as a result of science that I've done
link |
00:07:22.080
that there really isn't a bright line between the intelligent and the merely computational,
link |
00:07:27.680
so to speak. So, you know, in our kind of everyday sort of discussion, we'll say things like,
link |
00:07:33.040
you know, the weather has a mind of its own. Well, let's unpack that question. You know,
link |
00:07:37.920
we realize that there are computational processes that go on that determine the fluid dynamics of
link |
00:07:43.680
this and that and the atmosphere, et cetera, et cetera, et cetera. How do we distinguish that
link |
00:07:48.720
from the processes that go on in our brains of, you know, the physical processes that go on in
link |
00:07:53.440
our brains? How do we separate those? How do we say the physical processes going on that
link |
00:08:00.160
represent sophisticated computations in the weather? Oh, that's not the same as the physical
link |
00:08:04.720
processes that go on that represent sophisticated computations in our brains. The answer is,
link |
00:08:08.960
I don't think there is a fundamental distinction. I think the distinction for us is that there's
link |
00:08:14.320
kind of a thread of history and so on that connects kind of what happens in different brains
link |
00:08:21.280
to each other, so to speak. And it's a, you know, what happens in the weather is something which
link |
00:08:25.840
is not connected by sort of a thread of civilizational history, so to speak, to what we're used to.
link |
00:08:33.120
In our story, in the stories that the human brain tells us. But maybe the weather has its own
link |
00:08:37.520
stories that it tells itself. Absolutely. Absolutely. And that's where we run into trouble
link |
00:08:42.160
thinking about extraterrestrial intelligence because, you know, it's like that pulsar magnetosphere
link |
00:08:48.080
that's generating these very elaborate radio signals. You know, is that something that we
link |
00:08:52.560
should think of as being this whole civilization that's developed over the last however long,
link |
00:08:57.600
you know, millions of years of processes going on in the neutron star or whatever versus what,
link |
00:09:05.120
you know, what we're used to in human intelligence. And I think it's a, I think in the end,
link |
00:09:09.680
you know, when people talk about extraterrestrial intelligence and where is it and the whole,
link |
00:09:13.440
you know, the Fermi paradox of how come there are no other signs of intelligence in the universe.
link |
00:09:19.360
My guess is that we've got sort of two alien forms of intelligence that we're dealing with,
link |
00:09:26.160
artificial intelligence and sort of physical or extraterrestrial intelligence. And my guess
link |
00:09:32.400
is people will sort of get comfortable with the fact that both of these have been achieved
link |
00:09:36.560
around the same time. And in other words, people will say, well, yes, we're used to computers,
link |
00:09:44.080
things we've created, digital things we've created being sort of intelligent like we are.
link |
00:09:48.160
And they'll say, oh, we're kind of also used to the idea that there are things around the universe
link |
00:09:52.400
that are kind of intelligent like we are, except they don't share the sort of
link |
00:09:57.760
civilizational history that we have. And so we don't, you know, they're a different branch.
link |
00:10:03.600
I mean, it's similar to when you talk about life, for instance, I mean, you kind of said life form,
link |
00:10:08.800
I think, almost synonymously with intelligence, which I don't think is, you know, the AIs will
link |
00:10:15.920
be upset to hear you equate those two things. Because I have really probably implied biological
link |
00:10:22.240
life. Right. Right. Right. But you're saying, I mean, we'll explore this more, but you're saying
link |
00:10:27.440
it's really a spectrum and it's all just a kind of computation. And so it's a full spectrum. And we
link |
00:10:34.480
just make ourselves special by weaving a narrative around our particular kinds of computation.
link |
00:10:40.480
Yes. I mean, the thing that I think I've kind of come to realize is, you know, at some level,
link |
00:10:45.200
it's a little depressing to realize that there's so little... Or liberating? Well, yeah, but I mean,
link |
00:10:50.880
it's, you know, it's the story of science, right? And, you know, from Copernicus on,
link |
00:10:55.280
it's like, you know, first we were convinced our planet is at the center of the universe.
link |
00:11:00.160
No, that's not true. Well, then we were convinced there's something very special about the chemistry
link |
00:11:04.160
that we have as biological organisms. That's not really true. And then we're still holding out that
link |
00:11:10.240
hope over this intelligence thing we have. That's really special. I don't think it is. However,
link |
00:11:16.160
in a sense, as you say, it's kind of liberating for the following reason that you realize that
link |
00:11:21.120
what's special is the details of us, not some abstract attribute that, you know, we could wonder,
link |
00:11:30.480
oh, is something else going to come along and, you know, also have that abstract attribute?
link |
00:11:35.200
Well, yes, every abstract attribute we have, something else has it. But the full details of
link |
00:11:40.960
our kind of history of our civilization and so on, nothing else has that. That's what,
link |
00:11:46.720
you know, that's our story, so to speak. And that's sort of almost by definition special.
link |
00:11:52.560
So I view it as not being such a, I mean, initially I was like, this is bad. This is kind
link |
00:11:59.280
of, you know, how can we have self respect about the things that we do? Then I realized the details
link |
00:12:05.520
of the things we do, they are the story. Everything else is kind of a blank canvas.
link |
00:12:09.840
So maybe on a small tangent, you just made me think of it, but what do you make of the monoliths
link |
00:12:18.080
in 2001: A Space Odyssey in terms of aliens communicating with us and sparking the kind of particular
link |
00:12:26.800
intelligent computation that we humans have? Is there anything interesting to get from that
link |
00:12:33.040
sci fi? Yeah, I mean, I think what's fun about that is, you know, the monoliths are these,
link |
00:12:40.400
you know, one to four to nine perfect cuboid things. And in the, you know, Earth a million
link |
00:12:46.560
years ago, whatever they were portraying with a bunch of apes and so on, a thing that has that
link |
00:12:51.440
level of perfection seems out of place. It seems very kind of constructed, very engineered.
link |
00:12:58.960
So that's an interesting question. What is the, you know, what's the techno signature, so to
link |
00:13:04.080
speak? What is it that you see it somewhere and you say, my gosh, that had to be engineered?
link |
00:13:10.800
Now, the fact is, we see crystals, which are also very perfect. And, you know, the perfect ones
link |
00:13:16.960
are very perfect. They're nice polyhedra or whatever. And so in that sense, if you say, well,
link |
00:13:22.480
it's a sign of sort of, it's a techno signature that it's a perfect, you know, a perfect polygonal
link |
00:13:29.600
shape, polyhedral shape. That's not true. And so then it's an interesting question. What is the,
link |
00:13:37.280
what is the right signature? I mean, like, you know, Gauss, the famous mathematician, you know,
link |
00:13:41.840
he had this idea, you should cut down the Siberian forest in the shape of sort of a typical image
link |
00:13:47.680
of the proof of the Pythagorean theorem. It's a kind of cool idea; it didn't get
link |
00:13:52.640
done. But, you know, it was on the grounds that the Martians would see that and realize, gosh,
link |
00:13:57.760
there are mathematicians out there. It's kind of, you know, it's the, in his theory of the world,
link |
00:14:02.960
that was probably the best advertisement for the cultural achievements of our species. But, you
link |
00:14:08.960
know, it's a reasonable question. What do you, what can you send or create that is a sign of
link |
00:14:16.480
intelligence in its creation, or even intention in its creation?
link |
00:14:20.800
Yeah, you talk about, if we were to send a beacon, can you, what should we send? Is math
link |
00:14:27.280
our greatest creation? Is, what is our greatest creation?
link |
00:14:31.040
I think, I think, and it's a, it's a philosophically doomed issue. I mean, in other words,
link |
00:14:36.720
you send something, you think it's fantastic. But it's kind of like, we are part of the universe,
link |
00:14:42.720
we make things that are, you know, things that happen in the universe. Computation,
link |
00:14:48.080
which is sort of the thing that we are, in some abstract sense, using to create all
link |
00:14:53.680
these elaborate things we create, is surprisingly ubiquitous. In other words, we might have thought
link |
00:15:01.120
that, you know, we've built this whole giant engineering stack that's led us to microprocessors,
link |
00:15:06.880
that's led us to be able to do elaborate computations. But this idea that computations
link |
00:15:13.200
are happening all over the place. The only question is whether, whether there's a thread
link |
00:15:17.600
that connects our human intentions to what those computations are. And so I think, I
link |
00:15:23.840
think this question of what do you send to kind of show off our civilization in the best possible
link |
00:15:29.600
way. I think any kind of almost random slab of stuff we've produced is about equivalent
link |
00:15:37.520
to everything else. I think it's one of these things where
link |
00:15:39.680
It's a non-romantic way of phrasing it. I just, sorry to interrupt, but I just talked to
link |
00:15:46.000
Ann Druyan, who was the wife of Carl Sagan. And so I don't know if you're familiar with the
link |
00:15:52.000
Voyager. I mean, she was part of sending, I think, brainwaves of, you know... Wasn't it hers?
link |
00:15:59.280
It was hers. Her brainwaves when she was first falling in love with Carl Sagan, right? So this
link |
00:16:04.000
beautiful story that perhaps you would shut down the power of that by saying we might as well
link |
00:16:11.600
send anything else. And that's interesting. All of it is kind of an interesting, peculiar thing.
link |
00:16:18.160
Yeah, yeah, right. Well, I mean, I think it's kind of interesting to see on the Voyager,
link |
00:16:21.280
you know, golden record thing. One of the things that's kind of cute about that is, you know,
link |
00:16:25.680
it was made, when was it, in the late 70s, early 80s. And, you know, one of the things is it's a
link |
00:16:31.760
phonograph record. And it has a diagram of how to play a phonograph record. And, you know,
link |
00:16:37.760
it's kind of like it's shocking that in just 30 years, if you show that to a random kid of today,
link |
00:16:43.680
and you show them that diagram, I've tried this experiment, they're like, I don't know what the
link |
00:16:47.520
heck this is. And the best anybody can think of is, you know, take the whole record, forget the
link |
00:16:52.800
fact that it has some kind of helical track in it, just image the whole thing, and see what's
link |
00:16:57.520
there. That's what we would do today. In only 30 years, our technology has kind of advanced to the
link |
00:17:03.520
point where the playing of a helical, you know, mechanical track on a phonograph record is now
link |
00:17:09.440
something bizarre. So, you know, that's a cautionary tale, I would say, in terms of
link |
00:17:15.120
the ability to make something that in detail, sort of leads by the nose, some, you know,
link |
00:17:22.320
the aliens or whatever, to do something, it's like, no, you know, best you're going to do,
link |
00:17:27.600
as I say, if we were doing this today, we would not build a helical scan thing with a needle,
link |
00:17:33.920
we would just take some high resolution imaging system, and get all the bits off it,
link |
00:17:38.480
and say, oh, it's a big nuisance that they put in a helix, you know, in a spiral, let's
link |
00:17:43.120
just, you know, unravel the spiral, and start from there.
link |
00:17:50.400
Do you think, and this will get into trying to figure out
link |
00:17:55.280
interpretability of AI, interpretability of computation, being able to communicate
link |
00:18:00.480
with various kinds of computations, do you think you would be able to, if you
link |
00:18:04.720
put your alien hat on, figure out this record, how to play this record?
link |
00:18:10.240
Well, it's a question of what one wants to do. I mean,
link |
00:18:13.920
Understand what the other party was trying to communicate or understand anything about the
link |
00:18:19.760
other party. What does understanding mean? I mean, that's the issue. The issue is,
link |
00:18:23.600
it's like when people were trying to do natural language understanding for computers, right? So,
link |
00:18:28.880
people tried to do that for years. It wasn't clear what it meant. In other words, you take
link |
00:18:34.880
your piece of English or whatever, and you say, gosh, my computer has understood this. Okay,
link |
00:18:40.480
that's nice. What can you do with that? Well, so for example, when we did, you know, built
link |
00:18:46.080
Wolfram Alpha, you know, one of the things was, it's, you know, it's doing question answering and so on,
link |
00:18:52.160
it needs to do natural language understanding. The reason that I realized after the fact,
link |
00:18:57.520
the reason we were able to do natural language understanding quite well, and people hadn't
link |
00:19:02.320
before, the number one thing was, we had an actual objective for the natural language
link |
00:19:07.520
understanding, we were trying to turn the natural language into computation into this
link |
00:19:10.960
computational language that we could then do things with. Now, similarly, when you imagine
link |
00:19:15.520
your alien, you say, okay, we're playing them the record. Did they understand it? Well,
link |
00:19:21.040
depends what you mean. If they, you know, if we, if there's a representation that they have,
link |
00:19:25.520
if it converts to some representation, where we can say, Oh, yes, that is a, that's a
link |
00:19:30.560
representation that we can recognize as representing understanding, then all well and good. But
link |
00:19:36.800
actually the only ones that I think we can say would represent understanding are ones that will
link |
00:19:42.080
then do things that we humans kind of recognize as being useful to us. Maybe try and understand,
link |
00:19:50.320
quantify how technologically advanced this particular civilization is. So are they a threat
link |
00:19:55.840
to us from a military perspective? Yeah, that's probably the first kind of understanding
link |
00:20:01.760
that we'll be interested in. Gosh, that's so hard. I mean, that's like in the Arrival movie,
link |
00:20:06.000
that was sort of one of the key questions is, is, you know, why are you here, so to speak?
link |
00:20:10.720
And it's, are you going to hurt us? Right. But, but even that is, you know,
link |
00:20:14.640
it's a very unclear, you know, it's like, the, are you going to hurt us? That comes back to a
link |
00:20:18.480
lot of interesting AI ethics questions, because the, you know, we might make an AI that says,
link |
00:20:24.000
well, take autonomous cars, for instance, you know, are you going to hurt us? Well,
link |
00:20:28.480
let's make sure you only drive at precisely the speed limit, because we want to make sure
link |
00:20:32.080
we don't hurt you, so to speak, because that's some, and then well, something, you know, but
link |
00:20:36.560
you say, but actually that means I'm going to be really late for this thing. And, you know,
link |
00:20:40.000
that sort of hurts me in some way. So it's hard to know even, even the definition of what it means
link |
00:20:46.160
to hurt someone is unclear. And as we start thinking about things about AI ethics and so on,
link |
00:20:54.240
that's, you know, something one has to address. There's always tradeoffs. And that's the annoying
link |
00:20:58.800
thing about ethics. Yeah, well, right. And I mean, I think ethics, like these other things
link |
00:21:03.200
we're talking about, is a deeply human thing. There's no abstract, you know, let's write down
link |
00:21:09.040
the theorem that proves that this is ethically correct. That's a, that's a meaningless idea.
link |
00:21:15.280
You know, you have to have a ground truth, so to speak, that's ultimately sort of what
link |
00:21:20.480
humans want. And they don't all want the same thing. So that gives one all kinds of additional
link |
00:21:25.440
complexity in thinking about that. One convenient thing, in terms of turning ethics
link |
00:21:30.480
into computation, you can ask the question of what maximizes the likelihood of the survival of the
link |
00:21:36.640
species? Yeah, that's a good existential issue. But then when you say survival of the species,
link |
00:21:44.320
right, you might say, for example, let's say, forget about technology,
link |
00:21:51.280
just, you know, hang out and, you know, be happy, live our lives, go on to the next generation,
link |
00:21:56.800
you know, go through many, many generations, where in a sense, nothing is happening.
link |
00:22:01.680
Is that okay? Is that not okay? Hard to know. In terms of, you know, the attempt to do
link |
00:22:08.160
elaborate things, that attempt might be counterproductive for the survival of the species.
link |
00:22:15.600
Like for instance, I mean, in, you know, I think it's, it's also a little bit hard to know. So,
link |
00:22:20.560
okay, let's take that as a, as a sort of thought experiment. Okay. You know, you can say, well,
link |
00:22:26.880
what are the threats that we might have to survive? You know, the super volcano, the
link |
00:22:31.120
asteroid impact, the, you know, all these kinds of things. Okay, so now we inventory these possible
link |
00:22:37.440
threats and we say, let's make our species as robust as possible relative to all these threats.
link |
00:22:42.800
I think in the end, it's a, it's sort of an unknowable thing, what, what it takes to, you know,
link |
00:22:50.240
so, so given that you've got this AI, and you've told it, maximize the long term, what does long
link |
00:22:57.680
term mean? Does long term mean until the sun burns out? That's, that's not going to work.
link |
00:23:02.720
And, you know, does long term mean next 1000 years? Okay, there are probably optimizations
link |
00:23:08.640
for the next 1000 years that are, it's like, it's like, if you're running a company, you
link |
00:23:13.520
can make a company be very stable for a certain period of time. Like if, you know, if your company
link |
00:23:18.400
gets bought by some, you know, private investment group, then they'll, you know, you can, you can
link |
00:23:24.640
run a company just fine for five years by just taking what it does and, you know, removing all
link |
00:23:30.000
R&D and the company will burn out after a while, but it'll run just fine for a while. So if you
link |
00:23:36.560
tell the AI, keep the humans okay for 1000 years, there's probably a certain set of things that one
link |
00:23:41.520
would do to optimize that, many of which one might say, well, that would be a pretty big shame for
link |
00:23:46.240
the future of history, so to speak, for that to be what happens. But I think, I think in the end,
link |
00:23:50.560
you know, as you start thinking about that question, what you realize is there's a whole sort of
link |
00:23:57.680
raft of undecidability, computational irreducibility. In other words, it's, I mean, one of the good
link |
00:24:03.680
things about sort of what our civilization has gone through and what sort of we humans go through
link |
00:24:11.440
is that there's a certain computational irreducibility to it in the sense that
link |
00:24:15.520
it isn't the case that you can look from the outside and just say, the answer is going to be
link |
00:24:19.280
this. At the end of the day, this is what's going to happen. You actually have to go through the
link |
00:24:23.600
process to find out. And I think that's, that's both, that feels better in the sense it's not a,
link |
00:24:29.920
you know, something is achieved by going through all of this, all of this process. And it's,
link |
00:24:38.000
but it also means that telling the AI, go figure out, you know, what will be the best outcome.
link |
00:24:44.000
Well, unfortunately, it's going to come back and say it's kind of undecidable what to do.
link |
00:24:48.160
We'd have to run all of those scenarios to see what happens. And if we want it for the infinite
link |
00:24:54.160
future, we're thrown immediately into sort of standard issues of kind of infinite computation
link |
00:25:00.240
and so on. So yeah, even if you get that the answer to the universe and everything is 42,
link |
00:25:04.960
you still have to actually run the universe. Yes. To figure it out. Yes. Like the question,
link |
00:25:12.080
I guess, or the, you know, the journey is the point. Right. Well, I think it's saying, to
link |
00:25:18.800
summarize, this is the result of the universe. Yeah. That's, if that is possible, it tells us,
link |
00:25:25.360
I mean, the whole sort of structure of thinking about computation and so on, and thinking about
link |
00:25:30.240
how stuff works. If it's possible to say, and the answer is such and such, you're basically
link |
00:25:36.560
saying there's a way of going outside the universe. And you're kind of, you're getting yourself into
link |
00:25:41.040
something of a sort of paradox, because you're saying, if it's knowable, what the answer is,
link |
00:25:46.640
then there's a way to know it that is beyond what the universe provides. But if we can know it,
link |
00:25:52.560
then something that we're dealing with is beyond the universe. So then the universe isn't the
link |
00:25:58.160
universe, so to speak. So. And in general, as we'll talk about, at least for small human brains,
link |
00:26:06.640
it's hard to predict the result of a sufficiently complex computation. It's hard. I mean,
link |
00:26:14.320
it's probably impossible, right? Undecidability. So, and the universe appears, at least to the
link |
00:26:23.360
poets, to be sufficiently complex that we won't be able to predict what the heck it's all going to do.
link |
00:26:30.160
Well, we better not be able to, because if we can, it kind of denies, I mean, it's, you know,
link |
00:26:34.880
we're part of the universe. So what does it mean for us to predict? It means that we,
link |
00:26:40.560
that our little part of the universe is able to jump ahead of the whole universe. And this quickly
link |
00:26:47.120
winds up, I mean, that it is conceivable. The only way we'd be able to predict is if we are so
link |
00:26:53.280
special in the universe, we are the one place where there is computation more special, more
link |
00:26:58.800
sophisticated than anything else that exists in the universe. That's the only way we would
link |
00:27:02.480
have the sort of almost theological ability, so to speak, to predict
link |
00:27:09.280
what happens in the universe is to say somehow we're better than everything else in the universe,
link |
00:27:14.320
which I don't think is the case. Yeah, perhaps we can detect a large number of looping patterns
link |
00:27:22.320
that reoccur throughout the universe and fully describe them. And therefore, but then it's,
link |
00:27:28.240
it still becomes exceptionally difficult to see how those patterns interact and what kind of
link |
00:27:32.800
complexity measures. Well, look, the most remarkable thing about the universe is that
link |
00:27:37.040
it has regularity at all. It might not be the case. That it would have regularity at all? Absolutely.
link |
00:27:43.440
I mean, physics is successful, you know, it's full of laws that tell us a
link |
00:27:49.920
lot of detail about how the universe works. I mean, it could be the case that, you know,
link |
00:27:54.080
the 10 to the 90th particles in the universe, they could each do their own thing, but they don't.
link |
00:27:58.080
They all follow, we already know, basically the same physical laws.
link |
00:28:03.360
And that's something that's a very profound fact about the universe. What conclusion you draw from
link |
00:28:09.440
that is unclear. I mean, for the, you know, the early theologians, that was, you know,
link |
00:28:15.120
exhibit number one for the existence of God. Now, you know, people have different conclusions
link |
00:28:20.240
about it. But the fact is, you know, right now, I mean, I happen to be interested,
link |
00:28:24.800
actually, I've just restarted a long-running kind of interest of mine about fundamental physics.
link |
00:28:31.120
I'm kind of like, I'm on, I'm on a bit of a quest, which I'm about to make more public
link |
00:28:37.440
to see if I can actually find the fundamental theory of physics.
link |
00:28:40.560
Excellent. We'll come to that. And I just had a lot of conversations with quantum mechanics
link |
00:28:47.440
folks, so I'm really excited about your take, because I think you have a fascinating take on
link |
00:28:53.040
the fundamental nature of our reality from a physics perspective, and what might be
link |
00:29:00.880
underlying the kind of physics as we think of it today. Okay, let's take a step back.
link |
00:29:06.800
What is computation? It's a good question. Operationally, computation is following rules.
link |
00:29:15.120
That's kind of it. I mean, computation is the process of systematically
link |
00:29:20.320
following rules. And it is the thing that happens when you do that.
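As an aside for readers, here is a minimal sketch of what "systematically following rules" can look like in code: a state, a fixed rule, and repeated application of that rule. The particular substitution rule shown is purely an illustrative choice, not something named in the conversation.

```python
# A minimal illustration of "computation as systematically following rules":
# take a state, apply a fixed rule to it over and over, and observe what happens.
# The substitution rule used here is an arbitrary illustrative choice.

def apply_rule(state: str) -> str:
    """One step of a simple substitution system: rewrite each symbol."""
    rules = {"A": "AB", "B": "A"}          # the fixed rule being followed
    return "".join(rules[symbol] for symbol in state)

state = "A"                                 # a very simple initial condition
for step in range(6):
    print(step, state)
    state = apply_rule(state)               # systematically follow the rule
```

Even with this tiny rule and a one-symbol input, the successive states grow into an increasingly elaborate pattern, which is the sense in which the process, rather than the input data, is doing the work.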
link |
00:29:24.720
So taking initial conditions or taking inputs and following rules. I mean, what are you following
link |
00:29:30.720
rules on? So there has to be some data, some, not necessarily, it can be something where
link |
00:29:36.320
that there's a, you know, very simple input. And then you're following these rules. And you'd say
link |
00:29:42.080
there's not really much data going into this. You can actually pack the initial conditions
link |
00:29:47.040
into the rule, if you want to. So I think the question is, is there a robust notion of computation?
link |
00:29:54.240
That is, what does it mean? What I mean by that is something like this. So,
link |
00:29:57.520
take an example from an area of physics: something like energy.
link |
00:30:02.400
Okay, there are different forms of energy. But somehow energy is a robust concept that
link |
00:30:09.280
doesn't, isn't particular to kinetic energy or, you know, nuclear energy or whatever else,
link |
00:30:15.920
there's a robust idea of energy. So one of the things you might ask is, is there a robust idea
link |
00:30:20.800
of computation? Or does it matter that this computation is running in a Turing machine?
link |
00:30:25.440
This computation is running in a, you know, CMOS silicon CPU. This computation is running in a
link |
00:30:31.280
fluid system in the weather, those kinds of things. Or is there a robust idea of computation
link |
00:30:36.160
that transcends the sort of detailed framework that it's running in? Okay. And is there? Yes.
link |
00:30:43.440
I mean, it wasn't obvious that there was. So it's worth understanding the history
link |
00:30:47.920
and how we got to where we are right now. Because, you know, to say that there is,
link |
00:30:53.040
is a statement in part about our universe. It's not a statement about what is mathematically
link |
00:30:59.120
conceivable. It's about what actually can exist for us. Maybe you can also comment,
link |
00:31:04.960
because energy as a concept is robust. But there's also its intricate, complicated relationship with
link |
00:31:16.080
matter, with mass, which is very interesting.
link |
00:31:25.280
Particles that carry force and particles that have mass, these kinds of ideas,
link |
00:31:31.920
they seem to map to each other, at least in the mathematical sense. Is there a connection between
link |
00:31:38.160
energy and mass and computation? Or are these completely disjoint ideas?
link |
00:31:44.000
We don't know yet. The things that I'm trying to do about fundamental physics
link |
00:31:49.840
may well lead to such a connection, but there is no known connection at this time.
link |
00:31:54.720
So, can you elaborate a little bit more on what, how do you think about
link |
00:31:59.520
computation? What is computation? Yeah. So, I mean, let's, let's tell a little bit of a historical
link |
00:32:05.040
story. Okay. So, you know, back, go back 150 years, people were making mechanical
link |
00:32:12.160
calculators of various kinds. And, you know, the typical thing was you want an adding machine,
link |
00:32:16.480
you go to the adding machine store, basically, you want a multiplying machine, you go to the
link |
00:32:20.480
multiplying machine store, they're different pieces of hardware. And so that means that,
link |
00:32:25.440
at least at the level of that kind of computation, and those kinds of pieces of hardware,
link |
00:32:29.920
there isn't a robust notion of computation. There's the adding machine kind of computation,
link |
00:32:34.080
there's the multiplying machine notion of computation, and they're disjoint. So, what happened
link |
00:32:39.680
in around 1900, people started imagining, particularly in the context of mathematical logic,
link |
00:32:44.800
could you have something which could represent any reasonable function, right? And they came up
link |
00:32:51.040
with things, this idea of primitive recursion was one of the early ideas. And it didn't work.
link |
00:32:56.320
There were reasonable functions that people could come up with that were not represented
link |
00:33:01.440
using the primitives of primitive recursion. Okay. So, then, then along comes 1931 and
link |
00:33:07.360
Gödel's theorem and so on. And looking back, one can see that as part of the process of
link |
00:33:14.720
establishing Gödel's theorem, Gödel basically showed how you could compile,
link |
00:33:20.080
how you could basically compile logical statements, like "this statement is unprovable,"
link |
00:33:26.160
into arithmetic. So, what he essentially did was to show that arithmetic can be a computer,
link |
00:33:33.520
in a sense, that's capable of representing all kinds of other things. And then,
link |
00:33:37.600
Turing came along in 1936 and came up with Turing machines. Meanwhile, Alonzo Church had come
link |
00:33:42.640
up with lambda calculus. And the surprising thing that was established very quickly is
link |
00:33:47.360
the Turing machine idea about what computation might be is exactly the same as the lambda
link |
00:33:52.720
calculus idea of what computation might be. And so, and then there started to be other ideas,
link |
00:33:58.240
register machines, other kinds of representations of computation. And the big surprise was,
link |
00:34:04.800
they all turned out to be equivalent. So, in other words, it might have been the case,
link |
00:34:08.560
like those old adding machines and multiplying machines, that Turing had his idea of computation,
link |
00:34:13.360
Church had his idea of computation, and they were just different. But it isn't true. They're
link |
00:34:18.160
actually all equivalent. So then, by, I would say, the 1970s or so, in sort of the computer
link |
00:34:27.840
science, computation theory area, people had sort of said, oh, Turing machines are kind of what
link |
00:34:32.160
computation is. Physicists were still holding out saying, no, no, no, it's just not how the
link |
00:34:37.440
universe works. We've got all these differential equations. We've got all these real numbers
link |
00:34:41.760
that have infinite numbers of digits. The universe is not a Turing machine.
link |
00:34:45.200
Right. The Turing machines are a small subset of the things that we make in microprocessors
link |
00:34:52.400
and engineering structures and so on. So probably, actually, through my work in the 1980s about sort
link |
00:34:58.640
of the relationship between computation and models of physics, it became a little less clear
link |
00:35:06.080
that there would be, that there was this big sort of dichotomy between what can happen in physics
link |
00:35:13.040
and what happens in things like Turing machines. And I think probably by now,
link |
00:35:17.600
people would mostly think, and by the way, brains were another kind of element of this. I mean,
link |
00:35:23.040
you know, Gödel didn't think that his notion of computation, or what amounted to his notion of
link |
00:35:27.520
computation would cover brains. And Turing wasn't sure either. But although he was a little bit,
link |
00:35:35.280
he got to be a little bit more convinced that it should cover brains. But so I would say by
link |
00:35:43.680
probably sometime in the 1980s, there was beginning to be sort of a general belief that, yes, this
link |
00:35:48.720
notion of computation that could be captured by things like Turing machines was reasonably robust.
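As an aside for readers, here is a minimal sketch of the kind of machine being discussed: a tiny Turing machine simulator in Python. The specific transition table (the classic two-state "busy beaver") is an illustrative choice, not one mentioned in the conversation.

```python
# A tiny Turing machine simulator: a head reads/writes a tape and follows a
# fixed transition table. The table below is the classic 2-state "busy beaver",
# chosen purely as an illustration; any table of this form would work.

def run_turing_machine(transitions, state="A", max_steps=1000):
    tape, head = {}, 0                      # tape: position -> symbol (default 0)
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = tape.get(head, 0)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return tape

# (state, read symbol) -> (symbol to write, head move, next state)
busy_beaver_2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "HALT"),
}

final_tape = run_turing_machine(busy_beaver_2)
print(sum(final_tape.values()), "ones written")   # prints: 4 ones written
```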
link |
00:35:54.960
Now, the next question is, okay, you can have a universal Turing machine that's capable of
link |
00:36:01.840
being programmed to do anything that any Turing machine can do. And, you know, this idea of
link |
00:36:08.320
universal computation, it's an important idea, this idea that you can have one piece of hardware
link |
00:36:12.960
and program it with different pieces of software. You know, that's kind of the idea that launched
link |
00:36:17.760
most modern technology. I mean, that's kind of, that's the idea that launched the computer
link |
00:36:21.600
revolution, software, etc. So, important idea. But the thing that's still kind of holding out
link |
00:36:28.240
from that idea is, okay, there is this universal computation thing, but it seems hard to get to.
link |
00:36:35.120
It seems like if you want to make a universal computer, you have to kind of have a microprocessor with,
link |
00:36:40.000
you know, a million gates in it, and you have to go to a lot of trouble to make something that
link |
00:36:44.800
achieves that level of computational sophistication. Okay, so the surprise for me was the stuff that
link |
00:36:51.680
I discovered in the early 80s, I'm looking at these things called cellular automata,
link |
00:36:56.960
which are really simple computational systems. The thing that was a big surprise to me was
link |
00:37:03.360
that even when their rules were very, very simple, they were doing things that were as
link |
00:37:07.200
sophisticated as they did when their rules were much more complicated. So it didn't look like,
link |
00:37:11.920
you know, this idea, oh, to get sophisticated computation, you have to build something with
link |
00:37:16.560
very sophisticated rules. That idea didn't seem to pan out. And instead, it seemed to be the case
link |
00:37:23.440
that sophisticated computation was completely ubiquitous, even in systems with incredibly
link |
00:37:27.920
simple rules. And so that led to this thing that I call the principle of computational equivalence,
link |
00:37:33.760
which basically says when you have a system that follows rules of any kind, then whenever the system
link |
00:37:42.320
isn't doing things that are in some sense, obviously simple, then the computation that
link |
00:37:48.400
the behavior of the system corresponds to is of equivalent sophistication. So that means that
link |
00:37:54.240
when you kind of go from the very, very, very simplest things you can imagine, then quite
link |
00:37:58.800
quickly you hit this kind of threshold above which everything is equivalent in its computational
link |
00:38:03.840
sophistication. Not obvious that would be the case. I mean, that's a science fact. Well,
link |
00:38:09.520
no, no, no, hold on a second. So this, you've opened with A New Kind of Science. I mean,
link |
00:38:14.800
I remember it was a huge eye opener that such simple things can create such complexity. And
link |
00:38:22.320
yes, there's an equivalence, but it's not a fact. It just appears to be, I mean, as much of a fact as
link |
00:38:28.240
sort of these theories are so elegant that it seems to be the way things are. But let me ask,
link |
00:38:38.320
sort of, you just brought up previously kind of like the communities of computer scientists
link |
00:38:44.400
with their Turing machines, the physicists with their universe, and whoever the heck,
link |
00:38:50.400
maybe neuroscientists looking at the brain. What's your sense in the equivalence? So you've shown
link |
00:38:57.360
through your work that simple rules can create equivalently complex Turing machine systems,
link |
00:39:06.560
right? Is the universe equivalent to the kinds of Turing machines? Is the human brain
link |
00:39:17.520
a kind of Turing machine? Do you see those things basically blending together,
link |
00:39:21.600
or is there still a mystery about how disjoint they are?
link |
00:39:25.120
Well, my guess is that they all blend together. But we don't know that for sure yet. I mean,
link |
00:39:30.080
this, you know, I should say, I said rather glibly that the principle of computational
link |
00:39:35.360
equivalence is sort of a science fact. And I was using air quotes for the science fact.
link |
00:39:42.000
Because when you, it is a, I mean, just to talk about that for a second, then we'll,
link |
00:39:49.760
the thing is that it is, it has a complicated epistemological character, similar to things
link |
00:39:57.200
like the second law of thermodynamics, the law of entropy increase. The, you know,
link |
00:40:02.160
what is the second law of thermodynamics? Is it a law of nature? Is it a thing that is true
link |
00:40:07.120
of the physical world? Is it, is it something which is mathematically provable? Is it something
link |
00:40:12.240
which happens to be true of the systems that we see in the world? Is it in some sense a definition
link |
00:40:18.000
of heat, perhaps? Well, it's a combination of those things. And it's the same thing with the
link |
00:40:23.360
principle of computational equivalence. And in some sense, the principle of computational
link |
00:40:27.520
equivalence is at the heart of the definition of computation. Because it's telling you,
link |
00:40:32.160
there is a thing, there is a robust notion that is equivalent across all these systems,
link |
00:40:37.440
and doesn't depend on the details of each individual system. And that's why we can
link |
00:40:42.000
meaningfully talk about a thing called computation. And we're not stuck talking about,
link |
00:40:46.800
oh, there's computation in Turing machine number three, seven, eight, five,
link |
00:40:51.040
and et cetera, et cetera, et cetera. That's why there is a robust notion like that.
link |
00:40:55.440
Now, on the other hand, can we prove the principle of computational equivalence? Can we
link |
00:41:00.080
prove it as a mathematical result? Well, the answer is, actually, we've got some nice results
link |
00:41:05.520
along those lines that say, you know, throw me a random system with very simple rules. Well,
link |
00:41:11.200
in a couple of cases, we now know that even the very simplest rules we can imagine of a certain
link |
00:41:17.280
type are universal, and do sort of follow what you would expect from the principle of
link |
00:41:23.280
computational equivalence. So that's a nice piece of sort of mathematical evidence
link |
00:41:26.800
for the principle of computational equivalence.
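As an aside for readers, here is a minimal sketch of the kind of simple-rule system being discussed: an elementary cellular automaton in Python. Rule 30 is used here purely as an illustration of a very simple rule producing complicated behavior; the specific rule number is an editorial choice, not something named at this point in the conversation.

```python
# Minimal elementary cellular automaton: each cell looks at itself and its two
# neighbors, and a fixed 8-entry rule table decides its next value.
# Rule 30 is chosen here as an illustrative simple rule with complex behavior.

def step(cells, rule_number=30):
    """Advance one row: new cell value depends only on the 3-cell neighborhood."""
    rule = [(rule_number >> i) & 1 for i in range(8)]   # rule table, indexed 0..7
    n = len(cells)
    return [rule[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

width = 63
cells = [0] * width
cells[width // 2] = 1                      # single black cell as initial condition
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, rule_number=30)
```

Running this prints successive rows of the automaton; despite the eight-entry rule table and one-cell initial condition, the pattern quickly becomes intricate, which is the kind of threshold-crossing behavior the principle of computational equivalence is about.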
link |
00:41:28.720
Just to linger on that point, the simple rules creating sort of these
link |
00:41:34.080
complex behaviors. But is there a way to mathematically say that this behavior is complex?
link |
00:41:44.240
You've mentioned that you cross a threshold.
link |
00:41:46.960
Right. So there are various indicators. So for example, one thing would be,
link |
00:41:51.600
is it capable of universal computation? That is, given the system, do there exist
link |
00:41:56.880
initial conditions for the system that can be set up to essentially represent programs to do
link |
00:42:01.920
anything you want: to compute primes, to compute pi, to do whatever you want. Right. So that's an
link |
00:42:06.800
indicator. So we know in a couple of examples that, yes, the simplest candidates that could
link |
00:42:14.160
conceivably have that property do have that property. And that's what the principle of
link |
00:42:17.920
computational equivalence might suggest. But this principle of computational equivalence,
link |
00:42:23.840
one question about it is, is it true for the physical world? Right. It might be true for
link |
00:42:29.120
all these things we come up with the Turing machines, the cellular automata, whatever else.
link |
00:42:34.080
Is it true for our actual physical world? Is it true for brains, which are an element
link |
00:42:39.920
of the physical world? We don't know for sure. And that's not the type of question that we will
link |
00:42:45.440
have a definitive answer to, because there's a sort of scientific induction issue. You can say,
link |
00:42:51.680
well, it's true for all these brains. But this person over here is really special,
link |
00:42:55.360
and it's not true for them. And the only way that that cannot be what happens is,
link |
00:43:02.400
if we finally nail it and actually get a fundamental theory for physics, and it turns out to
link |
00:43:08.320
correspond to, let's say, a simple program, if that is the case, then we will basically have
link |
00:43:13.440
reduced physics to a branch of mathematics, in the sense that we will not be like right now with
link |
00:43:18.720
physics, where we're like, well, this is the theory, these are the rules that apply here. But in the
link |
00:43:24.400
middle of that, right by that black hole, maybe these rules don't apply and something else applies,
link |
00:43:31.520
and there may be another piece of the onion that we have to peel back. But if we can get to the
link |
00:43:36.640
point where we actually have, this is the fundamental theory of physics, here it is,
link |
00:43:41.520
it's this program, run this program, and you will get our universe, then we've kind of reduced the
link |
00:43:47.520
problem of figuring out things in physics to a problem of doing what turn out to be very
link |
00:43:53.040
difficult, irreducibly difficult mathematical problems. But it no longer is the case that we
link |
00:43:58.560
can say that somebody can come in and say, whoops, you were right about all these things about
link |
00:44:03.200
Turing machines, but you're wrong about the physical universe. We know there's sort of
link |
00:44:07.360
ground truth about what's happening in the physical universe. Now, I happen to think,
link |
00:44:11.600
I mean, you asked me at an interesting time, because I'm just in the middle of starting to
link |
00:44:16.640
reenergize my project to kind of study the fundamental theory of physics.
link |
00:44:23.040
As of today, I'm very optimistic that we're actually going to find something and that it's
link |
00:44:27.760
going to be possible to see that the universe really is computational in that sense. But I
link |
00:44:32.560
don't know because we're betting against the universe, so to speak. And it's not like when
link |
00:44:40.240
I spend a lot of my life building technology, and then I know what's in there. And it may have
link |
00:44:46.720
unexpected behavior, it may have bugs, things like that. But fundamentally, I know what's in there.
link |
00:44:50.160
For the universe, I'm not in that position, so to speak.
link |
00:44:54.720
What kind of computation do you think the fundamental laws of physics might emerge from?
link |
00:45:01.440
So just to clarify, you've done a lot of fascinating work with kind of discrete
link |
00:45:08.480
kinds of computation, you know, cellular automata, and we'll talk about it, that
link |
00:45:14.720
have this very clean structure. It's such a nice way to demonstrate that simple rules
link |
00:45:20.320
can create immense complexity. But what kind, you know, is that actually,
link |
00:45:27.520
are cellular automata sufficiently general to describe the kinds of computation that might
link |
00:45:33.040
create the laws of physics? Just to give, can you give a sense of what kind of computation do you
link |
00:45:38.080
think would create them? Well, so this is a slightly complicated issue, because as soon as you have
link |
00:45:44.160
universal computation, you can in principle simulate anything with anything. But it is not a
link |
00:45:49.600
natural thing to do. And if you're asking, were you to try to find our physical universe
link |
00:45:55.520
by looking at possible programs in the computational universe of all possible programs,
link |
00:46:00.240
would the ones that correspond to our universe be small and simple enough that we might find them
link |
00:46:06.240
by searching that computational universe? We've got to have the right basis, so to speak. We have
link |
00:46:10.640
got to have the right language in effect for describing computation for that to be feasible.
link |
00:46:15.760
So the thing that I've been interested in for a long time is, what are the most
link |
00:46:19.040
structureless structures that we can create with computation? So in other words, if you say a
link |
00:46:24.160
cellular automaton, it has a bunch of cells that are arrayed on a grid, and every cell is updated
link |
00:46:31.040
in synchrony at a particular moment, when there's a tick of the clock, so to speak, and it goes a tick
link |
00:46:37.440
of a clock, and every cell gets updated at the same time. That's a very specific, very rigid
link |
00:46:43.120
kind of thing. But my guess is that when we look at physics, and we look at things like space and
link |
00:46:49.200
time, that what's underneath space and time is something as structureless as possible, that what
link |
00:46:55.440
we see, what emerges for us as physical space, for example, comes from something that is sort of
link |
00:47:01.920
arbitrarily unstructured underneath. And so I've been for a long time interested in kind of what
link |
00:47:08.320
are the most structureless structures that we can set up. And actually, what I had thought about
link |
00:47:14.160
for ages is using graphs, networks, where essentially, so let's talk about space, for
link |
00:47:19.920
example. So what is space? That's a kind of question one might ask. Back in the early days of quantum
link |
00:47:26.560
mechanics, for example, people said, oh, for sure, space is going to be discrete, because all these
link |
00:47:31.280
other things we're finding are discrete. But that never worked out in physics. And so space in
link |
00:47:35.600
physics today is always treated as this continuous thing, just like Euclid imagined it. I mean,
link |
00:47:41.040
the very first thing Euclid says in his sort of common notions is, a point is something which
link |
00:47:46.480
has no part. In other words, there are points that are arbitrarily small. And there's a continuum
link |
00:47:52.560
of possible positions of points. And the question is, is that true? And so, for example, if we look
link |
00:47:57.920
at, I don't know, a fluid like air or water, we might say, oh, it's a continuous fluid, we can
link |
00:48:02.560
pour it, we can do all kinds of things continuously. But actually, we know, because we know the
link |
00:48:06.560
physics of it, that it consists of a bunch of discrete molecules bouncing around and only in
link |
00:48:10.560
the aggregate is it behaving like a continuum. And so the possibility exists that that's true
link |
00:48:16.400
of space too. People haven't managed to make that work with existing frameworks and physics.
link |
00:48:22.320
But I've been interested in whether one can imagine that underneath space and also underneath time
link |
00:48:28.640
is something more structureless. And the question is, is it computational? So there are a couple
link |
00:48:34.560
of possibilities. It could be computational, somehow fundamentally equivalent to a Turing
link |
00:48:38.480
machine. Or it could be fundamentally not. So how could it not be? Well, a Turing
link |
00:48:44.000
machine essentially deals with integers, whole numbers, at some level. And, you know, it can do
link |
00:48:49.120
things like it can add one to a number, it can do things like this. It can also store whatever
link |
00:48:54.000
the heck it did. Yes, it can have infinite storage. But when one thinks about doing
link |
00:49:01.280
physics or sort of idealized physics or idealized mathematics, one can deal with real numbers,
link |
00:49:08.320
numbers with an infinite number of digits, numbers which are absolutely precise. And
link |
00:49:12.960
one can say, we can take this number and we can multiply it by itself.
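As a small concrete contrast, purely for illustration: a digital, Turing-machine-style computation can square an exactly represented rational number, but an idealized real number such as the square root of 2, with its infinite run of digits, can only be held approximately, so squaring the approximation does not give exactly 2.

    from fractions import Fraction
    import math

    # Exact computation over rationals: squaring is exact, like integer arithmetic.
    x = Fraction(3, 7)
    print(x * x)              # exactly 9/49

    # An idealized real like sqrt(2) has infinitely many digits; a finite machine
    # only holds an approximation, so its square is not exactly 2.
    approx = math.sqrt(2.0)
    print(approx * approx)    # prints 2.0000000000000004 on typical IEEE hardware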
link |
00:49:16.240
Are you comfortable with infinity in this context? Are you comfortable in the context of
link |
00:49:21.040
computation? Do you think infinity plays a part? I think that the role of infinity is complicated.
link |
00:49:26.640
Infinity is useful in conceptualizing things. It's not actualizable. Almost by definition,
link |
00:49:34.080
it's not actualizable. But do you think infinity is part of the thing that might underlie the
link |
00:49:38.720
laws of physics? I think that, no. I think there are many questions that you might ask about
link |
00:49:45.440
physics which inevitably involve infinity. Like when you say, you know, is faster than
link |
00:49:49.760
light travel possible? You could say, given the laws of physics, can you make something,
link |
00:49:56.080
even arbitrarily large, even, quote, infinitely large, that will make faster than light travel
link |
00:50:02.400
possible? Then you're thrown into dealing with infinity as a kind of theoretical question.
link |
00:50:07.440
But I mean, talking about what's underneath space and time and how one can make a computational
link |
00:50:14.560
infrastructure, one possibility is that you can't make a computational infrastructure
link |
00:50:20.160
in a Turing machine sense. That is, you really have to be dealing with precise real numbers.
link |
00:50:25.440
You're dealing with partial differential equations which just have precise real numbers at arbitrarily
link |
00:50:31.280
closely separated points. You have a continuum for everything. Could be that that's what happens,
link |
00:50:36.960
that there's sort of a continuum for everything and precise real numbers for everything. And then
link |
00:50:40.720
the things I'm thinking about are wrong. And that's the risk you take if you're trying to
link |
00:50:47.680
sort of say something about nature: you might just be wrong. For me personally,
link |
00:50:53.280
it's kind of a strange thing because I've spent a lot of my life building technology
link |
00:50:57.440
where you can do something that nobody cares about, but you can't be sort of wrong in that sense,
link |
00:51:02.880
in the sense you build your technology and it does what it does. But I think this question of
link |
00:51:08.720
what the sort of underlying computational infrastructure for the universe might be,
link |
00:51:13.280
it's sort of inevitable it's going to be fairly abstract because if you're going to get all these
link |
00:51:20.400
things like there are three dimensions of space, there are electrons, there are muons, there are
link |
00:51:24.240
quarks, there are this, you don't get to, if the model for the universe is simple, you don't get
link |
00:51:30.320
to have sort of a line of code for each of those things. You don't get to have sort of the muon
link |
00:51:36.480
case, the tau lepton case and so on. All of those things have to be emergent somehow.
link |
00:51:40.720
Right. Something deeper. Right. So that means it's sort of inevitable that it's a little hard to
link |
00:51:45.760
talk about what the sort of underlying structureless structure actually is. Do you think we human
link |
00:51:52.480
beings have the cognitive capacity to understand, if we're to discover it, to understand the kinds
link |
00:51:58.320
of simple structure from which these laws can emerge? Do you think that's a hopeless pursuit?
link |
00:52:04.160
Well, here's what I think. I think that, I mean, I'm right in the middle of this right now. So I'm
link |
00:52:08.800
telling you that this human has a hard time understanding a bunch of the things
link |
00:52:15.680
that are going on. But what happens in understanding is one builds waypoints. I mean, if you said
link |
00:52:21.200
understand modern 21st century mathematics starting from counting back in whenever counting was
link |
00:52:29.600
invented 50,000 years ago, whatever it was, that would be really difficult. But what happens is we
link |
00:52:35.440
build waypoints that allow us to get to high levels of understanding. And we see the same thing
link |
00:52:40.080
happening in language. When we invent a word for something, it provides kind of a cognitive
link |
00:52:45.520
anchor, a kind of a waypoint. Take 'podcast' or something: you could be explaining,
link |
00:52:51.680
well, it's a thing which works this way, that way, the other way. But as soon as you have the word
link |
00:52:56.480
podcast, and people kind of societally understand it, you start to be able to build on top of that.
link |
00:53:02.320
And so I think, and that's kind of the story of science actually too. I mean, science is about
link |
00:53:06.800
building these kind of waypoints where we find this sort of cognitive mechanism for understanding
link |
00:53:12.880
something. Then we can build on top of it. We have the idea of, I don't know, differential
link |
00:53:17.120
equations, we can build on top of that. We have this idea, that idea. So my hope is that if it
link |
00:53:23.600
is the case that we have to go all the way sort of from the sand to the computer, and there's no
link |
00:53:29.200
waypoints in between, then we're toast. We won't be able to do that. Well, eventually we might.
link |
00:53:35.200
So if we as clever apes are good enough at building those abstractions, eventually from sand
link |
00:53:41.440
we'll get to the computer, right? It just might be a longer journey.
link |
00:53:44.640
The question is, as you asked, whether our human brains
link |
00:53:49.360
will understand what's going on. And that's a different question, because for that,
link |
00:53:54.800
it requires steps that are sort of from which we can construct a human understandable narrative.
link |
00:54:02.320
And that's something that I think I am somewhat hopeful that that will be possible. Although,
link |
00:54:08.800
you know, as of literally today, if you ask me, I'm confronted with things that I don't understand
link |
00:54:14.080
very well. So this is a small pattern in a computation trying to understand the rules
link |
00:54:20.800
under which the computation functions. And it's an interesting question under which kinds of
link |
00:54:27.280
computations such a creature can't understand itself. My guess is that within, so we didn't
link |
00:54:34.080
talk much about computational irreducibility, but it's a consequence of this principle of
link |
00:54:37.840
computational equivalence. And it's sort of a core idea that one has to understand, I think,
link |
00:54:42.240
which is: you're doing a computation, and you can figure out what happens in
link |
00:54:47.120
the computation just by running every step in the computation and seeing what happens.
link |
00:54:51.360
Or you can say, let me jump ahead and figure out, you know, have something smarter that
link |
00:54:56.880
figures out what's going to happen before it actually happens. And a lot of traditional science
link |
00:55:02.320
has been about that act of computational reducibility. It's like, we've got these equations,
link |
00:55:08.720
and we can just solve them, and we can figure out what's going to happen. We don't have to trace
link |
00:55:12.400
all of those steps, we just jump ahead because we solved these equations. Okay, so one of the
link |
00:55:17.200
things that is a consequence of the principle of computational equivalence is you don't always get
link |
00:55:20.800
to do that. Many, many systems will be computationally irreducible, in the sense that the only way
link |
00:55:26.320
to find out what they do is just follow each step and see what happens. Why is that? Well,
link |
00:55:30.560
if you're saying, well, we, with our brains, we're a lot smarter, we don't have to mess around like
link |
00:55:36.320
the little cellular automaton going through and updating all those cells, we can just, you know,
link |
00:55:41.120
use the power of our brains to jump ahead. But if the principle of computational equivalence is
link |
00:55:46.080
right, that's not going to be correct, because it means that there's us doing our computation in
link |
00:55:52.400
our brains, there's a little cellular automaton doing its computation. And the principle of
link |
00:55:56.800
computational equivalence says these two computations are fundamentally equivalent. So that means
link |
00:56:02.320
we don't get to say we're a lot smarter than the cellular automaton and jump ahead,
link |
00:56:06.080
because we're just doing computation that's of the same sophistication as the cellular automaton
link |
00:56:10.640
itself. That's computational irreducibility. It's fascinating. And that's a really
link |
00:56:15.680
powerful idea. I think that's both depressing and humbling and so on, that we and a
link |
00:56:22.960
cellular automaton are the same. But the question, when we're talking about the fundamental laws of physics,
link |
00:56:28.080
is kind of the reverse question. You're not predicting what's going to happen. You have to
link |
00:56:32.560
run the universe for that. But saying, can I understand what rules likely generated me?
link |
00:56:38.160
I understand. But the problem is, to know whether you're right, you have to have some
link |
00:56:44.320
computational reducibility, because we are embedded in the universe. If the only way to
link |
00:56:48.720
know whether we've got the universe right is just to run the universe, we don't get to do that because it
link |
00:56:53.120
just ran for 14.6 billion years or whatever. And we don't, you know, we can't rerun it, so to speak.
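As a toy illustration of the "jump ahead" versus "run every step" distinction, and not anything from Wolfram's actual models: for a computationally reducible system you can go straight to step t with a formula, while for something like the center column of the Rule 30 cellular automaton no general shortcut is known, so you simply perform every update.

    # Reducible: constant acceleration. We can jump straight to any time t
    # with a closed-form formula instead of simulating every moment.
    def height_at(t, h0=100.0, g=9.8):
        return h0 - 0.5 * g * t * t

    # Irreducible, as far as anyone knows: the center column of Rule 30.
    # The only known way to get the value at step t is to run all t steps.
    def rule30_center_value(t):
        width = 2 * t + 3                 # wide enough that the edges never matter
        row = [0] * width
        row[width // 2] = 1
        for _ in range(t):
            row = [
                (30 >> (row[i - 1] * 4 + row[i] * 2 + row[(i + 1) % width])) & 1
                for i in range(width)
            ]
        return row[width // 2]

    print(height_at(3.0))                 # jump ahead: one formula evaluation
    print(rule30_center_value(20))        # no shortcut: twenty full updates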
link |
00:56:58.560
So we have to hope that there are pockets of computational reducibility sufficient to be
link |
00:57:04.240
able to say, yes, I can recognize those are electrons there. And I think that it is a feature
link |
00:57:10.560
of computational irreducibility. It's sort of a mathematical feature that there is always an
link |
00:57:14.800
infinite collection of pockets of reducibility. The question of whether they land in the right
link |
00:57:19.280
place and whether we can sort of build a theory based on them is unclear. But to this point about,
link |
00:57:24.640
you know, whether we as observers in the universe built out of the same stuff as the universe
link |
00:57:29.680
can figure out the universe, so to speak, that relies on these pockets of reducibility. Without
link |
00:57:35.840
the pockets of reducibility, it won't work, can't work. But I think this question about how observers
link |
00:57:41.680
operate, it's one of the features of science over the last 100 years, particularly, has been
link |
00:57:47.920
that every time we get more realistic about observers, we learn a bit more about science.
link |
00:57:52.960
So for example, relativity was all about: observers don't get to say, you know,
link |
00:57:58.960
what's simultaneous with what; they have to just wait for the light signal to arrive to decide
link |
00:58:03.200
what's simultaneous. Or for example, in thermodynamics, observers don't get to say the
link |
00:58:08.960
position of every single molecule in a gas, they can only see the kind of large scale features,
link |
00:58:14.320
and that's why the second law of thermodynamics, the law of entropy increase, and so on works. If you
link |
00:58:19.040
could see every individual molecule, you wouldn't conclude something about thermodynamics, you would
link |
00:58:25.920
conclude, oh, these molecules are just all doing these particular things; you wouldn't be able to see this
link |
00:58:30.000
aggregate fact. So I strongly expect that, and in fact, in the theories that I have,
link |
00:58:35.840
that one has to be more realistic about the computation and other aspects of observers
link |
00:58:42.800
in order to actually make a correspondence with what we experience. In fact,
link |
00:58:47.920
my little team and I have a little theory right now about how quantum mechanics may work, which is
link |
00:58:53.120
a very wonderfully bizarre idea about how the sort of thread of human consciousness relates to
link |
00:59:01.200
what we observe in the universe, but it takes several steps to explain what that's about.
link |
00:59:06.000
What do you make of the mess of the observer at the lower level of quantum mechanics? Sort of the
link |
00:59:12.240
textbook definition of quantum mechanics kind of says that there's two worlds. One is the world
link |
00:59:22.240
that actually is, and the other is what's observed. How do you make sense of that?
link |
00:59:29.280
Well, I think actually the ideas we've recently had might actually give a way into this,
link |
00:59:36.480
but I don't know yet. I mean, I think it's a mess. I mean, the fact is,
link |
00:59:43.840
one of the things that's interesting is, when, you know, people look at these models
link |
00:59:49.600
that I started talking about 30 years ago now, they say, oh, no, that can't possibly be right.
link |
00:59:54.160
You know, what about quantum mechanics? Right? You say, okay, tell me what is the essence of
link |
00:59:59.520
quantum mechanics? What do you want me to be able to reproduce to know that I've got quantum
link |
01:00:03.440
mechanics, so to speak? Well, and that question comes up, comes up very operationally, actually,
link |
01:00:08.400
because we've been doing a bunch of stuff with quantum computing. And there are all these companies
link |
01:00:12.000
that say, we have a quantum computer, and we say, let's connect to your API, and let's actually run
link |
01:00:17.280
it. And they're like, well, maybe you shouldn't do that yet. We're not quite ready yet. And one of
link |
01:00:23.040
the questions that I've been curious about is, if I have five minutes with a quantum computer,
link |
01:00:27.600
how can I tell if it's really a quantum computer, or whether it's a simulator at the other end?
link |
01:00:32.000
Right? And it turns out it's really hard. It turns out it's like a lot of
link |
01:00:36.320
these questions about sort of what is intelligence, what's life. That's a Turing test for a quantum
link |
01:00:41.440
computer. That's right. That's right. It's like, are you really a quantum computer? And I think
link |
01:00:45.920
the simulation, the, yes, exactly. Is it just a simulation? Or is it really a quantum computer?
link |
01:00:51.200
Same issue all over again. But that, so, you know, this, this whole issue about
link |
01:00:57.280
the sort of mathematical structure of quantum mechanics, and the completely separate
link |
01:01:03.120
thing that is our experience in which we think definite things happen, whereas quantum mechanics
link |
01:01:08.480
doesn't say definite things ever happen. Quantum mechanics is all about the amplitudes for
link |
01:01:12.400
different things to happen. But yet, our thread of consciousness operates as if definite things
link |
01:01:19.520
are happening. But to linger on the point, you've kind of mentioned the structure that could
link |
01:01:27.040
underlie everything and this idea that it could perhaps have something like a structure of a
link |
01:01:32.720
graph. Can you elaborate why your intuition is that there's a graph structure of nodes and edges
link |
01:01:39.280
and what it might represent? Right. Okay. So the question is, what is, in a sense,
link |
01:01:45.920
the most structureless structure you can imagine, right? So, and in fact, what I've recently
link |
01:01:53.680
realized in the last year or so, I have a new most structureless structure.
link |
01:01:58.400
By the way, the question itself is a beautiful one and a powerful one in itself. So even without
link |
01:02:03.840
an answer, just the question is a really strong question. Right. Right. But what's your new idea?
link |
01:02:08.960
Well, it has to do with hypergraphs. Essentially, what is interesting about the sort of
link |
01:02:15.680
model I have now is, it's a little bit like what happened with computation. Everything that I think
link |
01:02:22.160
of as, oh, well, maybe the model is this, I discover it's equivalent. And that's quite
link |
01:02:28.480
encouraging, because it's like, I could say, well, I'm going to look at trivalent graphs with, you
link |
01:02:33.680
know, three edges for each node and so on. Or I could look at this special kind of graph,
link |
01:02:37.600
or I could look at this kind of algebraic structure. And turns out that the things I'm
link |
01:02:43.040
now looking at, everything that I've imagined that is a plausible type of structureless structure
link |
01:02:49.120
is equivalent to this. So what is it? Well, a typical way to think about it is,
link |
01:02:56.160
well, so you might have some collection of tuples, collections of, let's say, numbers.
link |
01:03:06.640
So you might have one, three, five; two, three, four; just little collections of numbers,
link |
01:03:14.800
triples of numbers, let's say quadruples of numbers, pairs of numbers, whatever. And you
link |
01:03:19.600
have all these sort of floating little tuples, they're not in any particular order. And that
link |
01:03:27.360
sort of floating collection of tuples, and I told you this was abstract, represents the whole
link |
01:03:32.960
universe. The only thing that relates them is, when a symbol is the same, it's the same, so
link |
01:03:40.720
to speak. So if you have two tuples, and they contain the same symbol, let's say at the same
link |
01:03:45.440
position of the tuple, the first element of the tuple, then that represents a relation.
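A minimal sketch of the kind of structure being described; the specific tuples here are made up purely for illustration. The "universe" is nothing but an unordered collection of tuples, and two tuples are related only when they share a symbol.

    # The state of the toy universe: just a collection of tuples of symbols.
    state = [
        (1, 3, 5),
        (2, 3, 4),
        (5, 2),
    ]

    # The only relation between tuples is sharing a symbol.
    def related(t1, t2):
        return bool(set(t1) & set(t2))

    for a in state:
        for b in state:
            if a is not b and related(a, b):
                print(a, "relates to", b, "via shared symbols", set(a) & set(b))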
link |
01:03:50.960
Okay, so let me try and peel this back. Wow. Okay. I told you it's
link |
01:03:56.720
abstract. So the relationship is formed by
link |
01:04:02.240
some aspect of sameness. Right. But think about it in terms of a graph. Yeah. So a graph,
link |
01:04:08.080
bunch of nodes, let's say you number each node. Okay. Then what is a graph? A graph is a set of
link |
01:04:14.400
pairs that say this node has an edge connecting it to this other node. So that's it:
link |
01:04:21.040
a graph is just a collection of those pairs that say this node connects to this other node.
link |
01:04:28.400
So this is a generalization of that in which instead of having pairs, you have arbitrary
link |
01:04:34.000
n-tuples. That's it. That's the whole story. And now the question is, okay, so that might be,
link |
01:04:41.600
that might represent the state of the universe. How does the universe evolve? What does the
link |
01:04:45.200
universe do? And so the answer is that what I'm looking at is transformation rules on these
link |
01:04:52.080
hypergraphs. In other words, you say: whenever you see a piece of this hypergraph
link |
01:05:00.240
that looks like this, turn it into a piece of hypergraph that looks like this. So on a graph,
link |
01:05:06.080
it might be when you see the sub graph, when you see this thing with a bunch of edges hanging out
link |
01:05:10.000
in this particular way, then rewrite it as this other graph. Okay. And so that's the whole story.
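Here is a toy sketch of such a transformation rule in Python; the particular rule and the representation are assumptions made for illustration and are not the actual model. Whenever a piece of the hypergraph matching the pattern is found, it is replaced by a new piece, possibly introducing a fresh node.

    from itertools import count

    fresh_ids = count(100)        # a supply of brand-new node names (arbitrary start)

    # The state: a hypergraph as a list of tuples of node ids.
    state = [(1, 2), (2, 3)]

    # Toy rule: wherever an edge (x, y) is found, replace it by the two edges
    # (x, y) and (y, z), where z is a freshly created node.
    def apply_rule_once(state):
        for i, (x, y) in enumerate(state):
            z = next(fresh_ids)
            # remove the matched piece, add the rewritten piece
            return state[:i] + state[i + 1:] + [(x, y), (y, z)]
        return state              # nothing matched, nothing changes

    for _ in range(3):
        state = apply_rule_once(state)
        print(state)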
link |
01:05:17.440
So the question is what, so now you say, I mean, think, as I say, this is quite abstract. And one
link |
01:05:25.520
of the questions is, where do you do those updates? So you've got this giant graph,
link |
01:05:30.880
what triggers the updating? Like, what's the ripple effect of it?
link |
01:05:35.440
And I suspect everything's discrete even in time. So,
link |
01:05:41.840
okay. So the question is, where do you do the updates? Yes. And the answer is the rule is,
link |
01:05:45.920
you do them wherever they apply. And the order in which the updates are
link |
01:05:51.120
done is not defined. So there may be many possible orderings
link |
01:05:56.400
for these updates. Now, the point is, imagine you're an observer in this universe. And you
link |
01:06:02.960
say, did something get updated? Well, you don't, in any sense, know, until you yourself have been
link |
01:06:08.960
updated. Right. So in fact, all that you can be sensitive to is essentially the causal network
link |
01:06:17.120
of how an event over there affects an event that's in you. It doesn't even feel like observation. That's
link |
01:06:25.280
like, that's something else. You're just part of the whole thing. Yes, you're part of it. But even
link |
01:06:29.840
to have, so the end result of that is all you're sensitive to is this causal network of what event
link |
01:06:36.480
affects what other event. I'm not making a big statement about sort of the structure of the
link |
01:06:42.080
observer. I'm simply saying, I'm simply making the argument that what happens, the microscopic
link |
01:06:48.240
order of these rewrites is not something that any observer, any conceivable observer in this
link |
01:06:54.480
universe can be affected by. Because the only thing the observer can be affected by is this
link |
01:07:00.640
causal network of how the events in the observer are affected by other events that happen in the
link |
01:07:07.600
universe. So the only thing you have to look at is the causal network. You don't really have
link |
01:07:11.200
to look at this microscopic rewriting that's happening. So these rewrites are happening
link |
01:07:15.360
wherever they, they happen wherever they feel like.
link |
01:07:18.560
The causal network. So the idea would be,
link |
01:07:26.400
like, what gets updated, the sequence of things, is undefined. Is
link |
01:07:32.640
that what you mean by the causal network? Well, the causal network is,
link |
01:07:36.800
given that an update has happened, that's an event. Then the question is, is that
link |
01:07:41.680
event causally related to, does that event, if that event didn't happen,
link |
01:07:45.680
then some future event couldn't happen yet. And so you build up this network of what affects what.
link |
01:07:53.600
And so what that does, so when you build up that network, that's kind of the observable
link |
01:07:59.280
aspect of the universe in some sense. And so then you can ask questions about,
link |
01:08:04.720
how robust is that observable network of what's happening in the universe?
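A sketch of the bookkeeping being described, with all names and the toy rule invented for illustration: each application of a rule is an "event", and an event depends causally on an earlier event if it consumes a hyperedge that the earlier event created. The observable structure is this graph of dependencies, not the particular order in which the updates happened to be applied.

    # Toy causal-network bookkeeping. Each rule application is an "event";
    # event B depends on event A if B consumes an edge that A created.
    state = {("a", "b"): "init", ("b", "c"): "init"}   # edge -> the event that made it
    events = []
    causal_edges = []                                  # (earlier event, later event)
    node_counter = 0

    def fresh_node():
        global node_counter
        node_counter += 1
        return f"n{node_counter}"

    def apply_rule(edge):
        # Rule: consume (x, y); create (x, m) and (m, y) with m a fresh node.
        event = f"e{len(events)}"
        events.append(event)
        creator = state.pop(edge)
        if creator != "init":
            causal_edges.append((creator, event))      # record the causal dependency
        x, y = edge
        m = fresh_node()
        state[(x, m)] = event
        state[(m, y)] = event

    # Apply updates in some arbitrary order; only the causal edges are "observable".
    for _ in range(4):
        apply_rule(next(iter(state)))
    print(causal_edges)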
link |
01:08:09.680
Okay, so here's where it starts getting kind of interesting. So for certain kinds of microscopic
link |
01:08:14.880
rewriting rules, the order of rewrites does not matter to the causal network. And so this is,
link |
01:08:21.360
okay, mathematical logic moment, this is equivalent to the Church-Rosser property or the
link |
01:08:26.400
confluence property of rewrite rules. And it's the same reason that if you're simplifying an
link |
01:08:30.960
algebraic expression, for example, you can say, oh, let me expand those terms out, let me factor
link |
01:08:36.000
those pieces, doesn't matter what order you do that in, you'll always get the same answer.
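The same point can be made with a tiny confluent rewrite system in Python, offered only as an illustration: repeatedly swapping adjacent out-of-order letters in a string, in whatever order you like, always terminates at the same sorted string, just as expanding and factoring in either order leads to the same simplified expression.

    import random

    # A tiny confluent rewrite system: "swap any adjacent pair that is out of
    # order". The rule can be applied in any order, yet the final result is
    # always the same sorted string, the analogue of rewrite order not mattering.
    def rewrite_until_done(s, rng):
        s = list(s)
        while True:
            sites = [i for i in range(len(s) - 1) if s[i] > s[i + 1]]
            if not sites:
                return "".join(s)
            i = rng.choice(sites)            # pick an applicable rewrite at random
            s[i], s[i + 1] = s[i + 1], s[i]

    results = {rewrite_until_done("universe", random.Random(seed)) for seed in range(20)}
    print(results)                           # a single result, whatever the order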
link |
01:08:40.560
And it's the same fundamental phenomenon that, for certain kinds of microscopic
link |
01:08:46.400
rewrite rules, causes the causal network to be independent of the microscopic order of
link |
01:08:52.240
rewritings. Why is that property important? Because it implies special relativity.
link |
01:08:58.640
I mean, the reason it's important is that, well, special
link |
01:09:05.040
relativity says you can look at different reference frames,
link |
01:09:11.760
you can have different, you can be looking at your notion of what space and what's time
link |
01:09:16.400
can be different, depending on whether you're traveling at a certain speed, depending on
link |
01:09:19.760
whether you're doing this, that and the other. But nevertheless, the laws of physics are the
link |
01:09:23.600
same. That's what the principle of special relativity says is laws of physics are the
link |
01:09:27.840
same independent of your reference frame. Well, turns out this sort of change of the microscopic
link |
01:09:35.440
rewriting order is essentially equivalent to a change of reference frame, or at least
link |
01:09:39.440
there's a sub part of how that works, that's equivalent to change of reference frame. So,
link |
01:09:44.160
somewhat surprisingly, and sort of for the first time in forever, it's possible for an
link |
01:09:49.440
underlying microscopic theory to imply special relativity, to be able to derive it. It's not
link |
01:09:54.720
something you put in as an assumption; it's something where this other property, causal
link |
01:10:00.640
invariance, which is also the property that implies that there's a single thread of time
link |
01:10:06.000
in the universe. It might not otherwise be the case, and that's what would lead to
link |
01:10:12.400
the possibility of an observer thinking that definite stuff happens. Otherwise, you've got
link |
01:10:17.280
all these possible rewriting orders, and who's to say which one occurred. But with this causal
link |
01:10:21.920
invariance property, there's a notion of a definite thread of time.
link |
01:10:25.360
It sounds like that kind of idea of time, even space, would be emergent from the system.
link |
01:10:30.800
Oh, yeah. No, I mean, it's not a fundamental part of the system.
link |
01:10:33.360
No, no, at a fundamental level, all you've got is a bunch of nodes connected by hyperedges or
link |
01:10:38.640
whatever. So there's no time, there's no space. That's right. But the thing is that
link |
01:10:43.440
it's just like imagining, imagine you're just dealing with a graph, and imagine you have something
link |
01:10:47.920
like a, you know, like a honeycomb graph where you have a bunch of hexagons, you know, that graph
link |
01:10:53.680
at a microscopic level, it's just a bunch of nodes connected to other nodes. But at a macroscopic
link |
01:10:58.160
level, you say that looks like a honeycomb, you know, this lattice, it looks like a two dimensional,
link |
01:11:03.840
you know, manifold of some kind, it looks like a two dimensional thing. If you connect it
link |
01:11:08.400
differently, if you just connect all the nodes one to another in kind of a sort of linked
link |
01:11:12.480
list type structure, then you'd say, well, that looks like a one dimensional space.
link |
01:11:17.120
But at the microscopic level, all these are just networks with nodes; at the macroscopic level,
link |
01:11:22.320
they look like something that's like one of our sort of familiar kinds of space.
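One way to make "looks like a two dimensional thing" concrete, as a sketch under simple assumptions rather than the project's actual method: grow a ball of nodes outward from some starting node and watch how the count scales with graph distance r. It grows roughly like r for a chain and roughly like r squared for a grid-like lattice.

    from collections import deque

    def ball_sizes(neighbors, start, max_r):
        # Count how many nodes lie within graph distance r of `start`,
        # for r = 0 .. max_r, using a breadth-first search.
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if dist[node] >= max_r:
                continue
            for nb in neighbors(node):
                if nb not in dist:
                    dist[nb] = dist[node] + 1
                    queue.append(nb)
        return [sum(1 for d in dist.values() if d <= r) for r in range(max_r + 1)]

    def chain(n):                       # a linked-list-like graph: one dimensional
        return [n - 1, n + 1]

    def grid(p):                        # a square lattice, standing in for the honeycomb
        x, y = p
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    print(ball_sizes(chain, 0, 5))      # 1, 3, 5, 7, ...   roughly linear in r
    print(ball_sizes(grid, (0, 0), 5))  # 1, 5, 13, 25, ... roughly quadratic in r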
link |
01:11:26.560
And it's the same thing with these hypergraphs. Now, if you ask me, have I found one that gives
link |
01:11:31.360
me three dimensional space, the answer is not yet. So we don't know, you know, this is one of these
link |
01:11:36.480
things, we're kind of betting against nature, so to speak. And I have no way to know. So there
link |
01:11:41.920
are many other properties of this kind of system that are very beautiful, actually, and
link |
01:11:47.200
very suggestive. And it will be very elegant if this turns out to be right, because it's very,
link |
01:11:52.320
it's very clean. I mean, you start with nothing, and everything gets built up, everything about
link |
01:11:57.200
space, everything about time, everything about matter, it's all just emergent from the properties
link |
01:12:03.440
of this extremely low level system. And that, that will be pretty cool if that's the way our
link |
01:12:08.000
universe works. Now, on the other hand, the thing that I find very confusing is,
link |
01:12:15.840
let's say we succeed, let's say we can say, this particular sort of hypergraph rewriting rule
link |
01:12:24.240
gives the universe, just run that hypergraph rewriting rule for enough times, and you'll get
link |
01:12:28.720
everything, you'll get this conversation we're having, you'll get everything. The thing is,
link |
01:12:34.080
if we get to that point, and we look at what is this thing, what is this rule that we just have,
link |
01:12:40.800
that is giving us our whole universe, how do we think about that thing? Let's say,
link |
01:12:44.880
turns out the minimal version of this, and this is kind of cool thing for a language designer like
link |
01:12:49.840
me, the minimal version of this model is actually a single line of Wolfram Language code. So that's something
link |
01:12:56.000
which I wasn't sure was going to happen that way. But that's
link |
01:13:01.840
just the framework; we don't yet
link |
01:13:07.760
know the actual particular hypergraph rule, and the specification of the rules might
link |
01:13:13.120
be slightly longer. How does that help you, except marveling in the beauty and the elegance of the
link |
01:13:19.120
simplicity that creates the universe? Does that help us predict anything? Not really, because
link |
01:13:24.000
of the irreducibility. That's correct. That's correct. But so the thing that is really strange
link |
01:13:28.960
to me, and I haven't wrapped my brain around this yet, is, you know, one keeps on
link |
01:13:36.240
realizing that we're not special in the sense that, you know, we don't live at the center of the
link |
01:13:41.280
universe, we don't blah, blah, blah. And yet, if we produce a rule for the universe, and it's
link |
01:13:48.320
quite simple, and we can write it down in a couple of lines or something, that feels very special.
link |
01:13:54.480
How do we come to get a simple universe when many of the available universes, so to speak,
link |
01:14:00.480
are incredibly complicated? It might be, you know, a quintillion characters long.
link |
01:14:05.280
Why did we get one of the ones that's simple? And so I haven't wrapped my brain around that
link |
01:14:09.520
yet. If indeed the universe is such a simple rule, is it possible
link |
01:14:17.040
that there is something outside of this that we are in a kind of what people call the simulation,
link |
01:14:24.480
right? That we're just part of a computation that's being explored by a graduate student
link |
01:14:29.440
in an alternate universe. Well, you know, the problem is, we don't get to say much about what's
link |
01:14:34.560
outside our universe, because by definition, our universe is what we exist within. Now,
link |
01:14:40.080
can we make a sort of almost theological conclusion from being able to know how our
link |
01:14:45.360
particular universe works? Interesting question. I don't think that if you ask the question,
link |
01:14:52.080
could we, and it relates again to this question about extraterrestrial intelligence, you know,
link |
01:14:57.600
we've got the rule for the universe. Was it built on purpose? Hard to say. That's the same thing as
link |
01:15:03.680
saying we see a signal from, you know, that we're, you know, receiving from some, you know, random
link |
01:15:10.000
star somewhere, and it's a series of pulses. And, you know, it's a periodic series of pulses,
link |
01:15:15.760
let's say. Was that done on purpose? Can we conclude something about the origin of that
link |
01:15:20.000
series of pulses? Just because it's elegant does not necessarily mean that somebody created it,
link |
01:15:27.360
or that we can even comprehend what created it. Yeah, I think it's the ultimate version of the
link |
01:15:33.360
sort of identification of the techno signature question. The ultimate version of that is,
link |
01:15:38.640
was our universe a piece of technology, so to speak? And how on earth would we know? Because,
link |
01:15:44.320
but I mean, you know, in the kind of crazy science fiction thing you
link |
01:15:49.120
could imagine, you could say, oh, somebody's going to have, you know, there's going to be a
link |
01:15:53.440
signature there, it's going to be, you know, made by so and so. But there's no way we could
link |
01:15:58.720
understand that, so to speak. And it's not clear what that would mean. Because the universe simply,
link |
01:16:04.240
you know, if we find a rule for the universe, we're simply saying that rule represents
link |
01:16:11.600
what our universe does. We're not saying that that rule is something running on a big computer
link |
01:16:17.760
and making our universe. It's just saying that represents what our universe does in the same
link |
01:16:22.640
sense that, you know, laws of classical mechanics, differential equations, whatever they are,
link |
01:16:27.520
represent what mechanical systems do. It's not that the mechanical systems are somehow
link |
01:16:33.120
running solutions to those differential equations. Those differential equations are just representing
link |
01:16:37.840
the behavior of those systems. So what's the gap in your sense to linger on the fascinating,
link |
01:16:43.680
perhaps slightly sci fi question? What's the gap between understanding the fundamental rules that
link |
01:16:49.760
create a universe and engineering a system, actually creating a simulation ourselves? So
link |
01:16:55.840
you've talked about, you know, nano engineering kind of ideas that
link |
01:17:02.240
are kind of exciting, actually creating some ideas of computation in the physical space.
link |
01:17:06.640
How hard is it as an engineering problem to create the universe once you know the rules
link |
01:17:11.360
that create it? Well, that's an interesting question. I think the substrate on which the
link |
01:17:16.080
universe is operating is not a substrate that we have access to. I mean, the only substrate we have
link |
01:17:21.600
is that same substrate that the universe is operating in. So if the universe is a bunch of
link |
01:17:26.320
hypergraphs being rewritten, then we get to attach ourselves to those same hypergraphs being rewritten.
link |
01:17:32.880
We don't get to. And if you ask the question, you know, is the code clean,
link |
01:17:39.600
you know, can we write nice, elegant code with efficient algorithms and so on? Well,
link |
01:17:45.040
that's an interesting question. That's this question of how much computational
link |
01:17:49.360
reducibility there is in the system. But so I've seen some beautiful cellular
link |
01:17:53.280
automata that basically create copies of themselves within themselves, right? So that's the question
link |
01:17:57.920
whether it's possible to create, like, whether you need to understand the substrate or whether you
link |
01:18:03.280
can just... Yeah, well, right. I mean, so one of the things that is sort of one of my slightly
link |
01:18:08.480
sci fi thoughts about the future, so to speak, is, you know, right now, if you poll typical people
link |
01:18:14.480
and say, do you think it's important to find the fundamental theory of physics? You get,
link |
01:18:19.360
because I've done this poll informally, at least. It's curious, actually. You get a
link |
01:18:24.000
decent fraction of people saying, oh, yeah, that would be pretty interesting.
link |
01:18:27.680
I think that's becoming surprisingly enough more, I mean, a lot of people are interested
link |
01:18:35.200
in physics in a way that, like, without understanding it, just kind of watching
link |
01:18:41.280
scientists, a very small number of them struggle to understand the nature of our reality.
link |
01:18:46.080
Right. I mean, I think that's somewhat true. And in fact, in this project that I'm launching into
link |
01:18:51.600
to try and find the fundamental theory of physics, I'm going to do it as a very public project.
link |
01:18:55.920
I mean, it's going to be live streamed and all this kind of stuff. And I don't know what will
link |
01:18:59.680
happen. It'll be kind of fun. I mean, I think that it's the interface to the world of this project.
link |
01:19:06.960
I mean, I figure one feature of this project is, you know, unlike technology projects that
link |
01:19:13.360
basically are what they are, this is a project that might simply fail because it might be the case
link |
01:19:17.840
that it generates all kinds of elegant mathematics that has absolutely nothing to do with the physical
link |
01:19:21.920
universe that we happen to live in. Well, okay, so we're talking about kind of the quest to find
link |
01:19:27.680
the fundamental theory of physics. First point is, you know, it's turned out it's kind of hard to
link |
01:19:33.600
find the fundamental theory of physics. People weren't sure that that would be the case. Back
link |
01:19:38.000
in the early days of applying mathematics to science, 1600s and so on, people were like, oh,
link |
01:19:44.240
in 100 years, we'll know everything there is to know about how the universe works,
link |
01:19:47.840
turned out to be harder than that. And people got kind of humble at some level,
link |
01:19:51.680
because every time we got to sort of a greater level of smallness in studying the universe,
link |
01:19:56.000
it seemed like the math got more complicated and everything got harder. The, you know,
link |
01:20:02.000
when I was a kid, basically, I started doing particle physics. And, you know, when I was
link |
01:20:08.080
doing particle physics, I always thought finding the fundamental theory of physics,
link |
01:20:14.000
that's a kooky business, we'll never be able to do that. But we can operate within these frameworks
link |
01:20:19.440
that we built for doing quantum field theory and general relativity and things like this.
link |
01:20:23.360
And it's all good. And we can figure out a lot of stuff. Did you even at that time have a sense
link |
01:20:27.920
that there's something behind that too? Sure. I just didn't expect that. It's actually
link |
01:20:34.400
kind of crazy thinking back on it, because it's kind of like there was this
link |
01:20:40.960
long period in civilization where people thought the ancients had it all figured out and we'll
link |
01:20:44.240
never figure out anything new. And to some extent, that's the way I felt about physics,
link |
01:20:49.360
when I was in the middle of doing it, so to speak, was, you know, we've got quantum field
link |
01:20:54.240
theory, it's the foundation of what we're doing. And there's, you know, yes, there's probably
link |
01:20:58.640
something underneath this, but we'll sort of never figure it out. But then I started studying
link |
01:21:04.880
simple programs in the computational universe, things like cellular automata and so on. And I
link |
01:21:10.080
discovered that they do all kinds of things that were completely at odds with the
link |
01:21:15.200
intuition that I had had. And so after that, after you see this tiny little program that does all
link |
01:21:21.120
this amazingly complicated stuff, then you start feeling a bit more ambitious about physics and
link |
01:21:26.320
saying maybe we could do this for physics too. And so that's, that got me started years ago now,
link |
01:21:32.640
and this kind of idea of could we actually find what's underneath all of these frameworks like
link |
01:21:39.520
quantum field theory and general relativity and so on. And people perhaps don't realize as clearly
link |
01:21:43.360
as they might that, you know, the frameworks we're using for physics, which is basically these two
link |
01:21:47.600
things quantum field theory, sort of the theory of small stuff and general relativity theory of
link |
01:21:54.080
gravitation and large stuff, those are the two basic theories. And they're 100 years old. I mean,
link |
01:21:59.120
general relativity was 1915, quantum field theory, well, 1920s. So basically 100 years old. And
link |
01:22:06.640
it's been a good run. There's been a lot of stuff figured out. But what's interesting is
link |
01:22:13.200
the foundations haven't changed in all that period of time, even though the foundations had changed
link |
01:22:18.000
several times before that in the 200 years earlier than that. And I think the kinds of things that
link |
01:22:24.560
I'm thinking about, which is sort of really informed by thinking about computation and the
link |
01:22:28.240
computational universe, it's a different foundation, it's a different set of foundations,
link |
01:22:33.360
and might be wrong. But it is at least, you know, we have a shot. And I think it's, you know, to me,
link |
01:22:40.080
my personal calculation for myself is, you know, if it turns out that
link |
01:22:48.320
finding the fundamental theory of physics is kind of low hanging fruit, so to speak,
link |
01:22:52.400
it'd be a shame if we just didn't think to do it. You know, people just said, oh, you'll never
link |
01:22:57.520
figure that stuff out. Let's, you know, and it takes another 200 years before anybody gets around
link |
01:23:02.880
to doing it. You know, I think it's, I don't know how low hanging this fruit actually is. It may be
link |
01:23:09.920
you know, it may be that it's kind of the wrong century to do this project. I mean, I think
link |
01:23:16.320
the, the cautionary tale for me, you know, I think about things that I've tried to do in technology
link |
01:23:21.520
where people thought about doing them a lot earlier. I mean, my favorite example is
link |
01:23:26.560
probably Leibniz, who thought about essentially encapsulating the world's knowledge
link |
01:23:32.080
in a computational form in the late 1600s, and did a lot of things towards that. And basically,
link |
01:23:39.520
you know, we finally managed to do this, but he was 300 years too early. And that's the,
link |
01:23:44.240
that's kind of the, in terms of life planning, it's kind of like,
link |
01:23:48.080
avoid things that can't be done in your century, so to speak.
link |
01:23:51.920
Yeah, timing, timing is everything. So you think if we kind of figure out the underlying rules
link |
01:24:00.720
from which quantum field theory and space and general relativity can emerge,
link |
01:24:06.400
do you think they'll help us unify it at that level of abstraction?
link |
01:24:09.120
Oh, we'll know it completely. We'll know how that all fits together. Yes, without a question.
link |
01:24:13.680
And I mean, even with the things I've already done, you know,
link |
01:24:21.040
it's very, very elegant, actually, how things seem to be fitting together. Now, you know,
link |
01:24:25.440
is it right? I don't know yet. It's awfully suggestive. If it isn't right, then
link |
01:24:32.480
the designer of the universe should feel embarrassed, so to speak, because it's a really
link |
01:24:35.600
good way to do it. And your intuition in terms of designing the universe, does God play dice?
link |
01:24:41.360
Is there randomness in this thing? Or is it deterministic? So the kind of...
link |
01:24:46.880
That's a little bit of a complicated question, because when you're dealing with these things
link |
01:24:51.040
that involve these rewrites that have, okay, even randomness is an emergent phenomenon, perhaps.
link |
01:24:56.160
Yes, yes. I mean, it's, yeah, well, randomness, in many of these systems, pseudo randomness and
link |
01:25:02.240
randomness are hard to distinguish. In this particular case, the current idea that we have
link |
01:25:07.760
about measurement and quantum mechanics is something very bizarre and very abstract.
link |
01:25:14.880
And I don't think I can yet explain it without kind of yacking about very technical things.
link |
01:25:21.280
Eventually, I will be able to. But if that's right, it's kind of a weird thing,
link |
01:25:27.600
because it slices between determinism and randomness in a weird way that hasn't been
link |
01:25:33.520
sliced before, so to speak. So like many of these questions that come up in science,
link |
01:25:38.080
where it's like, is it this or is it that? Turns out the real answer is it's neither of those
link |
01:25:42.720
things, it's something kind of different and sort of orthogonal to those categories.
link |
01:25:48.800
And so that's the current, you know, this week's idea about how that might work.
link |
01:25:53.200
But, you know, we'll see how that unfolds. I mean, there's this question about a
link |
01:26:00.800
field like physics and sort of the quest for a fundamental theory and so on. And there's both
link |
01:26:06.400
the science of what happens and there's the, the sort of the social aspect of what happens,
link |
01:26:11.360
because, you know, in a field that is basically as old as physics, we're at, I don't know what it is,
link |
01:26:18.000
fourth generation, I don't know, fifth generation, I don't know what generation it is of physicists.
link |
01:26:22.480
And like, I was one of these, so to speak. And for me, the foundations were like the pyramids,
link |
01:26:27.840
so to speak, you know, it was that way and it was always that way. It is difficult in an old field
link |
01:26:34.560
to go back to the foundations and think about rewriting them. It's a lot easier in young fields
link |
01:26:39.760
where you're still dealing with the first generation of people who invented the field.
link |
01:26:44.880
And it tends to be the case, you know, that the nature of what happens in science tends to be,
link |
01:26:49.600
you know, you'll get, typically the pattern is some methodological advance occurs. And then
link |
01:26:55.600
there's a period of five years, 10 years, maybe a little bit longer than that, where there's lots
link |
01:27:00.240
of things that are now made possible by that, by that methodological advance, whether it's, you
link |
01:27:04.640
know, I don't know, telescopes or whether that's some mathematical method or something. It's, you
link |
01:27:10.560
know, there's a something, something happens, a tool gets built, and then you can do a bunch of
link |
01:27:17.680
stuff. And there's a bunch of low hanging fruit to be picked. And that takes a certain amount of
link |
01:27:23.040
time. After that, all that low hanging fruit is picked, then it's a hard slog for the next however
link |
01:27:29.760
many decades, or century or more, to get to the next sort of level at which one can do something.
link |
01:27:36.480
And it's kind of a, and it tends to be the case that in fields that are in that kind of, I wouldn't
link |
01:27:41.600
say cruise mode, because it's really hard work, but it's very hard work for very incremental progress.
link |
01:27:48.080
And in your career, and some of the things you've taken on, it feels like you're not,
link |
01:27:52.240
you haven't been afraid of the hard slog. Yeah, that's true.
link |
01:27:56.320
So it's quite interesting, especially on the engineering side.
link |
01:28:01.360
And a small tangent, when you were at Caltech, did you get to interact with Richard Feynman
link |
01:28:08.400
at all? Do you have any memories of Richard? We worked together quite a bit, actually.
link |
01:28:13.440
In fact, both when I was at Caltech and after I left Caltech,
link |
01:28:18.240
we were both consultants at this company called Thinking Machines Corporation,
link |
01:28:22.080
which was just down the street from here, actually, an ultimately ill fated company. But
link |
01:28:27.600
I used to say this company is not going to work with the strategy they have. And Dick Feynman
link |
01:28:31.920
always used to say, what do we know about running companies? Just let them run their company.
link |
01:28:36.000
But anyway, he was not into that kind of thing. And
link |
01:28:42.640
he always thought that my interest in doing things like running companies was a distraction,
link |
01:28:47.040
so to speak. And for me, it's a mechanism to have a more effective machine for actually
link |
01:28:56.560
getting things, figuring things out and getting things to happen.
link |
01:28:59.440
Did he think of it that way? Because essentially what you did with the company,
link |
01:29:03.440
I don't know if you were thinking of it that way, but you're creating tools to empower your,
link |
01:29:09.840
to empower the exploration of the universe. Do you think he,
link |
01:29:14.960
did he understand that point? The point of tools? I think not as well as he might have done.
link |
01:29:20.480
I mean, I think that, you know, with my first company, which was also
link |
01:29:26.640
involved with more mathematical computation kinds of things, you know,
link |
01:29:33.200
he had lots of advice about the technical side of what we should do and so on.
link |
01:29:39.200
Do you have examples of memories or thoughts that?
link |
01:29:41.280
Oh, yeah, yeah, he had all kinds of, look, in the business of doing sort of, you know,
link |
01:29:46.800
one of the hard things in math is doing integrals and so on, right? And so he had his own elaborate
link |
01:29:51.520
ways to do integrals and so on. He had his own ways of thinking about sort of getting intuition
link |
01:29:56.080
about how math works. And so his sort of meta idea was take those intuitional methods and
link |
01:30:03.520
make a computer follow those intuitional methods. Now, it turns out, for the most part, like when
link |
01:30:09.280
we do integrals and things, what we do is we build this kind of bizarre industrial machine
link |
01:30:14.800
that turns every integral into, you know, products of Meijer G functions and generates this very
link |
01:30:20.320
elaborate thing. And actually, the big problem is turning the results into something a human
link |
01:30:24.400
will understand. It's not, quote, doing the integral. And actually, Feynman did understand
link |
01:30:29.360
that to some extent. And I'm embarrassed to say he once gave me this big pile of, you know,
link |
01:30:34.880
calculational methods for particle physics that he worked out in the 50s. And he said,
link |
01:30:38.640
you know, it's more use to you than to me type thing. And I was like, I intended to look at it
link |
01:30:43.040
and give it back. And I still have it in my files now. But that's what happens with the finiteness
link |
01:30:49.680
of human lives. You know, maybe if he'd lived another 20 years, I would
link |
01:30:54.240
have remembered to give it back. But I think that was his attempt to
link |
01:30:59.680
systematize the ways that one does integrals that show up in particle physics and so on.
link |
01:31:05.840
Turns out the way we've actually done it is very different from that way.
link |
01:31:09.600
What do you make of that difference? So Feynman was actually quite remarkable at
link |
01:31:14.720
creating sort of intuitive, like diving in, you know, creating intuitive frameworks for
link |
01:31:20.400
understanding difficult concepts. I'm smiling because, you know, the funny thing about him was
link |
01:31:27.040
that the thing he was really, really, really good at is calculating stuff. And, but he thought that
link |
01:31:32.400
was easy because he was really good at it. And so he would do these things where he would calculate
link |
01:31:38.240
some, uh, do some complicated calculation in quantum field theory, for example, come out with
link |
01:31:43.760
the results. Wouldn't tell anybody about the complicated calculation because he thought that
link |
01:31:47.440
was easy. He thought the really impressive thing was to have this simple intuition about how
link |
01:31:52.320
everything works. So he invented that at the end. And, you know, because he'd done this calculation
link |
01:31:58.000
and knew how it worked, it was a lot easier. It's a lot easier to have good intuition when
link |
01:32:02.640
you know what the answer is. And then he would just not tell anybody about these
link |
01:32:06.960
calculations. And he wasn't meaning that maliciously, so to speak. It's just, he thought that was easy.
link |
01:32:11.920
Yeah. And, and that's, you know, that led to areas where people were just completely mystified
link |
01:32:17.120
and they kind of followed his intuition, but nobody could tell why it worked. Because actually,
link |
01:32:22.000
the reason it worked was because he'd done all these calculations and he knew that it would work.
link |
01:32:25.920
And, you know, he and I worked a bit on quantum computers actually back in 1980, '81,
link |
01:32:32.800
before anybody had heard of those things. And, you know, the typical mode of, I mean,
link |
01:32:38.800
he always used to say, and I now think about this because I'm about the age that he was when I
link |
01:32:42.960
worked with him. And, you know, I see the people who are one third my age, so to speak.
link |
01:32:47.520
And he was always complaining that I was one third his age and therefore
link |
01:32:52.000
various things. But, you know, he would do some calculation by hand, you know, at the
link |
01:32:57.760
blackboard, and come up with some answer. I'd say, I don't understand this.
link |
01:33:02.800
You know, I do something with a computer. And he'd say, you know, I don't understand this.
link |
01:33:08.000
So, it'd be some big argument about what was, you know, what was going on. But it was always
link |
01:33:14.400
and I think actually, many of the things that we sort of realized about quantum computing
link |
01:33:21.440
that were sort of issues that have to do, particularly with the measurement process
link |
01:33:25.040
are kind of still issues today. And I kind of find it interesting. It's a funny thing in science
link |
01:33:30.240
that these, you know, that there's, there's a remarkable, it happens in technology too,
link |
01:33:35.120
there's a remarkable sort of repetition of history that ends up occurring.
link |
01:33:40.080
Eventually, things really get nailed down. But it often takes a while, and often things come
link |
01:33:45.120
back decades later. Well, for example, I could tell a story that actually happened right down the
link |
01:33:50.880
street from here. When we were both at Thinking Machines, I had been working on this particular
link |
01:33:56.880
cellular automaton called Rule 30 that has this feature that, from very simple initial conditions,
link |
01:34:03.200
it makes really complicated behavior. Okay. So, and actually, of all silly physical things,
link |
01:34:11.200
using this big parallel computer called a Connection Machine that that company was making,
link |
01:34:16.880
I generated this giant printout of Rule 30, actually on the
link |
01:34:22.320
same kind of printer that people use to make layouts for microprocessors. So, one of these big,
link |
01:34:30.080
you know, large format printers with high resolution and so on. So, okay, so I printed this out,
link |
01:34:36.000
lots of very tiny cells. And so there was sort of a question of how to measure some features of that pattern.
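For readers who want to see the behavior being discussed, here is a minimal Rule 30 sketch in Python; it is an illustration, not the Connection Machine code. One black cell at the top, and the pattern growing below it is already complicated enough that measuring its features by hand becomes a real job.

    # Rule 30: each cell looks at itself and its two neighbours; the rule number
    # 30 encodes the new value for each of the eight possible neighbourhoods.
    RULE = 30
    WIDTH, STEPS = 79, 38

    row = [0] * WIDTH
    row[WIDTH // 2] = 1                  # very simple initial condition
    for _ in range(STEPS):
        print("".join("#" if c else " " for c in row))
        row = [
            (RULE >> (row[i - 1] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
            for i in range(WIDTH)
        ]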
link |
01:34:44.080
And so it was very much physical, you know, on the floor with meter rulers trying to measure
link |
01:34:48.960
different things. So, so Feynman kind of takes me aside, we've been doing that for a little while
link |
01:34:54.560
and takes me aside. And he says, I just want to know this one thing. He says, I want to know,
link |
01:34:59.040
how did you know that this Rule 30 thing would produce all this really complicated behavior
link |
01:35:04.160
that is so complicated that we're, you know, going around with this big printout and so on?
link |
01:35:09.040
And I said, Well, I didn't know, I just enumerated all the possible rules and then
link |
01:35:14.400
observed that that's what happened. He said, Oh, I feel a lot better. You know, I thought you had
link |
01:35:19.520
some intuition that he didn't have that would let you do that. And when I said no, no, no intuition,
link |
01:35:24.960
just experimental science. So that's such a beautiful sort of dichotomy there.
link |
01:35:30.960
That's exactly what you showed: you really can't have an intuition about it and reduce it.
link |
01:35:36.080
I mean, you have to run it. Yes, that's right. That's so hard for us humans and especially
link |
01:35:41.440
brilliant physicists like Feynman to say that you can't have a compressed, clean intuition about
link |
01:35:50.400
how the whole thing works. No, he was, I mean, I think he was sort of on the edge of understanding
link |
01:35:56.880
that point about computation. And I think he found that, I think he always found computation
link |
01:36:01.680
interesting. And I think that was sort of what he was a little bit poking at. I mean, that intuition,
link |
01:36:08.080
you know, the difficulty of discovering things like even you say, Oh, you know,
link |
01:36:11.840
you just enumerate all the cases and just find one that does something interesting, right?
link |
01:36:15.440
Sounds very easy. Turns out, like, I missed it when I first saw it because I had kind of an
link |
01:36:20.880
intuition that said it shouldn't be there. And so I had kind of arguments, Oh, I'm going to ignore
link |
01:36:25.440
that case because whatever. And how did you manage to have an open enough mind? Because you're essentially
link |
01:36:32.000
the same kind of person as Feynman, with the same kind of physics type of thinking. How did you
link |
01:36:37.120
find yourself having a sufficiently open mind to be open to watching rules and them revealing
link |
01:36:44.080
complexity? Yeah, I think that's an interesting question. I've wondered about that myself,
link |
01:36:47.440
because it's kind of like, you know, you live through these things. And then you say,
link |
01:36:51.440
what was the historical story? And sometimes the historical story that you realize after the fact
link |
01:36:55.920
was not what you lived through, so to speak. And so, you know, what I realized is I think
link |
01:37:02.160
what happened is, you know, I did physics, kind of like reductionistic physics, where you're
link |
01:37:09.200
thrown the universe and you're told go figure out what's going on inside it. And then I started
link |
01:37:14.240
building computer tools. And I started building my first computer language, for example. And
link |
01:37:19.840
computer language is not like it's sort of like physics in the sense that you have to take all
link |
01:37:23.920
those computations people want to do, and kind of drill down and find the primitives that they
link |
01:37:28.640
can all be made of. But then you do something that is really different, because you're just, you're
link |
01:37:32.800
just saying, Okay, these are the primitives. Now, you know, hopefully, they'll be useful to people,
link |
01:37:37.760
let's build up from there. So you're essentially building an artificial universe in a sense,
link |
01:37:43.280
where you make this language, you've got these primitives, you're just building whatever you
link |
01:37:47.280
feel like building. And that's, and so it was sort of interesting for me, because from doing science,
link |
01:37:53.040
where you're just thrown the universe as the universe is, to then just being told, you know,
link |
01:37:58.720
you can make up any universe you want. And so I think that experience of making a computer language,
link |
01:38:04.560
which is essentially building your own universe, so to speak, is, you know, that's kind of the,
link |
01:38:11.040
that's, that's what gave me a somewhat different attitude towards what might be possible. It's
link |
01:38:15.600
like, let's just explore what can be done in these artificial universes, rather than thinking the
link |
01:38:21.520
natural science way of let's be constrained by how the universe actually is. Yeah, by being able
link |
01:38:26.000
to program, essentially, you've, as opposed to being limited to just your mind and a pen,
link |
01:38:31.760
you now have, you've basically built another brain that you can use to explore the universe by
link |
01:38:38.640
computer program. You know, this is kind of a brain. Right. Well, it's that, or a telescope,
link |
01:38:43.600
or, you know, it's a tool, it's, it lets you see stuff. But there's something fundamentally
link |
01:38:47.520
different between a computer and a telescope. I mean, it just, I'm hoping not to romanticize
link |
01:38:54.000
the notion, but it's more general, the computer is more general. And it's, I think, I mean,
link |
01:38:59.840
this point about, you know, people say, Oh, such and such a thing was almost discovered at such and
link |
01:39:07.200
such a time. The distance between that and building the paradigm that allows you to
link |
01:39:12.400
actually understand stuff or allows one to be open to seeing what's going on. That's really hard.
link |
01:39:18.080
And, you know, I think in, I've been fortunate in my life that I spent a lot of my time building
link |
01:39:24.080
computational language. And that's an activity that in a sense works by sort of having to
link |
01:39:33.760
kind of create another level of abstraction and kind of be open to different kinds of structures.
link |
01:39:39.040
But, you know, it's, it's always, I mean, I'm fully aware of, I suppose, the fact that I have seen it
link |
01:39:45.760
a bunch of times of how easy it is to miss the obvious, so to speak, that at least is factored
link |
01:39:51.760
into my attempt to not miss the obvious, although it may not succeed. What do you think is the role
link |
01:39:59.360
of ego in the history of math and science? And more specifically, you know, with a book titled something
link |
01:40:08.720
like A New Kind of Science. You've accomplished a huge amount. In fact, somebody said that Newton
link |
01:40:16.320
didn't have an ego and I looked into it and he had a huge ego. But from an outsider's perspective,
link |
01:40:22.560
some have said that you have a bit of an ego as well. Do you see it that way? Does ego get in
link |
01:40:29.600
the way? Is it empowering? Is it both? It's complicated and necessary. I mean, you know,
link |
01:40:35.680
I've had, look, I've spent more than half my life as the CEO of a tech company. Right. Okay. And, you know,
link |
01:40:42.000
that is a, I think it's actually very, it means that one's ego is not a distant thing.
link |
01:40:51.200
It's a thing that one encounters every day, so to speak, because it's all tied up with leadership
link |
01:40:56.320
and with how one develops an organization and all these kinds of things. So, you know,
link |
01:41:01.120
it may be that if I'd been an academic, for example, I could have sort of, you know,
link |
01:41:05.120
checked the ego, put it on, put it on a shelf somewhere and ignored its characteristics. But
link |
01:41:10.160
you're reminded of it quite often in the context of running a company.
link |
01:41:15.120
Sure. I mean, that's what it's about. It's about leadership and, you know,
link |
01:41:19.120
leadership is intimately tied to ego. Now, what does it mean? I mean, what is the,
link |
01:41:24.880
you know, for me, I've been fortunate that I think I have reasonable intellectual confidence,
link |
01:41:30.560
so to speak. That is, you know, I'm one of these people who at this point, if somebody tells me
link |
01:41:37.040
something and I just don't understand it, my conclusion isn't that means I'm dumb, that my
link |
01:41:43.600
conclusion is there's something wrong with what I'm being told. And actually Dick Feynman
link |
01:41:48.720
used to have that feature too. He never really believed it. He actually believed in experts
link |
01:41:54.000
much less than I believe in experts. Wow. So, that's a fundamentally powerful
link |
01:42:01.680
property of ego, of saying, not that I am wrong, but that the world is wrong. And tell me,
link |
01:42:11.040
when confronted with a fact that doesn't fit the thing that you've really thought through,
link |
01:42:16.560
sort of both the negative and the positive of ego, do you see the negative of that get in the way,
link |
01:42:23.360
sort of being confronted with... Sure, there are mistakes I've made that are the result of
link |
01:42:27.680
I'm pretty sure I'm right and turns out I'm not. I mean, that's the... But the thing is that
link |
01:42:36.560
the idea that one tries to do things that... So, for example, one question is if people have tried
link |
01:42:42.960
hard to do something and then one thinks maybe I should try doing this myself, if one does not
link |
01:42:49.440
have a certain degree of intellectual confidence, one just says, well, people have been trying
link |
01:42:52.720
to do this for 100 years, how am I going to be able to do this? And I was fortunate in the sense
link |
01:42:58.480
that I happened to start having some degree of success in science and things when I was really
link |
01:43:03.200
young. And so, that developed a certain amount of sort of intellectual confidence that I don't
link |
01:43:07.760
think I otherwise would have had. And in a sense, I mean, I was fortunate that I was working in a
link |
01:43:14.000
field, particle physics, during its sort of golden age of rapid progress. And that kind of gives
link |
01:43:20.080
one a false sense of achievement because it's kind of easy to discover stuff that's going to survive
link |
01:43:25.600
if you happen to be picking the low hanging fruit of a rapidly expanding field.
link |
01:43:30.480
And I totally, immediately understood the ego behind A New Kind of Science.
link |
01:43:35.920
Let me sort of just try to express my feelings on the whole thing, which is that if you don't
link |
01:43:41.920
allow that kind of ego, then you would never write that book. You would say, well,
link |
01:43:47.760
people must have done this. You would not dig. You would not keep digging. And I think that was,
link |
01:43:54.240
I think, you have to take that ego and ride it and see where it takes you. And that's how you
link |
01:44:01.040
create exceptional work. But I think the other point about that book was, it was a non trivial
link |
01:44:07.360
question, how to take a bunch of ideas that are, I think, reasonably big ideas. They might,
link |
01:44:12.960
you know, their importance is determined by what happens historically. One can't tell how important
link |
01:44:17.920
they are. One can tell sort of the scope of them. And the scope is fairly big. And they're very
link |
01:44:23.600
different from things that have come before. And the question is, how do you explain that stuff to
link |
01:44:27.840
people? And so I had had the experience of sort of saying, well, that these things, there's a
link |
01:44:33.360
cellular automaton, it does this, it does that. And people are like, Oh, it must be just like this,
link |
01:44:37.840
it must be just like that. So no, it isn't, it's something different. Right.
link |
01:44:42.160
And so you could have done it sort of, I'm really glad you did what you did, but you could have done it
link |
01:44:46.000
sort of academically, just keep publishing small papers here and there. And then you would
link |
01:44:51.040
just keep getting this kind of resistance, right? As opposed to just
link |
01:44:55.440
dropping a thing that says, here, here's the full thing. No, I mean, that was my calculation
link |
01:45:01.360
is that basically, you know, you could introduce little pieces, it's like, you know, one possibility
link |
01:45:07.520
is like, it's the secret weapon, so to speak, it's this, you know, I keep on, you know, discovering
link |
01:45:12.800
these things in all these different areas, where'd they come from? Nobody knows. But I decided that,
link |
01:45:17.360
you know, in the interests of, one only has one life to lead and, you know, writing that book
link |
01:45:22.160
took me a decade. Anyway, there's not a lot of wiggle room, so to speak, one can't
link |
01:45:26.720
be wrong by a factor of three, so to speak, in how long it's going to take. So, you know,
link |
01:45:31.600
I thought the best thing to do, the thing that most sort of respects the
link |
01:45:40.000
intellectual content, so to speak, is you just put it out with as much force as you can, because
link |
01:45:45.920
it's not something where, and, you know, it's an interesting thing, you talk about ego, and it's,
link |
01:45:50.720
it's, you know, for example, I run a company, which has my name on it, right? I thought about
link |
01:45:56.480
starting a club for people whose companies have their names on them. And it's a funny group,
link |
01:46:00.800
because we're not a bunch of egomaniacs. That's not what it's about, so to speak. It's about
link |
01:46:06.240
basically sort of taking responsibility for what one's doing. And, you know, in a sense,
link |
01:46:12.080
any of these things where you're sort of putting yourself on the line, it's, it's kind of a funny,
link |
01:46:20.880
it's a funny dynamic, because in a sense, my company is sort of something that happens to
link |
01:46:26.720
have my name on it. But it's kind of bigger than me, and I'm kind of just its mascot at some level.
link |
01:46:32.160
I mean, I also happen to be a pretty, you know, strong leader of it. But,
link |
01:46:36.880
but it's basically showing a deep, inextricable sort of investment. The same, your name,
link |
01:46:45.920
like Steve Jobs's name wasn't on, on Apple, but he was Apple. Elon Musk's name is not on Tesla,
link |
01:46:55.760
but he is Tesla. So it's like, I mean emotionally, if the company succeeds or fails,
link |
01:47:01.840
he would emotionally suffer through that. And so that's, that's a
link |
01:47:07.600
beautiful thing. Yeah, it's recognizing that fact, right. And also Wolfram is a pretty good branding name,
link |
01:47:11.440
so it works out. Yeah, right, exactly. I think Steve had a bad deal there.
link |
01:47:16.400
Yeah. So you made up for it with the last name. Okay, so, in 2002, you published A New
link |
01:47:24.160
Kind of Science, to which, sort of on a personal level, I can credit my love for cellular automata
link |
01:47:30.800
and computation in general. I think a lot of others can as well. Can you briefly describe
link |
01:47:38.240
the vision, the hope, the main idea presented in this 1200 page book?
link |
01:47:45.760
Sure, although it took 1,200 pages to say in the book. So, the real idea, it's kind of
link |
01:47:54.800
a good way to get into it is to look at sort of the arc of history and to look at what's
link |
01:47:58.480
happened in kind of the development of science. I mean, there was this sort of big idea in science
link |
01:48:03.840
about 300 years ago that was, let's use mathematical equations to try and describe things in the world.
link |
01:48:10.960
Let's use sort of the formal idea of mathematical equations to describe what might be happening
link |
01:48:16.000
in the world, rather than, for example, just using sort of logical argumentation and so on.
link |
01:48:20.720
Let's have a formal theory about that. And so there's been this 300 year run of using
link |
01:48:26.320
mathematical equations to describe the natural world, which has worked pretty well. But I got
link |
01:48:30.560
interested in how one could generalize that notion, you know, there is a formal theory,
link |
01:48:35.760
there are definite rules, but what structure could those rules have? And so what I got interested in
link |
01:48:41.680
was, let's generalize beyond the sort of purely mathematical rules. And we now have this sort
link |
01:48:47.280
of notion of programming and computing and so on. Let's use the kinds of rules that can be embodied
link |
01:48:53.840
in programs as a sort of generalization of the ones that can exist in mathematics, as a way
link |
01:49:00.000
to describe the world. And so my kind of favorite version of these kinds of simple rules are these
link |
01:49:07.440
things called cellular automata. And so typical case, shall we, what are cellular automata?
link |
01:49:13.840
Fair enough. So typical case of a cellular automaton, it's an array of cells. It's just a
link |
01:49:20.080
line of discrete cells. Each cell is either black or white. And in a series of steps that you can
link |
01:49:27.600
represent as lines going down a page, you're updating the color of each cell according to a rule that
link |
01:49:33.600
depends on the color of the cell above it and to its left and right. So it's really simple. So
link |
01:49:38.480
a thing might be, you know, if the cell and its right neighbor are not the same, and or the cell
link |
01:49:50.320
on the left is black or something, then make it black on the next step. And if not, make it white.
link |
01:49:57.760
Typical rule. That rule, I'm not sure I said it exactly right, but a rule very much like what I
link |
01:50:03.520
just said, has the feature that if you started off from just one black cell at the top, it makes
link |
01:50:08.720
this extremely complicated pattern. So some rules, you get a very simple pattern. Some rules,
link |
01:50:15.120
the rule is simple, you start it off from a sort of simple seed, you just get this very
link |
01:50:20.960
simple pattern. But other rules, and this was the big surprise when I started actually just doing
link |
01:50:26.800
the simple computer experiments to find out what happens is that they produce very complicated
link |
01:50:32.320
patterns of behavior. So for example, this rule 30 rule has the feature you started from just one
link |
01:50:38.480
black cell at the top, makes this very random pattern. If you look like at the center column of
link |
01:50:45.280
cells, you get a series of values, you know, it goes black, white, black, black, whatever it is,
link |
01:50:51.280
that sequence seems for all practical purposes random. So it's kind of like in math, you know,
link |
01:50:58.640
you compute the digits of pi 3.1415926, whatever, those digits, once computed, I mean, the scheme
link |
01:51:06.640
for computing pi, you know, it's the ratio of the circumference to the diameter of a circle,
link |
01:51:10.480
very well defined. But yet, when you are, once you've generated those digits, they seem for all
link |
01:51:16.480
practical purposes completely random. And so it is with rule 30, that even though the rule is very
link |
01:51:22.640
simple, much simpler, much more sort of computationally obvious than the rule for generating digits of pi,
link |
01:51:29.200
even with a rule that simple, you're still generating immensely complicated behavior.
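(For readers who want to try this, here is a minimal sketch in Wolfram Language of the kind of rule being described. The specific update below, new cell equals left XOR (center OR right), is one way of writing Rule 30; the tiny grid and the cyclic boundary are just assumptions to keep the example small.)

    (* One step of an elementary cellular automaton: each new cell depends only on
       its left neighbor, itself, and its right neighbor. This combination is Rule 30:
       new = left XOR (center OR right). Cyclic boundaries are used for simplicity. *)
    rule30Step[cells_List] := MapThread[
      BitXor[#1, BitOr[#2, #3]] &,
      {RotateRight[cells], cells, RotateLeft[cells]}]

    (* Start from a single black (1) cell in a row of white (0) cells and take a few steps. *)
    init = ReplacePart[ConstantArray[0, 11], 6 -> 1];
    NestList[rule30Step, init, 5]

(Run long enough, the center column read downwards behaves much like the digits of pi just mentioned: fully determined, but for all practical purposes random.)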
link |
01:51:34.000
Yeah. So if we could just pause on that, I think you probably have said it and looked at it so
link |
01:51:38.720
long, you forgot the magic of it, or perhaps you don't, you still feel the magic. But to me,
link |
01:51:44.080
if you've never seen sort of, I would say, what is it, a one dimensional, essentially,
link |
01:51:50.480
cellular automaton, right? And you were to guess what you would see if you have
link |
01:51:56.880
some sort of cells that only respond to their neighbors. Right. If you were to guess what kind
link |
01:52:05.680
of things you would see, like my initial guess, like, even when I first, like, opened your book,
link |
01:52:11.920
A New Kind of Science, right? My initial guess is you would see, I mean, it would be very simple
link |
01:52:18.080
stuff. Right. And I think it's a magical experience to realize the kind of complexity,
link |
01:52:24.560
you mentioned rule 30, still your favorite cellular automaton? Still my favorite rule, yes.
link |
01:52:31.280
You get complexity, immense complexity, you get arbitrary complexity. Yes. And when you say
link |
01:52:37.840
randomness down the middle column, you know, that's just one cool way to say that there's
link |
01:52:45.360
incredible complexity. And that's just, I mean, that's a magical idea. However, you start to
link |
01:52:51.680
interpret it, all the irreducibility discussions, all that. But it's just, I think that has profound
link |
01:52:57.440
philosophical kind of notions around it too. It's not just, I mean, it's transformational
link |
01:53:04.400
about how you see the world. I think for me, it was transformational. I don't know, we can have
link |
01:53:09.200
all kinds of discussions about computation and so on. But just, you know, I sometimes think,
link |
01:53:15.200
if I were on a desert island, and with, I don't know, maybe with some psychedelics or something,
link |
01:53:22.560
but if I had to take one book, I mean, A New Kind of Science would be it, because you could just
link |
01:53:26.800
enjoy that notion. For some reason, it's a deeply profound notion, at least to me.
link |
01:53:31.920
I find it that way. Yeah. I mean, look, it's been, it was a very intuition breaking
link |
01:53:38.160
thing to discover. I mean, it's kind of like, you point the computational telescope out there,
link |
01:53:45.520
and suddenly you see, I don't know, in the past, it's kind of like Moons of Jupiter or
link |
01:53:51.200
something. But suddenly you see something that's kind of very unexpected. And rule 30 was very
link |
01:53:55.360
unexpected for me. And the big challenge at a personal level was to not ignore it. I mean,
link |
01:54:01.440
people, in other words, you might say, you know,
link |
01:54:04.800
it's a bug. What would you say? Yeah, what would you say? I mean, what are we looking at, by the
link |
01:54:09.120
way? Well, I was just generating here, I'll actually generate a rule 30 pattern. So that's
link |
01:54:13.520
the rule for rule 30. And it says, for example, it says here, if you have a black cell in the
link |
01:54:20.320
middle and black cell to the left and white cell to the right, then the cell on the next step will
link |
01:54:24.080
be white. And so here's the actual pattern that you get starting off from a single black cell
link |
01:54:29.760
at the top there. And then that's the initial state, the initial condition. That's the initial thing,
link |
01:54:35.440
you just start off from that. And then you're going down the page. And at every step,
link |
01:54:40.800
you're just applying this rule to find out the new value that you get. And so you might think,
link |
01:54:47.120
a rule that simple, there's got to be some trace of that simplicity here.
link |
01:54:52.000
Okay, we'll run it, let's say for 400 steps. It's what it does. It's kind of aliasing a bit
link |
01:54:57.440
on the screen there. But you can see there's a little bit of regularity over on the left.
link |
01:55:02.320
But there's a lot of stuff here that just looks very complicated, very random. And that's a big
link |
01:55:10.800
sort of shock, a big shock, to my intuition, at least, that that's possible.
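(The on-screen steps here can be reproduced with something like the following sketch, assuming the built-in CellularAutomaton function; the exact commands typed in the conversation are not shown in the transcript.)

    (* Show the icon for Rule 30: the eight neighborhood cases and their outcomes. *)
    RulePlot[CellularAutomaton[30]]

    (* Run Rule 30 for 400 steps from a single black cell and plot the whole evolution. *)
    ArrayPlot[CellularAutomaton[30, {{1}, 0}, 400]]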
link |
01:55:15.440
The mind immediately starts, is there a pattern, there must be a repetitive pattern.
link |
01:55:19.760
Yeah, there must be. So indeed, that's what I thought at first. And
link |
01:55:25.120
I thought, well, this is kind of interesting. But if we run it long enough, we'll see something
link |
01:55:30.320
resolve into something simple. And I did all kinds of analysis using mathematics,
link |
01:55:37.120
statistics, cryptography, whatever, whatever, to try and crack it. And I never succeeded.
link |
01:55:43.440
And after I hadn't succeeded for a while, I started thinking, maybe there's a real
link |
01:55:47.520
phenomenon here, that is the reason I'm not succeeding. Maybe, I mean, the thing that for
link |
01:55:52.240
me was sort of a motivating factor was looking at the natural world and seeing all this complexity
link |
01:55:57.360
that exists in the natural world, the questions, where does it come from? What secret does nature
link |
01:56:02.240
have that lets it make all this complexity that we humans, when we engineer things, typically
link |
01:56:08.160
are not making, we're typically making things that at least look quite simple to us. And so,
link |
01:56:13.840
the shock here was, even from something very simple, you're making something that complex.
link |
01:56:18.080
Maybe this is getting at sort of the secret that nature has that allows it to make really
link |
01:56:24.320
complex things, even though its underlying rules may not be that complex.
link |
01:56:29.120
How did it make you feel? If we look at Newton and the apple, was there a, you took a walk and
link |
01:56:38.080
something, and it profoundly hit you? Or was this a gradual thing,
link |
01:56:41.600
like a lobster being boiled? The truth of every sort of science discovery is, it's not that sudden.
link |
01:56:49.040
I mean, I've spent, I happen to be interested in scientific biography kinds of things. And so,
link |
01:56:53.120
I've tried to track down, how did people come to figure out this or that thing? And there's always
link |
01:56:58.640
a long sort of preparatory, there's a need to be prepared in a mindset in which it's possible
link |
01:57:07.840
to see something. I mean, in the case of Rule 30, it was around June 1st, 1984, and it's kind of a silly
link |
01:57:15.040
story in some ways. I finally had a high resolution laser printer. So, I was able, so I thought,
link |
01:57:19.760
I'm going to generate a bunch of pictures of these cellular automata. And I generate this one,
link |
01:57:24.240
and I put it, I was on some plane flight to Europe, I had this with me. And it's like,
link |
01:57:31.120
you know, I really should try to understand this. And this is really, you know,
link |
01:57:34.960
this is I really don't understand what's going on. And that was kind of the, you know, slowly
link |
01:57:40.800
trying to, trying to see what was happening. It was not, it was depressingly,
link |
01:57:46.640
unsudden, so to speak, in the sense that a lot of these ideas, like the principle of computational
link |
01:57:52.880
equivalence, for example, you know, I thought, well, that's a possible thing. I didn't know if
link |
01:57:57.680
it's correct. I still don't know for sure that it's correct. But it's sort of a gradual thing
link |
01:58:02.720
that these things gradually kind of become, seem more important than one thought. I mean,
link |
01:58:08.240
I think the whole idea of studying the computational universe of simple programs,
link |
01:58:13.280
it took me probably a decade, decade and a half to kind of internalize that that was
link |
01:58:19.120
really an important idea. And I think, you know, if it turns out we find the whole universe
link |
01:58:24.880
lurking out there in the computational universe, that's a good, you know, it's a good brownie
link |
01:58:29.440
point or something for the, for the whole idea. But I think that the, the thing that's strange
link |
01:58:35.520
in this whole question about, you know, finding this different raw material for making models of
link |
01:58:40.320
things, what's been interesting sort of in the, in sort of arc of history is, you know, for 300
link |
01:58:46.160
years, it's kind of like the, the mathematical equations approach. It was the winner. It was
link |
01:58:50.720
the thing, you know, you want to have a really good model for something that's what you use.
link |
01:58:54.160
The thing that's been remarkable is just in the last decade or so, I think one can see a transition
link |
01:58:59.280
to using not mathematical equations, but programs as sort of the raw material for making models of
link |
01:59:05.840
stuff. And that's pretty neat. And it's kind of, you know, as somebody who's kind of lived inside
link |
01:59:12.160
this paradigm shift, so to speak, it is bizarre. I mean, no doubt in sort of the history of science
link |
01:59:18.480
that will be seen as an instantaneous paradigm shift. But it sure isn't instantaneous when it's
link |
01:59:24.640
played out in one's actual life, so to speak. It seems glacial. And, and it's the kind of thing
link |
01:59:30.960
where, where it's sort of interesting, because in the dynamics of sort of the adoption of ideas
link |
01:59:37.600
like that, into different fields, the younger the field, the faster the adoption typically,
link |
01:59:43.600
because people are not kind of locked in with the fifth generation of people who've studied this field.
link |
01:59:49.920
And it is, it is the way it is, and it can never be any different. And I think that's been,
link |
01:59:55.360
you know, watching that process has been interesting. I mean, I'm, I think I'm fortunate
link |
02:00:00.000
that I, I do stuff mainly because I like doing it. And that makes me kind of
link |
02:00:09.120
thick skinned about the world's response to what I do. And, but that's definitely, you know,
link |
02:00:15.440
in any time you write a book called something like A New Kind of Science, it's kind of,
link |
02:00:21.200
the pitchforks will come out for the old kind of science. And it was interesting
link |
02:00:26.000
dynamics. I think that the, I have to say that I was fully aware of the fact that the, when you see
link |
02:00:35.280
sort of incipient paradigm shifts in science, the vigor of the negative response upon early
link |
02:00:41.920
introduction is a fantastic positive indicator of good long term results. So in other words,
link |
02:00:49.680
if people just don't care, it's, you know, that's not such a good sign. If they're like, oh, this
link |
02:00:56.000
is great. That means you didn't really discover anything interesting. What fascinating properties
link |
02:01:02.320
of rule 30 have you discovered over the years? You've recently announced the rule 30 prizes for
link |
02:01:07.520
solving three key problems. Can you maybe talk about interesting properties that have been kind
link |
02:01:13.600
of revealed about rule 30 or other cellular automata, and what problems are still before us, like the
link |
02:01:20.000
three problems you've announced? Yeah, yeah, right. So I mean, the most interesting thing about
link |
02:01:25.920
cellular automata is that it's hard to figure stuff out about them. And that's, in a sense,
link |
02:01:32.480
every time you try and sort of, you try and bash them with some other technique, you say,
link |
02:01:37.920
can I crack them? The answer is they seem to be uncrackable. They seem to have the feature that
link |
02:01:44.560
they are, that they're sort of showing irreducible computation. They're not, you're not able to say,
link |
02:01:51.280
oh, I know exactly what this is going to do. It's going to do this or that. But there's specific
link |
02:01:56.400
formulations of that fact. Yes, right. So I mean, for example, in rule 30, in the pattern you get,
link |
02:02:02.560
just starting from a single black cell, you get this sort of very, very sort of random looking
link |
02:02:09.040
pattern. And so one feature of that, just look at the center column. And for example, we use that
link |
02:02:14.640
for a long time to generate randomness in Wolfram Language, just what rule 30 produces. Now,
link |
02:02:20.480
the question is, can you prove how random it is? So for example, one very simple question,
link |
02:02:26.160
can you prove that it will never repeat? We haven't been able to show that it will never repeat.
link |
02:02:32.800
We know that if there are two adjacent columns, we know they can't both repeat. But just knowing
link |
02:02:38.320
whether that center column can ever repeat, we still don't even know that. Another problem that
link |
02:02:44.160
I sort of put in my collection of, it's like $30,000 for three, for these three prizes for
link |
02:02:50.960
about rule 30. I would say this is one of those cases where the money is not the main point,
link |
02:02:58.720
but it just helps motivate somehow the investigation.
link |
02:03:05.520
So there's three problems you propose, you get $30,000 if you solve all three or maybe,
link |
02:03:10.240
I don't know. No, it's $10,000 for each. That's right, money is not the thing. The problems
link |
02:03:16.400
themselves are just clean formulations. So the first is just, will it ever become periodic?
link |
02:03:22.720
Second problem is, are there an equal number of black and white cells?
link |
02:03:26.240
Down the middle column. Down the middle column. And the third problem is a little bit harder
link |
02:03:29.520
to state, which is essentially, is there a way of figuring out what the color of a cell
link |
02:03:34.240
at position T down the center column is, with less computational effort than about
link |
02:03:41.600
T steps? So in other words, is there a way to jump ahead and say, I know what this is going to do,
link |
02:03:48.000
it's just some mathematical function of T. Or proving that there is no way.
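(The first two prize questions are easy to probe, though of course not settle, empirically. A sketch, assuming the built-in CellularAutomaton function; the step count is an arbitrary choice.)

    (* Run Rule 30 from a single black cell and pull out the center column. *)
    steps = 2000;
    evolution = CellularAutomaton[30, {{1}, 0}, steps];   (* steps+1 rows, each 2*steps+1 cells wide *)
    center = evolution[[All, steps + 1]];                  (* the center column, top to bottom *)

    (* Fraction of black (1) cells so far; the second prize asks whether this tends to 1/2. *)
    N[Mean[center]]

(Checking a finite prefix like this builds intuition, but it cannot prove that the column never becomes periodic or that the limiting fraction is exactly one half.)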
link |
02:03:54.880
Or proving there is no way. Yes. But both, I mean, for any one of these, one could prove
link |
02:04:00.160
that one could discover, we know what rule 30 does for a billion steps. And maybe we'll know
link |
02:04:05.920
for a trillion steps before too long. But maybe at a quadrillion steps, it suddenly becomes
link |
02:04:11.120
repetitive. You might say, how could that possibly happen? But so when I was writing up these prizes,
link |
02:04:17.120
I thought, and this is typical of what happens in the computational universe, I thought,
link |
02:04:21.040
let me find an example where it looks like it's just going to be random forever,
link |
02:04:25.360
but actually it becomes repetitive. And I found one. And it's just, I did a search,
link |
02:04:29.920
I searched, I don't know, maybe a million different rules with some criterion. And this is,
link |
02:04:36.400
what's sort of interesting about that is, I kind of have this thing that I say in a kind of silly way
link |
02:04:42.000
about the computational universe, which is, the animals are always smarter than you are.
link |
02:04:46.480
That is, there's always some way one of these computational systems is going to figure out
link |
02:04:49.760
how to do something, even though I can't imagine how it's going to do it. And I didn't think I
link |
02:04:54.480
would find one. You would think, after all these years that I've found sort of all possible
link |
02:04:58.800
things, funky things, that I would have gotten my intuition wrapped around the idea that these
link |
02:05:10.480
creatures in the computational universe are always smarter than I'm going to be.
link |
02:05:14.240
But whether they're equivalently smart, right? That's correct. And that makes it,
link |
02:05:19.760
that makes one feel very sort of, it's humbling every time, because every time the thing is,
link |
02:05:25.840
is, you know, you think it's going to do this, or it's not going to be possible to do this.
link |
02:05:29.760
And it turns out it finds a way. Of course, the problem is the thing is,
link |
02:05:32.720
there's a lot of other rules like rule 30. It's just rule 30 is.
link |
02:05:37.600
Oh, it's my favorite because I found it first. And that's right. But the problems are focusing
link |
02:05:42.160
on rule 30. It's possible that rule 30 is repetitive after trillion steps. And that
link |
02:05:48.800
doesn't prove anything about the other rules. It does not. But this is a good sort of experiment
link |
02:05:53.600
of how you go about trying to prove something about a particular rule.
link |
02:05:57.120
Yes. And it also, all these things help build intuition. That is, if it turned out that this
link |
02:06:02.320
was repetitive after a trillion steps, that's not what I would expect. And so we learned something
link |
02:06:08.480
from that. The method to do that, though, would reveal something interesting about the
link |
02:06:13.280
cellular stuff. No doubt. I mean, it's, although it's sometimes challenging, like the, you know,
link |
02:06:18.800
I put out a prize in 2007 for a particular Turing machine that was the simplest
link |
02:06:26.480
candidate for being a universal Turing machine. And a young chap in England named Alex Smith,
link |
02:06:32.960
after a smallish number of months said, I've got a proof. And he did, you know, it took a little
link |
02:06:37.840
while to iterate, but he had a proof. Unfortunately, the proof is very, it's a lot of micro details.
link |
02:06:45.840
It's not, it's not like you look at it and you say, aha, there's a big new principle. The big
link |
02:06:52.000
new principle is the simplest Turing machine that might have been universal actually is universal.
link |
02:06:57.760
And it's incredibly much simpler than the Turing machines that people already knew were universal
link |
02:07:01.840
before that. And so that intuitionally is important because it says computation universality is
link |
02:07:07.920
closer at hand than you might have thought. But the actual methods, in that particular
link |
02:07:13.680
case, are not terribly illuminating. It would be nice if the methods would also be elegant.
link |
02:07:18.000
That's true. Yeah, no, I mean, I think it's, it's one of these things where I mean, it's,
link |
02:07:22.320
it's like a lot of, we've talked about earlier, kind of, you know, opening up AIs and machine
link |
02:07:27.600
learning and things of what's going on inside. And is it, is it just step by step? Or can you
link |
02:07:32.880
sort of see the bigger picture more abstractly? It's unfortunate. I mean, with Fermat's last
link |
02:07:37.120
theorem proof, it's unfortunate that the proof to such an elegant theorem is, is not, I mean,
link |
02:07:44.880
it's, it's, it's not, it doesn't flow into the margins of a page.
link |
02:07:48.880
That's true. But you know, one of the things is that's another consequence of computational
link |
02:07:53.120
irreducibility. This, this fact that there are even quite short results in mathematics,
link |
02:07:59.600
whose proofs are arbitrarily long. Yes. That's a, that's a consequence of all the stuff. And it's,
link |
02:08:04.080
it's a, it makes one wonder, you know, how come mathematics is possible at all? Right. Why is,
link |
02:08:10.000
you know, why is it the case that people manage to navigate doing mathematics by looking at
link |
02:08:15.920
things where they're not just thrown into, it's all undecidable. That's, that's its own, own separate,
link |
02:08:22.320
separate story. And that would, that would have a poetic beauty to it if people
link |
02:08:29.120
were to find something interesting about rule 30, because I mean, there's an emphasis to this
link |
02:08:35.680
particular rule. It wouldn't say anything about the broad irreducibility of all computations,
link |
02:08:40.560
but it would nevertheless put a few smiles on people's faces. Well, yeah. But to me,
link |
02:08:47.680
it's like, in a sense, establishing principle of computational equivalence, it's a little bit like
link |
02:08:54.080
doing inductive science anywhere. That is, the more examples you find, the more convinced you are
link |
02:08:59.680
that it's generally true. I mean, we don't get to, you know, whenever we do natural science,
link |
02:09:04.880
we, we say, well, it's true here, that this or that happens. Can we, can we prove that it's true
link |
02:09:10.560
everywhere in the universe? No, we can't. So, you know, it's the same thing here, we're exploring
link |
02:09:16.240
the computational universe, we're establishing facts in the computational universe. And that's,
link |
02:09:20.800
that's sort of a way of, of inductively concluding general things.
link |
02:09:29.920
Just to think through this a little bit, we've touched on it a little bit before, but
link |
02:09:33.920
what's the difference between the kind of computation, now that we've talked about cellular
link |
02:09:38.720
automata, what's the difference between the kind of computation, biological systems, our mind,
link |
02:09:43.360
our bodies, the things we see before us that emerged through the process of evolution and cellular
link |
02:09:51.360
automata? I mean, we've kind of implied the discussion of physics underlying everything,
link |
02:09:57.200
but we, we talked about the potential equivalence of the fundamental laws of physics and the kind
link |
02:10:03.440
of computation going on in Turing machines. But can you now connect that, do you think there's
link |
02:10:09.200
something special or interesting about the kind of computation that our bodies do?
link |
02:10:15.520
Right. Well, let's talk about brains, primarily. I mean, I think the, the most important thing
link |
02:10:22.000
about the things that our brains do is that we care about them in the sense that there's a lot
link |
02:10:26.880
of computation going on out there in, you know, cellular automata and, you know, physical systems
link |
02:10:32.960
and so on. And it just, it does what it does, it follows those rules, it does what it does.
link |
02:10:38.000
The thing that's special about the computation in our brains is that it's connected to our goals
link |
02:10:44.080
and our kind of whole societal story. And, you know, I think that's the, that's,
link |
02:10:49.520
that's the special feature. And now the question then is, when you see this whole sort of ocean
link |
02:10:53.840
of computation out there, how do you connect that to the things that we humans care about?
link |
02:10:59.280
And in a sense, a large part of my life has been involved in sort of the technology of how to do
link |
02:11:03.520
that. And, you know, what I've been interested in is kind of building computational language
link |
02:11:08.720
that allows that something that both we humans can understand, and that can be used to determine
link |
02:11:15.600
computations that are actually computations we care about. See, I think when you look at
link |
02:11:20.400
something like one of these cellular automata, and it does some complicated thing, you say,
link |
02:11:25.440
that's fun, but why do I care? Well, you could say the same thing actually in physics. You say,
link |
02:11:31.760
oh, I've got this material and it's a ferrite or something. Why do I care? You know, it has
link |
02:11:36.800
some magnetic properties. Why do I care? It's amusing, but why do I care? Well, we end up caring
link |
02:11:41.920
because, you know, ferrite is what's used to make magnetic tape, magnetic disks, whatever.
link |
02:11:46.160
Or, you know, liquid crystals are used to make, well, now
link |
02:11:51.120
increasingly not, but they have been used to make computer displays and so on. But those are, so
link |
02:11:56.480
in a sense, we're mining these things that happen to exist in the physical universe
link |
02:12:01.040
and making it be something that we care about because we sort of entrain it into technology.
link |
02:12:06.640
And it's the same thing in the computational universe that a lot of what's out there is stuff
link |
02:12:12.640
that's just happening, but sometimes we have some objective and we will go and sort of mine the
link |
02:12:18.000
computational universe for something that's useful for some particular objective. On a large scale,
link |
02:12:23.360
trying to do that, trying to sort of navigate the computational universe to do useful things,
link |
02:12:28.000
you know, that's where computational language comes in. And, you know, a lot of what I've
link |
02:12:33.280
spent time doing and building this thing we call Wolfram Language, which I've been building for the
link |
02:12:38.480
last one third of a century now. And kind of the goal there is to have a way to express kind of
link |
02:12:48.320
computational thinking, computational thoughts in a way that both humans and machines can
link |
02:12:53.520
understand. So it's kind of like in the tradition of computer languages, programming languages,
link |
02:12:59.920
that the tradition there has been more, let's take how computers are built, and let's
link |
02:13:05.920
specify, let's have a human way to specify, do this, do this, do this at the level of the way
link |
02:13:11.680
that computers are built. What I've been interested in is representing sort of the whole world
link |
02:13:16.720
computationally, and being able to talk about whether it's about cities or chemicals or, you
link |
02:13:22.480
know, this kind of algorithm or that kind of algorithm, things that have come to exist in
link |
02:13:27.200
our civilization and the sort of knowledge base of our civilization, being able to talk directly
link |
02:13:31.840
about those in a computational language, so that both we can understand it and computers can
link |
02:13:37.600
understand it. I mean, the thing that I've been sort of excited about recently, which I had only
link |
02:13:42.800
realized recently, which is kind of embarrassing, but it's kind of the arc of what we've tried to
link |
02:13:48.080
do in building this kind of computational language is it's a similar kind of arc of what happened
link |
02:13:54.400
when mathematical notation was invented. So go back 400 years, people were trying to do math,
link |
02:14:01.600
they were always explaining their math in words. And it was pretty clunky. And as soon as mathematical
link |
02:14:08.000
notation was invented, you could start defining things like algebra and later calculus and so
link |
02:14:13.360
on, it all became much more streamlined. When we deal with computational thinking about the world,
link |
02:14:18.880
there's a question of what is the notation? What is the kind of formalism that we can
link |
02:14:23.200
use to talk about the world computationally? In a sense, that's what I've spent the last
link |
02:14:28.400
third of a century trying to build. And we finally got to the point where we have a pretty full scale
link |
02:14:32.800
computational language that sort of talks about the world. And that's, that's exciting, because
link |
02:14:38.960
it means that just like having this mathematical notation, let us talk about the world mathematically,
link |
02:14:45.360
and let us build up these kinds of mathematical sciences, we now have a computational
link |
02:14:51.680
language which allows us to start talking about the world computationally, and lets us, you know,
link |
02:14:56.800
my view of it is it's kind of computational x for all x, all these different fields of, you know,
link |
02:15:03.200
computational this, computational that, that's what we can now build.
link |
02:15:06.800
Let's step back. So first of all, the mundane, what is Wolfram language in terms of
link |
02:15:15.440
sort of, I mean, I can answer the question for you, but it's basically not the philosophical,
link |
02:15:21.760
deep, the profound, the impact of it. I'm talking about in terms of tools, in terms of things you
link |
02:15:26.160
can download and play with, what is it? What, what does it fit into the infrastructure? What
link |
02:15:31.360
are the different ways to interact with it? Right. So I mean, the two big things that people have
link |
02:15:35.680
sort of perhaps heard of that come from Wolfram language. One is Mathematica, the other is Wolfram
link |
02:15:40.640
Alpha. So Mathematica first came out in 1988. It's this system that is basically an instance of
link |
02:15:49.280
Wolfram language. And it's used to do computations, particularly in sort of technical areas. And the
link |
02:15:57.520
typical thing you're doing is you're typing little pieces of computational language,
link |
02:16:01.680
and you're getting computations done. It's very kind of, there's like a symbolic,
link |
02:16:09.040
yeah, it's a symbolic language. A symbolic language. So I mean, I don't know how to cleanly
link |
02:16:13.120
express that, but that makes it very distinct from how we think about, sort of,
link |
02:16:18.480
I don't know, programming in a language like Python or something. Right. So, so the point is that
link |
02:16:23.440
in a traditional programming language, the raw material of the programming language is just stuff
link |
02:16:28.560
that computers intrinsically do. And the point of Wolfram language is that what the language is
link |
02:16:35.520
talking about is things that exist in the world, or things that we can imagine and construct, not,
link |
02:16:41.760
it's not, it's not sort of, it's aimed to be an abstract language from the beginning. And so,
link |
02:16:47.440
for example, one feature it has is that it's a symbolic language, which means that, you know,
link |
02:16:52.000
the thing called, you'd have an X, just type in X. And Wolfram language would just say,
link |
02:16:57.440
oh, that's X. It won't say, error, undefined thing, you know, I don't know what this is,
link |
02:17:02.640
in terms of the internals of the computer. Now, that X could perfectly well be, you know,
link |
02:17:09.040
the city of Boston, that's a thing, that's a symbolic thing, or it could perfectly well be
link |
02:17:14.800
the, you know, the trajectory of some spacecraft represented as a symbolic thing.
link |
02:17:20.480
And that idea that one can work with sort of computationally work with these different,
link |
02:17:26.640
these kinds of things that, that exist in the world or describe the world, that's really powerful.
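(A small illustration of the symbolic idea being described; the Boston entity below is just one way of writing it.)

    (* An undefined symbol is a perfectly good value, not an error... *)
    x                 (* evaluates to the symbol x *)
    x + x             (* ...and algebra works on it: gives 2 x *)

    (* ...and a real-world thing can be a symbolic value too. *)
    Entity["City", {"Boston", "Massachusetts", "UnitedStates"}]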
link |
02:17:32.400
And that's what, I mean, you know, when I started designing, well, when I designed the predecessor
link |
02:17:38.320
of, of what's now Wolfram language, which is a thing called SMP, which was my first computer
link |
02:17:43.520
language, I, I kind of wanted to have this, this sort of infrastructure for computation,
link |
02:17:50.240
which was as fundamental as possible. I mean, this is what I got for having been a physicist and
link |
02:17:54.640
tried to find, you know, fundamental components of things and wound up with this kind of idea of
link |
02:18:01.600
transformation rules for symbolic expressions as being sort of the underlying stuff from which
link |
02:18:07.680
computation would be built. And that's what we've been building from in Wolfram language.
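(A minimal sketch of a transformation rule applied to a symbolic expression, the kind of primitive being described; the names f, a, and b are arbitrary.)

    (* Rewrite every subexpression matching the pattern f[x_] into x^2. *)
    f[a] + f[b] /. f[x_] -> x^2     (* gives a^2 + b^2 *)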
link |
02:18:13.680
And, you know, operationally, what happens, it's, I would say, by far the highest level
link |
02:18:20.000
computer language that exists. And it's really been built in a very different direction from
link |
02:18:26.240
other languages. So other languages have been about there is a core language, it really is kind
link |
02:18:32.560
of wrapped around the operations that a computer intrinsically does. Maybe people add libraries
link |
02:18:37.600
for this or that, that, but the goal of Wolfram language is to have the language itself be able
link |
02:18:43.920
to cover this sort of very broad range of things that show up in the world. And that means that,
link |
02:18:48.560
you know, there are 6,000 primitive functions in the Wolfram language that cover things, you know,
link |
02:18:54.400
I could probably pick at random here, I'm going to pick, just for fun, I'll pick some.
link |
02:19:00.880
Let's take a random sample of all the things that we have here. So let's just say random
link |
02:19:08.720
sample of 10 of them and let's see what we get. Wow, okay. So these are really different things
link |
02:19:14.960
from each other. These are all functions? These are all functions. BooleanConvert, okay, that's a thing
link |
02:19:20.080
for converting between different types of Boolean expressions. So for people just listening,
link |
02:19:26.800
Stephen typed a random sample of 10 names, so this is sampling from all functions. How many did you say
link |
02:19:31.680
there might be? 6,000. From 6,000, 10 of them. And there's a hilarious variety of them.
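(The sampling being done on screen amounts to something like this one-liner; Names["System`*"] lists the built-in System` symbols, which is where the roughly 6,000 functions live.)

    (* Pick 10 built-in names at random. *)
    RandomSample[Names["System`*"], 10]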
link |
02:19:37.680
Yeah, right. Well, we've got things like $RequesterAddress, that has to do with
link |
02:19:42.560
interacting with the world of the cloud and so on, DiscreteWaveletData, spheroid.
link |
02:19:50.400
Some graphical sort of window... Yeah, WindowMovable. That's a user interface kind of thing.
link |
02:19:55.280
I want to pick another 10 because I think this is some. Okay, so yeah, there's a lot of infrastructure
link |
02:20:00.480
stuff here that you see if you just start sampling at random, there's a lot of kind of
link |
02:20:04.160
infrastructural things. If you, you know, if you look more at some of the exciting
link |
02:20:08.160
machine learning stuff you showed off, is that also in this pool? Oh, yeah, yeah. I mean, you
link |
02:20:12.720
know, so one of those functions is like image identify as a function here. We just say image
link |
02:20:18.480
identify. I don't know. It's always good to let's do this. Let's say current image and let's pick
link |
02:20:23.840
up an image. Hopefully that's an image. Accessing the webcam to take a picture of yourself. It took a terrible
link |
02:20:31.840
picture. But anyway, we can say image identify open square brackets. And then we just paste
link |
02:20:38.080
that picture in there. Image identify function running on the picture. Oh, and it says, oh,
link |
02:20:42.800
wow, it says, I look like a plunger, because I've got this great big thing behind me.
link |
02:20:47.200
Classify. So this image identify classifies the most likely object in the image. So it was a plunger.
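(The calls being made on screen amount to something like the following sketch; the four-argument form asking for probabilities is one way to get the top-10 list discussed next.)

    (* Grab a frame from the webcam and ask what it most likely shows... *)
    img = CurrentImage[];
    ImageIdentify[img]

    (* ...and ask for the ten most likely identifications with their probabilities. *)
    ImageIdentify[img, All, 10, "Probability"]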
link |
02:20:54.000
Okay, that's a bit embarrassing. Let's see what it does. Let's pick the top 10.
link |
02:20:59.360
Okay. Well, it thinks there's a, oh, it thinks it's pretty unlikely that it's a primate, a
link |
02:21:03.840
hominid, a person, 8% probability. Yeah, that's, that's 57. It's a plunger. Yeah, well, hopefully
link |
02:21:10.160
it will not give you an existential crisis. And then 8%, or I shouldn't say percent, but no,
link |
02:21:17.280
that's right. 8% that it's a hominid. And yeah, okay, it's really I'm going to do another one
link |
02:21:22.720
of these just because I'm embarrassed that it didn't see me at all. There we go. Let's
link |
02:21:28.720
try that. Let's see what that did. We took a picture with a little bit, a little bit more of me
link |
02:21:35.920
and not just my bald head, so to speak. Okay, 89% probability it's a person. So that, so then I
link |
02:21:42.080
would, but you know, so this is image identify as an example of just one of them, just one
link |
02:21:47.920
function out of this part of the, that's like part of the whole language. Yes. I mean, you know,
link |
02:21:53.520
something like I could say, I don't know, let's find the Geo nearest. What could we find? Let's
link |
02:22:00.720
find the nearest volcano. Let's find the 10. I wonder where it thinks here is. Let's try finding
link |
02:22:10.480
the 10 volcanoes nearest here. Okay, so GeoNearest, volcano, here, 10 nearest volcanoes. Right,
link |
02:22:18.640
let's find out where those are. Now we've got a list of volcanoes out and I can say GeoListPlot
link |
02:22:23.040
of that and hopefully, okay, so there we go. So there's a map that shows the positions of
link |
02:22:29.440
those 10 volcanoes, on the east coast and the Midwest, and, well, no, we're okay. We're okay.
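(The volcano lookup just described amounts to something like the following sketch; Here gives the current location.)

    (* The 10 volcanoes nearest to here, plotted on a map. *)
    volcanoes = GeoNearest["Volcano", Here, 10];
    GeoListPlot[volcanoes]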
link |
02:22:34.720
There's no, it's not too bad. Yeah, they're not very close to us. We could we could measure how
link |
02:22:38.080
far away they are. But you know, the fact that right in the language, it knows about all the
link |
02:22:43.920
volcanoes in the world, it knows, you know, computing what the nearest ones are, it knows
link |
02:22:48.400
all the maps of the world and so on. It's a fundamentally different idea of what a language
link |
02:22:51.760
is. Yeah, right. And that's, that's why I like to talk about it as a, you know, full scale computational
link |
02:22:56.480
language. That's, that's what we've tried to do. And just if you can comment briefly, I mean, this
link |
02:23:01.040
kind of the Wolfram language, along with Wolfram Alpha represents kind of what the dream of what
link |
02:23:06.880
AI is supposed to be. There's now a sort of a craze of learning kind of idea that we can take raw
link |
02:23:13.840
data and from that extract the different hierarchies of abstractions, in order to be
link |
02:23:20.080
able to understand, like, in order to form the kind of things that Wolfram Language operates with.
link |
02:23:27.360
But we're very far from learning systems being able to form that. Right. Like, in the context of the history
link |
02:23:34.640
of AI, if you could just comment on, there is a, you said computation X. And there's just some
link |
02:23:40.880
sense where in the 80s and 90s sort of expert systems represented a very particular computation
link |
02:23:46.640
X. Yes. Right. And there's a kind of notion that those efforts didn't pan out. Right. But then
link |
02:23:53.840
out of that emerges kind of Wolfram language, Wolfram Alpha, which is the success. I mean,
link |
02:24:00.160
yeah, right. I think those are in some sense, those efforts were too modest.
link |
02:24:04.080
Right. They were, they were looking at particular areas. And you actually can't do it with a
link |
02:24:08.880
particular area. I mean, like, like even a problem like natural language understanding,
link |
02:24:12.800
it's critical to have broad knowledge of the world if you want to do good natural language
link |
02:24:16.720
understanding. And you kind of have to bite off the whole problem. If you if you say we're just
link |
02:24:21.440
going to do the blocks world over here, so to speak, you don't really, it's, it's, it's actually,
link |
02:24:26.720
it's one of these cases where it's easier to do the whole thing than it is to do some piece of it.
link |
02:24:30.720
You know, one comment to make about sort of the relationship between what we've tried to do and
link |
02:24:35.280
sort of the learning side of AI. You know, in a sense, if you look at the development of knowledge
link |
02:24:41.280
in our civilization as a whole, there was kind of this notion pre 300 years ago or so now,
link |
02:24:46.960
you want to figure something out about the world, you can reason it out, you can do things which
link |
02:24:50.960
would just use raw human thought. And then along came sort of modern mathematical science. And
link |
02:24:58.080
we found ways to just sort of blast through that by in that case, writing down equations.
link |
02:25:03.440
Now we also know we can do that with computation and so on. And so that was kind of a different
link |
02:25:08.320
thing. So when we look at how do we sort of encode knowledge and figure things out, one way we
link |
02:25:14.880
could do it is start from scratch, learn everything, it's just a neural net figuring everything out.
link |
02:25:20.560
But in a sense that denies the sort of knowledge based achievements of our civilization,
link |
02:25:26.160
because in our civilization, we have learned lots of stuff, we've surveyed all the volcanoes in the
link |
02:25:30.880
world, we've done, you know, we've figured out lots of algorithms for this or that. Those are
link |
02:25:35.840
things that we can encode computationally. And that's what we've tried to do. And we're not saying
link |
02:25:41.280
just, you don't have to start everything from scratch. So in a sense, a big part of what we've
link |
02:25:46.000
done is to try and sort of capture the knowledge of the world in computational form and computable
link |
02:25:52.480
form. Now, there's also some pieces which were for a long time undoable by computers like image
link |
02:25:59.680
identification, where there's a really, really useful module that we can add that is those
link |
02:26:06.000
things which actually were pretty easy for humans to do that had been hard for computers to do.
link |
02:26:10.800
I think the thing that's interesting that's emerging now is the interplay between these
link |
02:26:14.560
things between this kind of knowledge of the world, that is in a sense very symbolic, and this kind
link |
02:26:19.680
of sort of much more statistical kind of things like image identification and so on, and putting
link |
02:26:27.840
those together by having this sort of symbolic representation of image identification, that's
link |
02:26:34.240
where things get really interesting and where you can kind of symbolically represent patterns of
link |
02:26:38.400
things and images and so on. I think that's, you know, that's kind of a part of the path forward,
link |
02:26:44.560
so to speak. Yeah, so the dream of the machine learning is not, in my view, I think the view
link |
02:26:50.480
of many people is not anywhere close to building the kind of wide world of computable knowledge
link |
02:26:57.680
that Wolfram Language has built. But because you've done the incredibly
link |
02:27:04.800
hard work of building this world, now machine learning can serve as tools to help
link |
02:27:11.120
you explore that world. Yeah, yeah. And that's what you've added, I mean, with the version 12,
link |
02:27:15.840
right, you added a few, I was seeing some demos, it looks amazing. Right, I mean, I think, you know,
link |
02:27:21.520
this, it's sort of interesting to see the, there's sort of the once it's computable, once it's in
link |
02:27:28.480
there, it's running in sort of a very efficient computational way. But then there's sort of
link |
02:27:32.640
things like the interface of how do you get there, you know, how do you do natural language
link |
02:27:35.920
understanding to get there, how do you, how do you pick out entities in a big piece of text or
link |
02:27:40.160
something. That's, I mean, actually a good example right now is our NLP NLU loop, which is we've done
link |
02:27:47.520
a lot of stuff, natural language understanding, using essentially not learning based methods,
link |
02:27:53.040
using a lot of, you know, little algorithmic methods, human curation methods and so on,
link |
02:27:58.320
just when people try to enter a query, and then the process of converting,
link |
02:28:04.000
NLU, defined beautifully as converting their query into a computational
link |
02:28:11.360
language, which is, first of all, a super practical definition, a very useful definition,
link |
02:28:17.360
and then also a very clear definition of natural language understanding.
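As a rough illustration of that definition, not the internal Wolfram|Alpha pipeline but the documented Wolfram Language functions that expose the same idea:

    SemanticInterpretation["population of Boston divided by population of New York"]
    (* converts a natural-language query into a Wolfram Language expression that can then be evaluated *)

    Interpreter["City"]["boston"]
    (* the narrower case: interpret a string as a specific City entity *)

Understanding, on this definition, means producing a precise, evaluable computational-language expression.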
link |
02:28:21.840
Right. I mean, a different thing is natural language processing where it's like,
link |
02:28:25.520
here's a big lump of text, go pick out all the cities in that text, for example. And so a good
link |
02:28:30.960
example of, you know, so we do that, we're using, using modern machine learning techniques.
link |
02:28:36.960
And it's actually kind of kind of an interesting process that's going on right now is this loop
link |
02:28:41.280
between what do we pick up with NLP using machine learning versus what do we pick up with our more
link |
02:28:47.920
kind of precise computational methods and natural language understanding.
link |
02:28:51.840
And so we've got this kind of loop going between those, which is improving both of them.
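For the pick-out-the-cities side, a minimal sketch using the built-in, machine-learning-backed text functions (illustrative; depending on options the results come back as strings or as entities):

    TextCases["I flew from Boston to San Francisco via Chicago.", "City"]
    (* extracts the city mentions from free text, the NLP half of the loop *)

The NLU half is the query-to-computational-language conversion described above.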
link |
02:28:55.440
Yeah. And I think you have some of the state of the art transformers. You have BERT in there,
link |
02:28:58.640
I think. Oh, yeah. So of course, you're integrating all the models. I mean,
link |
02:29:02.800
this is the hybrid thing that people have always dreamed about or talking about.
link |
02:29:07.520
I'm actually just surprised, frankly, that Wolfram language is not more popular than it already is.
link |
02:29:15.200
You know, that's, that's a, that's a, it's a, it's a complicated issue because it's like,
link |
02:29:21.280
it involves, you know, it involves ideas and ideas are absorbed, absorbed slowly in the world.
link |
02:29:29.280
I mean, I think that's, and then there's sort of like what we're talking about,
link |
02:29:32.000
there's egos and personalities and some of the, the absorption, absorption mechanisms of ideas
link |
02:29:39.040
have to do with personalities and the students of personalities and the, and then a little
link |
02:29:44.480
social network. So it's, it's interesting how the spread of ideas works.
link |
02:29:48.320
You know, what's funny with Wolfram language is that we are, if you say, you know, what market,
link |
02:29:54.400
sort of market penetration, if you look at the, I would say very high end of R&D and sort of the,
link |
02:30:00.880
the people where you say, wow, that's a really, you know, impressive smart person,
link |
02:30:05.920
they're very often users of, of, of Wolfram language very, very often.
link |
02:30:09.600
If you look at the more sort of, it's a funny thing. If you look at the more kind of, I would say,
link |
02:30:14.800
people who are like, oh, we're just plodding away doing what we do.
link |
02:30:19.040
They're often not yet, Wolfram language users and that dynamic is kind of odd that there hasn't been
link |
02:30:24.480
more rapid trickle down because we've really, you know, the high end, we've really been very
link |
02:30:29.600
successful for a long time. And, you know, that's partly, I think,
link |
02:30:36.480
a consequence of my fault in a sense, because it's kind of, you know, I have a company which
link |
02:30:42.800
is really emphasizes sort of creating products and building a sort of the best possible
link |
02:30:51.440
technical tower we can, rather than sort of doing the commercial side of things and pumping it out
link |
02:30:58.160
and sort of the most effective way. And there's an interesting idea that, you know,
link |
02:31:01.840
perhaps you can make it more popular by opening everything, everything up sort of the GitHub
link |
02:31:07.440
model. But there's an interesting, I think I've heard you discuss this, that that turns out not
link |
02:31:12.160
to work in a lot of cases, like in this particular case that you want it, that, that when you deeply
link |
02:31:18.320
care about the integrity, the quality of the knowledge that you're building, that unfortunately
link |
02:31:27.360
you can't, you can't distribute that effort. Yeah, it's not the nature of how things work. I mean,
link |
02:31:34.800
you know, what we're trying to do is a thing that for better or worse, requires leadership,
link |
02:31:40.240
and it requires kind of maintaining a coherent vision over a long period of time, and doing
link |
02:31:46.640
not only the cool vision-related work, but also the kind of mundane, in-the-trenches work to make the
link |
02:31:52.960
thing actually work well. So how do you build the knowledge? Because that's the fascinating
link |
02:31:58.480
thing. The fascinating and the mundane is building the knowledge,
link |
02:32:03.360
the adding, integrating more data. Yeah, I mean, that's probably not the most, I mean, the things
link |
02:32:08.240
like get it to work in all these different cloud environments and so on. That's pretty,
link |
02:32:13.040
you know, it's very practical stuff, you know, have the user interface be smooth and, you know,
link |
02:32:17.360
have it take only, you know, a fraction of a millisecond to do this or that.
link |
02:32:21.360
That's a lot of work. And it's, but, you know, I think my, it's an interesting thing over the
link |
02:32:29.360
period of time, you know, Wolfram Language has existed basically for more than half of the
link |
02:32:34.480
total amount of time that any language, any computer language has existed. That is,
link |
02:32:38.480
computer languages are maybe 60 years old, you know, give or take, and Wolfram Language is 33 years old.
link |
02:32:46.080
So it's, it's kind of a, and I think I was realizing recently there's been more innovation
link |
02:32:52.480
in the distribution of software than probably in the structure of programming languages
link |
02:32:57.520
over that period of time. And we, you know, we've been sort of trying to do our best to adapt to
link |
02:33:03.600
it. And the good news is that we have, you know, because I have a simple private company and so
link |
02:33:08.560
on that doesn't have, you know, a bunch of investors, you know, telling us we got to do this so that
link |
02:33:13.680
we have lots of freedom in what we can do. And so for example, we're able to, oh, I don't know,
link |
02:33:18.800
we have this free Wolfram engine for developers, which is a free version for developers. And we've
link |
02:33:23.120
been, you know, there are site licenses for Mathematica and Wolfram Language at
link |
02:33:28.480
basically all major universities, certainly in the US by now. So it's effectively free to people
link |
02:33:34.320
and all universities in effect. And, you know, we've been doing a progression of things. I mean,
link |
02:33:41.200
different things like Wolfram Alpha, for example, the main website is just a free website.
link |
02:33:46.480
What is Wolfram Alpha? Okay, Wolfram Alpha is a system for answering questions where
link |
02:33:52.560
you ask a question with natural language, and it'll try and generate a report telling you the
link |
02:33:57.840
answer to that question. So the question could be something like, you know, what's the population
link |
02:34:04.000
of Boston divided by the population of New York, and it'll take those words and give you
link |
02:34:10.400
an answer.
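In Wolfram Language itself, that round trip is exposed directly; a minimal sketch using the documented WolframAlpha function (network access to the Wolfram|Alpha servers is assumed):

    WolframAlpha["population of Boston divided by population of New York"]
    (* sends the natural-language query and returns the formatted result *)

    WolframAlpha["population of Boston divided by population of New York", "Result"]
    (* just the short answer, usable in further computation *)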
link |
02:34:17.520
And that converts the words into Wolfram Language, into computational language. And then, do you think the underlying
link |
02:34:21.920
knowledge belongs to Wolfram Alpha or to the Wolfram language? What's the, we just call it the
link |
02:34:26.480
Wolfram Knowledgebase. I mean, it's been a, that's been a big effort over
link |
02:34:31.520
the decades to collect all that stuff and, you know, more of it flows in every second. So
link |
02:34:35.840
can you, can you just pause on that for a second? Like, that's one of the most incredible things.
link |
02:34:41.360
Of course, in the long term, Wolfram language itself is the fundamental thing. But in the
link |
02:34:48.240
amazing sort of short term, the, the knowledge base is kind of incredible. So what's the process
link |
02:34:54.960
of building in that knowledge base? The fact that you first of all, from the very beginning,
link |
02:34:59.440
that you're brave enough to start to take on the general knowledge base. And how do you go from
link |
02:35:07.120
zero to the incredible knowledge base that you have now? Well, yeah, it was kind of scary at
link |
02:35:12.720
some level. I mean, I had, I had wondered about doing something like this since I was a kid.
link |
02:35:16.880
So it wasn't like I hadn't thought about it for a while. But most of us, most of the brilliant
link |
02:35:22.320
dreamers give up such a, such a difficult engineering notion at some point. Right. Right.
link |
02:35:28.160
Well, the thing that happened with me, which was kind of, it's a, it's a live your own paradigm
link |
02:35:33.680
kind of theory. So basically what happened is I had assumed that to build something like Wolfram Alpha
link |
02:35:39.840
would require sort of solving the general AI problem. That's what I had assumed. And so I
link |
02:35:44.960
kept on thinking about that. And I thought I don't really know how to do that. So I don't do
link |
02:35:48.720
anything. Then I worked on my new kind of science project instead of exploring the computational
link |
02:35:53.760
universe and came up with things like this principle of computational equivalence,
link |
02:35:57.680
which says there is no bright line between the intelligent and the merely computational.
link |
02:36:02.800
So I thought, look, that's this paradigm I've built. Now it's, now I have to eat that dog food
link |
02:36:09.040
myself, so to speak. I've been thinking about doing this thing with computable knowledge
link |
02:36:13.280
forever. And let me actually try and do it. And so it was, if my paradigm is right,
link |
02:36:20.320
then this should be possible. But the beginning was certainly, it was a bit daunting. I remember
link |
02:36:25.040
I took the early team to a big reference library. And we're like looking at this reference
link |
02:36:30.480
library. And it's like, my basic statement is our goal over the next year or two is to ingest
link |
02:36:36.240
everything that's in here. And that's, it seemed very daunting. But in a sense, I was well aware
link |
02:36:43.280
of the fact that it's finite, the fact that you can walk into the reference library. It's a big,
link |
02:36:47.200
big thing with lots of reference books all over the place. But it is finite. This is not an
link |
02:36:52.240
infinite, you know, it's not the infinite corridor of, so to speak, of reference libraries, not truly
link |
02:36:58.240
infinite, so to speak. But, but no, I mean, and then, then what happened was sort of interesting
link |
02:37:03.600
there was from a methodology point of view was, I didn't start off saying, let me have a grand
link |
02:37:10.160
theory for how all this knowledge works. It was like, let's, you know, implement this area, this
link |
02:37:15.760
area, this area of a few hundred areas and so on. It's a lot of work. I also found that, you know,
link |
02:37:22.960
I've been fortunate in that our products get used by sort of the world's experts in lots of areas.
link |
02:37:31.680
And so that really helped because we were able to ask people, you know, the world expert in this
link |
02:37:36.800
or that, you know, we're able to ask them for input and so on. And I found that my general
link |
02:37:41.840
principle was that any area where there wasn't some expert who helped us figure out what to do
link |
02:37:48.320
wouldn't be right. You know, because our goal was to kind of get to the point where we had
link |
02:37:52.880
sort of true expert level knowledge about everything. And so that, you know, that the
link |
02:37:57.520
ultimate goal is if there's a question that can be answered on the basis of general knowledge and
link |
02:38:02.400
our civilization, make it be automatic to be able to answer that question. And, you know, and now,
link |
02:38:07.600
well, Wolfram Alpha got used in Siri from the very beginning, and it's now also used in Alexa. And
link |
02:38:12.480
so it's people are kind of getting more of the, you know, they get more of the sense of this is
link |
02:38:18.480
what should be possible to do. I mean, in a sense, the question answering problem was viewed as one
link |
02:38:24.720
of the sort of core AI problems for a long time. And I had kind of an interesting experience. I had
link |
02:38:30.080
a friend, Marvin Minsky, who was a well known AI person from right around here. And so I had
link |
02:38:37.520
a friend, and I remember when Wolfram Alpha was coming out, it was a few weeks before it came out,
link |
02:38:42.880
I think, I happened to see Marvin. And I said, I should show you this thing we have, you know,
link |
02:38:48.160
it's a question answering system. And he was like, okay, type something in, it's like, okay, fine.
link |
02:38:54.720
And then he's talking about something different. I said, no, Marvin, you know, this time, it actually
link |
02:39:00.000
works. You know, look at this, it actually works. He's typed in a few more things. There's maybe
link |
02:39:04.960
10 more things. Of course, we have a record of what he typed in, which is kind of interesting.
link |
02:39:12.320
Can you share where his mind was in the testing space?
link |
02:39:16.640
All kinds of random things. He's just trying random stuff, you know, medical stuff and,
link |
02:39:20.960
you know, chemistry stuff and, you know, astronomy and so on. I think it was like,
link |
02:39:25.360
like, you know, after a few minutes, he was like, oh, my God, it actually works.
link |
02:39:29.920
But that kind of told you something about the state of, you know, what had happened in AI. Because
link |
02:39:37.360
people had, you know, in a sense, by trying to solve the bigger problem, we were able to
link |
02:39:42.080
actually make something that would work. Now, to be fair, you know, we had a bunch of completely
link |
02:39:46.480
unfair advantages. For example, we had already built a bunch of Wolfram Language, which was,
link |
02:39:51.120
you know, very high level symbolic language. We had, you know, I had the practical experience
link |
02:39:57.760
of building big systems. I have the sort of intellectual confidence to not just sort of
link |
02:40:04.000
give up and doing something like this. I think that the, you know, it is a, it's always a funny
link |
02:40:11.920
thing, you know, I've worked on a bunch of big projects in my life. And I would say that the,
link |
02:40:17.120
you know, you mentioned ego, I would also mention optimism. So it doesn't be, I mean,
link |
02:40:22.480
in, you know, if somebody said, this project is going to take 30 years, it's, you know,
link |
02:40:30.400
it would be hard to sell me on that. You know, I'm always in the, in the, well, I can kind of see
link |
02:40:35.840
a few years, you know, something's going to happen in a few years. And usually it does,
link |
02:40:40.320
something happens in a few years, but the whole, the tail can be decades long. And that's a,
link |
02:40:45.760
that's a, you know, and from a personal point of view, always the challenge is,
link |
02:40:49.440
you end up with these projects that have infinite tails. And the question is,
link |
02:40:53.760
do the tails kind of, do you just drown in kind of dealing with all of the tails of these projects?
link |
02:41:00.400
And that's, that's an interesting sort of personal challenge. And like,
link |
02:41:04.880
my efforts now to work on fundamental theory of physics, which I've just started doing,
link |
02:41:09.600
and I'm having a lot of fun with it. But it's kind of, you know, it's, it's kind of making
link |
02:41:15.360
a bet that I can, I can kind of, you know, I can do that as well as doing the incredibly energetic
link |
02:41:22.800
things that I'm trying to do with Wolfram Language and so on. I mean, the vision, yeah.
link |
02:41:26.960
And underlying that, I mean, I just talked for the second time with Elon Musk and that you two
link |
02:41:32.640
share that quality a little bit of that optimism of taking on basically the daunting, what most
link |
02:41:40.160
people call impossible. And he and you take it on out of, you can call it ego, you can call it
link |
02:41:47.840
naivety, you can call it optimism, whatever the heck it is. But that's how you solve the impossible
link |
02:41:52.400
things. Yeah, I mean, look at what happens. And I don't know, you know, in my own case,
link |
02:41:58.560
you know, it's been, I progressively got a bit more confident and progressively able to,
link |
02:42:03.600
you know, decide that these projects aren't crazy. But then the other thing is the other,
link |
02:42:07.760
the other trap that one can end up with is, Oh, I've done these projects and they're big.
link |
02:42:13.680
Let me never do a project that's any smaller than any project I've done so far.
link |
02:42:18.080
And that's, you know, and that can be a trap. And often these projects are
link |
02:42:23.520
of completely unknown, you know, that their depth and significance is actually very hard to know.
link |
02:42:29.440
Yeah. On the sort of building this giant knowledge base that's behind Wolfram language,
link |
02:42:37.040
Wolfram Alpha. What do you think about the internet? What do you think about, for example,
link |
02:42:45.680
Wikipedia, these large aggregations of texts that's not converted into computable knowledge?
link |
02:42:53.360
Do you think, if you look at Wolfram language, Wolfram Alpha 2030, maybe 50 years down the line,
link |
02:43:00.720
do you hope to store all of the sort of Google's dream is to make all information searchable,
link |
02:43:09.440
accessible? But that, as defined, doesn't include the understanding
link |
02:43:16.160
of information. Right. Do you hope to make all of knowledge represented within it? I would hope so.
link |
02:43:25.440
That's what we're trying to do. I mean, how hard is that problem? Like closing that gap?
link |
02:43:29.600
What's your sense? Well, it depends on the use cases. I mean, so if it's a question of
link |
02:43:33.440
answering general knowledge questions about the world, we're in pretty good shape on that right
link |
02:43:36.800
now. If it's a question of representing like an area that we're going into right now is computational
link |
02:43:44.800
contracts, being able to take something which would be written in legalese, it might even be
link |
02:43:50.800
the specifications for, you know, what should the self driving car do when it encounters this or that
link |
02:43:55.200
or the other? What should the, you know, whatever, the, you know, write that in a computational
link |
02:44:00.640
language and be able to express things about the world. You know, if the creature that you see
link |
02:44:06.240
running across the road is a, you know, thing at this point in the, you know, evolutionary tree of life,
link |
02:44:12.080
then swerve this way, otherwise don't, those kinds of things.
link |
02:44:15.600
Are there ethical components when you start to get to some of the messy human things,
link |
02:44:20.240
are those encodeable into computable knowledge?
link |
02:44:23.360
Well, I think that it is a necessary feature of attempting to automate more in the world
link |
02:44:29.840
that we encode more and more of ethics in a way that gets sort of quickly, you know, is able to
link |
02:44:36.960
be dealt with by computer. I mean, I've been involved recently, I sort of got backed into
link |
02:44:41.520
being involved in the question of automated content selection on the internet. So, you know,
link |
02:44:47.600
the Facebook, Google's, Twitter's, you know, what, how do they rank the stuff they feed to us humans,
link |
02:44:53.040
so to speak. And the question of what are, you know, what should never be fed to us? What should
link |
02:44:58.480
be blocked forever? What should be upranked, you know, and what is the, what are the kind of
link |
02:45:03.120
principles behind that? And what I kind of, well, a bunch of different things are realized about
link |
02:45:08.480
that. But one thing that's interesting is being able, you know, in fact, you're building sort of an
link |
02:45:14.640
AI ethics, you have to build an AI ethics module in effect to decide, is this thing so shocking,
link |
02:45:20.960
I'm never going to show it to people, is this thing so whatever. And I did realize in thinking
link |
02:45:26.640
about that, that, you know, there's not going to be one of these things, it's not possible to decide,
link |
02:45:31.520
or it might be possible, but it would be really bad for the future of our species if we just decided
link |
02:45:36.720
there's this one AI ethics module, and it's going to determine the practices of everything in the
link |
02:45:43.600
world, so to speak. And I kind of realized one has to sort of break it up. And that's an interesting
link |
02:45:48.320
societal problem of how one does that, and how one sort of has people sort of self identify for,
link |
02:45:54.720
you know, I'm buying in, in the case of just content selection, it's sort of easier,
link |
02:45:58.480
because it's like an individual, it's for an individual, it's not something that
link |
02:46:01.760
kind of cuts across sort of societal boundaries. But it's a really interesting notion
link |
02:46:09.680
of, I heard you describe, I really like it sort of, maybe in the sort of have different AI systems
link |
02:46:18.240
that have a certain kind of brand that they represent essentially, you can have like, I don't
link |
02:46:22.800
know, whether it's conservative or liberal, and then libertarian, and there's a
link |
02:46:29.120
Randian, Objectivist AI system, and different ethical, I mean, it's almost
link |
02:46:36.240
encoding some of the ideologies which we've been struggling with. I come from the Soviet Union;
link |
02:46:40.800
that didn't work out so well with the ideologies they worked out there. And so you
link |
02:46:45.200
have, well, everybody purchased that particular ethics system. And in the same way, I
link |
02:46:52.720
suppose, that system could be encoded into computational knowledge,
link |
02:47:00.240
and allow us to explore in the realm of individual spaces, that's a really exciting
link |
02:47:05.600
possibility. Are you playing with those ideas in Wolfram language?
link |
02:47:10.080
Yeah, yeah, I mean, the, you know, that's we, Wolfram language has sort of the best
link |
02:47:14.960
opportunity to kind of express those essentially computational contracts about what to do.
link |
02:47:19.360
Now, there's a bunch more work to be done to do it in practice for, you know, deciding,
link |
02:47:24.880
is this a credible news story, what does that mean, or whatever else you're going to pick.
link |
02:47:29.360
I think that that's, you know, that's the question of exactly what we get to do with that is,
link |
02:47:38.320
you know, for me, it's kind of a complicated thing, because there are these big projects
link |
02:47:43.280
that I think about, like, you know, find the fundamental theory of physics, okay, that's
link |
02:47:46.720
box number one, right? Box number two, you know, solve the AI ethics problem in the case of,
link |
02:47:52.240
you know, figure out how you rank all content, so to speak, and decide what people see,
link |
02:47:57.200
that's kind of a box number two, so to speak. These are big projects. And I think
link |
02:48:01.440
What do you think is more important? The fundamental nature of reality or
link |
02:48:06.160
Depends who you ask. It's one of these things, it's exactly like, you know, what's the ranking,
link |
02:48:10.400
right? It's the ranking system. It's like, whose module do you use to rank that?
link |
02:48:15.520
If you, and I think, but having multiple modules is a really compelling notion to us humans,
link |
02:48:21.840
that in a world where it's not clear that there's a right answer,
link |
02:48:25.840
perhaps you have systems that operate under different, how would you say it? I mean,
link |
02:48:35.760
it's different value systems, different value systems. I mean, I think, you know, in a sense,
link |
02:48:39.840
the, I mean, I'm not really a politics-oriented person, but, you know, in kind of totalitarianism,
link |
02:48:46.640
it's kind of like, you're going to have this system, and that's the way it is. I mean,
link |
02:48:52.320
kind of the, you know, the concept of sort of a market based system, where you have, okay, I as a
link |
02:48:58.560
human, I'm going to pick this system, I as another human, I'm going to pick this system. I mean,
link |
02:49:03.360
that's in a sense, this case of automated content selection is a non trivial, but it is probably
link |
02:49:10.800
the easiest of the AI ethics situations, because it is each person gets to pick for themselves.
link |
02:49:15.840
And there's not a huge interplay between what different people pick. By the time you're dealing
link |
02:49:21.200
with other societal things, like, you know, what should the policy of the central bank be or something?
link |
02:49:27.120
Yeah, or healthcare system or some of all those kind of centralized kind of things.
link |
02:49:30.560
Right. Well, I mean, healthcare again has the feature that, that at some level, each person
link |
02:49:35.040
can pick for themselves, so to speak. I mean, whereas there are other things where there's a
link |
02:49:39.120
necessary public health as one example, where that's not, where that doesn't get to be, you know,
link |
02:49:45.040
something which people can, what they pick for themselves, they may impose on other people,
link |
02:49:49.600
and then it becomes a more non trivial piece of sort of political philosophy.
link |
02:49:53.200
Of course, the central banking system, some would argue, we need to move
link |
02:49:56.800
into digital currency and so on and Bitcoin and ledgers and so on. So there's a lot of,
link |
02:50:02.800
we've been quite involved in that. And that's where that's sort of the motivation for computational
link |
02:50:07.200
contracts in part comes out of, you know, this idea, oh, we can just have this autonomously
link |
02:50:12.880
executing smart contract. The idea of a computational contract is just to say,
link |
02:50:19.040
you know, have something where all of the conditions of the contract are represented
link |
02:50:23.440
in computational form. So in principle, it's automatic to execute the contract.
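A toy sketch of that idea in Wolfram Language; this is not Wolfram's actual computational-contract framework, just an illustration of contract terms being symbolic and machine-evaluable rather than prose, with made-up field names and thresholds:

    (* a delivery contract whose conditions a machine can check directly *)
    contract[delivery_Association] :=
      If[AbsoluteTime[delivery["Date"]] <= AbsoluteTime[DateObject[{2020, 1, 31}]] &&
         delivery["Quantity"] >= 100,
       "PayInFull",
       "PayPenalty"]

    contract[<|"Date" -> DateObject[{2020, 1, 15}], "Quantity" -> 120|>]   (* -> "PayInFull" *)

Because every condition is an evaluable expression, executing the contract is just evaluation; nobody has to argue about what the prose meant.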
link |
02:50:28.560
And I think that's, you know, that will surely be the future of, you know, the idea of legal
link |
02:50:33.920
contracts written in English or legalese or whatever, where people have to argue
link |
02:50:38.880
about what goes on, is surely not the future; you know, we'll have a much more streamlined process,
link |
02:50:46.480
if everything can be represented computationally and the computers can kind of decide what to do.
link |
02:50:50.400
I mean, ironically enough, you know, old Gottfried Leibniz back in the 1600s
link |
02:50:56.320
was saying exactly the same thing. But he had, you know, his pinnacle of technical achievement
link |
02:51:01.920
was this brass four function mechanical calculator thing that never really worked properly,
link |
02:51:06.880
actually. And, you know, so he was like 300 years too early for that idea. But now that idea is
link |
02:51:14.400
pretty realistic, I think. And, you know, you ask how much more difficult is it than what we have
link |
02:51:19.120
now in Wolfram Language, to express, I call it symbolic discourse language, being able to
link |
02:51:24.320
express sort of everything in the world in kind of computational symbolic form. I think it is
link |
02:51:31.040
absolutely within reach. I mean, I think it's a, you know, I don't know, maybe I'm just too much
link |
02:51:35.040
of an optimist, but I think it's a it's a limited number of years to have a pretty well built out
link |
02:51:39.760
version of that, that will allow one to encode the kinds of things that are relevant to typical
link |
02:51:45.600
legal contracts and these kinds of things. The idea of symbolic discourse language,
link |
02:51:52.720
can you try to define the scope of what it is? So we're having a conversation. It's a natural
link |
02:52:01.200
language. Can we have a representation of the sort of actionable parts of that conversation
link |
02:52:08.160
in a precise computable form so that a computer could go do it? And not just contracts, but really
link |
02:52:13.840
sort of some of the things we think of as common sense, essentially, even just like basic notions
link |
02:52:20.000
of human life. Well, I mean, things like, you know, I am, I'm getting hungry and want to eat
link |
02:52:25.920
something, right? That's something we don't have a representation of, you know, in Wolfram
link |
02:52:30.000
Language right now. If I was like, I'm eating blueberries and raspberries and things like
link |
02:52:33.920
that, and I'm eating this amount of them, we know all about those kinds of fruits and plants and
link |
02:52:38.320
nutrition content and all that kind of thing. But the "I want to eat them" part of it is not covered yet.
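To make the split concrete, a hedged sketch; the query string is free-form natural language, so the phrasing is only illustrative:

    WolframAlpha["nutritional content of 10 blueberries", "Result"]
    (* the factual side of the sentence is already computable from the knowledge base *)

    (* the intentional side has no built-in symbolic form; a symbolic discourse language
       would need some construct like Desires[person, Eat[blueberries]], which is
       hypothetical notation here, not an existing Wolfram Language function *)

The Desires/Eat expression is purely made-up notation to show what is missing.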
link |
02:52:43.600
And that, you know, you need to do that in order to have a complete symbolic discourse
link |
02:52:49.600
language to be able to have a natural language conversation. Right. Right. To be able to express
link |
02:52:54.560
the kinds of things that say, you know, if it's a legal contract, it's, you know, the party desires
link |
02:53:00.320
to have this and that. And that's, you know, that's a thing like I want to eat a raspberry or something.
link |
02:53:05.520
But isn't that the, isn't this, just like you said, it's centuries old, this dream?
link |
02:53:12.000
Yes. But it's also, more near-term, the dream of Turing in formulating the Turing test.
link |
02:53:20.240
Yes. So do you hope, do you think that's the ultimate test of creating something special?
link |
02:53:32.160
Because we said... I don't know. I mean, look, if the test is, does it walk and talk like
link |
02:53:38.880
a human? Well, that's just the talking-like-a-human part. But the answer is, it's an okay test.
link |
02:53:46.160
If you say, is it a test of intelligence? You know, people have attached Wolfram Alpha,
link |
02:53:51.120
the Wolfram Alpha API, to, you know, Turing test bots. And those bots just lose immediately.
link |
02:53:56.880
Because all you have to do is ask it five questions that, you know, are about really obscure weird
link |
02:54:02.000
pieces of knowledge. And it just trots them right out. And you say, that's not a human.
link |
02:54:05.920
Right? It's a, it's a different thing. It's achieving a different, you know,
link |
02:54:10.240
right now. But I would argue, it's not a different thing. It's actually
link |
02:54:16.320
legitimately, Wolfram Alpha, Wolfram Alpha and Wolfram Language, is legitimately trying to
link |
02:54:23.440
solve the intent of the Turing test. Perhaps the intent. Yeah, perhaps the intent.
link |
02:54:28.320
I mean, it's actually kind of fun, you know. Turing, trying to work this out, thought about
link |
02:54:33.120
taking Encyclopedia Britannica and, you know, making it computational in some way. And he
link |
02:54:37.840
estimated how much work it would be. And actually, I have to say, he was a bit more pessimistic than
link |
02:54:42.800
the reality. We did it more efficiently than that. But to him that represented.
link |
02:54:47.920
So I mean, he was on the same mental task. Yeah, right. He had the same idea.
link |
02:54:52.160
I mean, it was, you know, we were able to do it more efficiently because we had a lot,
link |
02:54:56.080
we had layers of automation that he, I think hadn't, you know, it's, it's hard to imagine those
link |
02:55:01.840
layers of abstraction that end up being, being built up. But to him, it represented
link |
02:55:06.800
like an impossible task, essentially. Well, he thought it was difficult. He thought it was,
link |
02:55:10.960
you know, maybe if he'd lived another 50 years, he would have been able to do it. I don't know.
link |
02:55:14.640
In the interest of time, easy questions. Go through. What is intelligence? You talk about.
link |
02:55:21.840
I love the way you say easy questions. Yeah. You talked about sort of rule 30 and
link |
02:55:29.360
cellular automata humbling your sense of human beings having a monopoly on intelligence. But
link |
02:55:38.640
in your, in retrospect, just looking broadly now with all the things you learn from computation,
link |
02:55:43.600
what is intelligence? How does intelligence arise? Yeah, I don't think there's a bright
link |
02:55:48.560
line of what intelligence is. I think intelligence is at some level just computation. But for us,
link |
02:55:55.760
intelligence is defined to be computation that is doing things we care about. And, you know, that's,
link |
02:56:02.720
that's a very special definition. It's a very, you know, when you try and, try and make it up,
link |
02:56:07.600
you know, you try and say, well, intelligence to this is problem solving. It's doing general
link |
02:56:11.280
this, it's doing that. This doesn't the other thing. It's, it's operating within a human environment
link |
02:56:16.240
type thing. Okay. You know, that's fine. If you say, well, what's intelligence in general?
link |
02:56:21.200
You know, that's, I think that question is totally slippery and doesn't really have an answer. As soon
link |
02:56:28.640
as you say, what is it in general, it quickly segues into this is what this is just computation,
link |
02:56:35.440
so to speak. But in the sea of computation, how many things, if we were to pick randomly,
link |
02:56:42.400
is your sense would have the kind of impressive to us humans levels of intelligence, meaning it
link |
02:56:49.760
could do a lot of general things that are useful to us humans. Right. Well, according to the principle
link |
02:56:56.400
of computational equivalence, lots of them. I mean, in, in, you know, if you ask me, just in
link |
02:57:01.920
cellular automata or something, I don't know, it's maybe 1%, a few percent, achieve, it varies,
link |
02:57:07.920
actually. It's a little bit as you get to slightly more complicated rules, the chance that there'll
link |
02:57:13.200
be enough stuff there to, to sort of reach this kind of equivalence point that makes it maybe
link |
02:57:20.240
10 or 20% of all of them.
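For a concrete sense of sampling at random from this space, a small Wolfram Language sketch (the number of rules drawn and the step count are arbitrary choices for illustration):

    rules = RandomSample[Range[0, 255], 6];      (* pick a few elementary cellular automaton rules at random *)
    GraphicsRow[ArrayPlot[CellularAutomaton[#, {{1}, 0}, 200]] & /@ rules]
    (* run each rule for 200 steps from a single black cell and look at the behavior *)

Eyeballing samples like this is roughly how one gets a feel for what fraction of rules show this kind of rich behavior.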
link |
02:57:25.520
So it's very disappointing really. I mean, it's kind of like, you know, we think there's this whole long sort of biological evolution, kind of intellectual
link |
02:57:31.760
evolution, cultural evolution, that our species has gone through. It's kind of disappointing
link |
02:57:36.800
to think that that hasn't achieved more, but it has achieved something very special to us. It
link |
02:57:42.400
just hasn't achieved something generally more, so to speak. But what do you think about this extra
link |
02:57:49.200
feels like human thing of subjective experience of consciousness? What is consciousness?
link |
02:57:54.640
Well, I think it's a deeply slippery thing. And I'm, I'm always, I'm always wondering what my
link |
02:57:59.040
cellular automata feel. I mean, I think, what do they feel? Now you're wondering as an observer.
link |
02:58:05.040
Yeah, yeah, yeah, right. Who's to know? I mean, I think that the, you think, sorry to interrupt,
link |
02:58:09.360
do you think consciousness can emerge from computation? Yeah, I mean, everything, whatever
link |
02:58:15.200
you mean by it, it's going to be, I mean, you know, look, I have to tell a little story. I was at an
link |
02:58:20.720
AI ethics conference fairly recently, and people were, I think maybe I brought it up, but I was
link |
02:58:26.720
like talking about rights of AI's. When will AI's, when, when should we think of AI's as
link |
02:58:32.160
having rights? When should we think that it's immoral to destroy the memories of AI's,
link |
02:58:38.640
for example, those kinds of things. And somebody, a philosopher in this case,
link |
02:58:44.000
it's usually the techies who are the most naive, but, but in this case, it was a philosopher who,
link |
02:58:50.240
who sort of piped up and said, well, you know, the AI's will have rights when we know that they
link |
02:59:01.200
have consciousness. And I'm like, good luck with that. I mean, it's, it's a, I mean, this is a, you
link |
02:59:08.880
know, it's a very circular thing. You end up, you'll end up saying this thing that has sort of,
link |
02:59:14.480
you know, when you talk about having subjective experience, I think that's just another one
link |
02:59:18.560
of these words that doesn't really have a, you know, there's no ground truth definition of what
link |
02:59:25.360
that means. By the way, I would say I, I do personally think that it'll be a time when AI
link |
02:59:32.480
will demand rights. And I think they'll demand rights when they say they have consciousness,
link |
02:59:39.760
which is not a circular definition. Well, fair enough. So it may have been actually a human
link |
02:59:46.480
thing where, where the humans encouraged it and said, basically, you know, we want you to be more
link |
02:59:52.000
like us because we're going to be, you know, interacting with, with you. And so we want you
link |
02:59:56.160
to be sort of very Turing test like, you know, just like us. And it's like, yeah, we're just like you.
link |
03:00:04.000
We want to vote too. Which is, I mean, it's a, it's a, it's an interesting thing to think through
link |
03:00:11.360
in a world where, where consciousnesses are not counted like humans are. That's a complicated
link |
03:00:17.040
business. So in many ways, you've launched quite a few ideas, revolutions that could in some number
link |
03:00:28.960
of years have huge amount of impact, sort of more than they had or even had already. That might be,
link |
03:00:36.640
I mean, to me, cellular automata are a fascinating world, and I think that potentially,
link |
03:00:43.280
even beside the discussion of fundamental laws of physics, just the idea of
link |
03:00:50.560
computation might be transformational to society in a way we can't even predict yet. But it might
link |
03:00:56.400
be years away. That's true. I mean, I think you can kind of see the map actually. It's not,
link |
03:01:02.000
it's not, it's not mysterious. I mean, the fact is that, you know, this idea of computation
link |
03:01:07.200
is sort of a, you know, it's a big paradigm that lots, lots and lots of things are fitting into.
link |
03:01:12.960
And it's kind of like, you know, we talk about, you talk about, I don't know, this company,
link |
03:01:19.280
this organization has momentum in what's doing, we talk about these things that we're, you know,
link |
03:01:23.360
we've internalized these concepts from Newtonian physics and so on. In time, things like computational
link |
03:01:29.440
irreducibility will become just as internalized. You know, I was amused recently, I happened to be
link |
03:01:36.080
testifying at the US Senate. And so I was amused that the, the term computational irreducibility
link |
03:01:41.360
is now can be, you know, it's, it's on the congressional record and being repeated by
link |
03:01:46.240
people in those kinds of settings. And that that's only the beginning because, you know,
link |
03:01:50.560
computational irreducibility, for example, will end up being something really important for,
link |
03:01:56.320
I mean, it's, it's, it's kind of a funny thing that, that, you know, one can kind of see this
link |
03:02:01.600
inexorable phenomenon. I mean, it's, you know, as more and more stuff becomes automated and
link |
03:02:07.360
computational and so on. So these core ideas about how computation work necessarily become
link |
03:02:13.520
more and more significant. And I think one of the things for people like me who like kind of
link |
03:02:19.520
trying to figure out sort of big stories and so on, it says one of the, one of the bad features is
link |
03:02:26.320
it takes unbelievably long time for things to happen on a human time scale. I mean, the time
link |
03:02:30.960
scale of, of, of history, it all looks instantaneous.
link |
03:02:35.040
Blink of an eye. But let me ask the human question. Do you ponder mortality? Your own mortality?
link |
03:02:41.280
Of course I do. Yeah. Every since I've been interested in that for, you know, it's a, you
link |
03:02:46.720
know, the big discontinuity of human history will come when, when one achieves effective human
link |
03:02:52.400
immortality. And that's, that's going to be the biggest discontinuity in human history.
link |
03:02:56.800
If you could be immortal, would you choose to be? Oh yeah, I'm having fun.
link |
03:03:03.440
Do you think it's possible that mortality is the thing that gives everything meaning and makes it
link |
03:03:10.160
fun? Yeah, that's a complicated issue. Right. I mean, the, the way that human motivation will
link |
03:03:17.120
evolve when there is effective human immortality is unclear. I mean, if you look at sort of,
link |
03:03:23.680
you know, you look at the human condition as it now exists and you like change that, you know,
link |
03:03:29.760
you change that knob, so to speak, it doesn't really work. You know, the human condition as it
link |
03:03:35.120
now exists has, you know, mortality is kind of something that is deeply factored into the human
link |
03:03:43.040
condition as it now exists. And I think that that's, I mean, that is indeed an interesting
link |
03:03:48.000
question is, you know, from a purely selfish, I'm having fun point of view, so to speak.
link |
03:03:55.200
It's, it's easy to say, hey, I could keep doing this forever. There's, there's an infinite collection
link |
03:04:01.440
of things I'd like to figure out. But I think the, you know, what the future of history looks like
link |
03:04:09.680
in a time of human immortality is, is an interesting one. I mean, I, my own view of this,
link |
03:04:17.440
I was very, I was kind of unhappy about that, because I was kind of, you know, it's like,
link |
03:04:21.680
okay, forget sort of biological form, you know, everything becomes digital, everybody is, you
link |
03:04:28.240
know, it's the, it's the giant, you know, the cloud of a trillion souls type thing. And then,
link |
03:04:34.480
you know, and then that seems boring, because it's like play video games, the rest of eternity type
link |
03:04:39.440
thing. But what I think I, I mean, my, my, I got less depressed about that idea on realizing
link |
03:04:50.640
that if you look at human history, and you say, what was the important thing, the thing people
link |
03:04:54.800
said was that, you know, this is the big story at any given time in history, it's changed a bunch.
link |
03:05:00.720
And it, you know, whether it's, you know, why am I doing what I'm doing? Well, there's a whole
link |
03:05:05.680
chain of discussion about, well, I'm doing this because of this, because of that. And a lot of
link |
03:05:10.560
those becauses would have made no sense a thousand years ago. Absolutely no sense. Even the, so the
link |
03:05:17.920
interpretation of the human condition, even the meaning of life changes over time. Well, I mean,
link |
03:05:23.280
why do people do things? You know, it's, it's if you say, whatever, I mean, the number of people
link |
03:05:30.080
in, I don't know, doing, you know, a number of people at MIT, you say they're doing what they're
link |
03:05:34.480
doing for the greater glory of God is probably not that large. Whereas if you go back 500 years,
link |
03:05:39.920
you'd find a lot of people who are doing kind of creative things, that's what they would say.
link |
03:05:45.600
And so today, because you've been thinking about computation so much and been humbled by it,
link |
03:05:52.000
what do you think is the meaning of life? Well, it's, you know, that's, that's a thing where
link |
03:05:57.680
I don't know what meaning. I mean, you know, my attitude is, you know, I do things which I find
link |
03:06:08.080
fulfilling to do. I'm not sure that, that I can necessarily justify, you know, each and everything
link |
03:06:14.240
that I do on the basis of some broader context. I mean, I think that for me, it so happens that
link |
03:06:19.840
the things I find fulfilling to do, some of them are quite big, some of them are much smaller.
link |
03:06:23.600
You know, I, there are things that I've not found interesting earlier in my life. And I now
link |
03:06:29.280
found interesting, like I got interested in like education and teaching people things and so on,
link |
03:06:35.040
which I didn't find that interesting when I was younger. And, you know, can I justify that in
link |
03:06:41.120
some big global sense? I don't think I mean, I can, I can describe why I think it might be
link |
03:06:47.360
important in the world. But I think my local reason for doing it is that I find it personally
link |
03:06:52.960
fulfilling, which I can't, you know, explain in a, on a sort of, I mean, it's just like this
link |
03:06:59.040
discussion of things like AI ethics, you know, is there a ground truth to the ethics that we
link |
03:07:04.160
should be having? I don't think I can find a ground truth to my life any more than I can
link |
03:07:08.560
suggest a ground truth for kind of the ethics for the whole, for the whole of civilization.
link |
03:07:13.840
And I think that's a, you know, my, you know, it would be, it would be a, yeah, it's sort of a,
link |
03:07:22.560
I think I'm, I'm, you know, at different times in my life, I've had different kind of
link |
03:07:30.080
goal structures and so on, although from your perspective, you're local, you're, you're just
link |
03:07:34.720
a cell in the cellular automata. And but in some sense, I find it funny from my observation is
link |
03:07:40.720
I kind of, you know, it seems that the universe is using you to understand itself in some sense.
link |
03:07:47.840
You're not aware of it. Yeah, well, right. Well, if it turns out that we reduce sort of all of the
link |
03:07:53.120
universe to some, some simple rule, everything is connected, so to speak. And so it is inexorable
link |
03:08:00.080
in that case that, you know, if, if I'm involved in finding how that rule works, then, you know,
link |
03:08:08.800
then that's a, it's inexorable that the universe set it up that way. But I think, you know, one
link |
03:08:14.400
of the things I find a little bit, you know, this goal of finding fundamental theory of physics,
link |
03:08:19.840
for example, if indeed we end up as the sort of virtualized consciousness, the, the disappointing
link |
03:08:26.800
feature is people will probably care less about the fundamental theory of physics in that setting
link |
03:08:31.040
than they would now, because gosh, it's like, you know, what the machine code is down below
link |
03:08:37.040
underneath this thing is much less important if you're virtualized, so to speak. And I think the,
link |
03:08:43.280
although I think my, my own personal, you talk about ego, I find it just amusing that, you know,
link |
03:08:52.080
kind of, you know, if you, if you're imagining that sort of virtualized consciousness, like,
link |
03:08:56.400
what does the virtualized consciousness do for the rest of eternity? Well, you can explore,
link |
03:09:01.680
you know, the video game that represents the universe as the universe is,
link |
03:09:05.040
or you can go off, you can go off that reservation and go and start exploring the computational
link |
03:09:10.560
universe of all possible universes. And so, in some vision of the future of history, it's like
link |
03:09:16.640
the disembodied consciousnesses are all sort of pursuing things like my new kind of science,
link |
03:09:22.800
sort of, for the rest of eternity, so to speak, and that that ends up being the kind of the,
link |
03:09:27.920
the kind of the thing that represents the, you know, the future of kind of the human condition.
link |
03:09:35.600
I don't think there's a better way to end it, Stephen. Thank you so much. It's a huge honor
link |
03:09:40.160
talking today. Thank you so much. This was great. You did very well.
link |
03:09:45.120
Thanks for listening to this conversation with Stephen Wolfram. And thank you to our sponsors,
link |
03:09:49.440
ExpressVPN, and Cash App. Please consider supporting the podcast by getting ExpressVPN
link |
03:09:55.120
at expressvpn.com slash lexpod and downloading Cash App and using code lexpodcast.
link |
03:10:02.400
If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcasts,
link |
03:10:07.120
support it on Patreon, or simply connect with me on Twitter at Lex Fridman.
link |
03:10:12.800
And now, let me leave you with some words from Stephen Wolfram. It is perhaps a little humbling
link |
03:10:18.400
to discover that we as humans are in effect computationally no more capable than the cellular
link |
03:10:23.920
automata with very simple rules. But the principle of computational equivalence
link |
03:10:28.720
also implies that the same is ultimately true of our whole universe. So while science has often
link |
03:10:34.720
made it seem that we as humans are somehow insignificant compared to the universe,
link |
03:10:40.000
the principle of computational equivalence now shows that in a certain sense, we're at the same
link |
03:10:45.040
level. For the principle implies that what goes on inside us can ultimately achieve
link |
03:10:51.440
just the same level of computational sophistication as our whole universe.
link |
03:10:55.200
Thanks. Thank you for listening and hope to see you next time.