
Michael Levin: Biology, Life, Aliens, Evolution, Embryogenesis & Xenobots | Lex Fridman Podcast #325



link |
00:00:00.000
It turns out that if you train a planarian and then cut its head off, the tail will regenerate a
link |
00:00:04.640
brand new brain that still remembers the original information. I think planaria hold the answer to
link |
00:00:09.760
pretty much every deep question of life. For one thing, they're similar to our ancestors. So they
link |
00:00:14.800
have bilateral symmetry, they have a true brain, they're not like earthworms, they're, you know,
link |
00:00:17.600
a much more advanced life form. They have lots of different internal organs, but they're
link |
00:00:20.640
these little creatures, about, you know, one to two centimeters in size.
link |
00:00:24.560
And they have a head and a tail. And the first thing is planaria are immortal. So they do not
link |
00:00:30.640
age. There's no such thing as an old planarian. So that right there tells you that these theories
link |
00:00:34.320
of thermodynamic limitations on lifespan are wrong. It's not true that, well, over time everything
link |
00:00:40.080
degrades. No, planaria can keep it going for probably, you know, how long have they been
link |
00:00:44.560
around? 400 million years, right? So the planaria in our lab
link |
00:00:48.640
are actually in physical continuity with planaria that were here 400 million years ago.
link |
00:00:54.880
The following is a conversation with Michael Levin, one of the most fascinating and brilliant
link |
00:01:00.080
biologists I've ever talked to. He and his lab at Tufts University work on novel ways to understand
link |
00:01:07.120
and control complex pattern formation in biological systems. Andrej Karpathy, a world
link |
00:01:12.960
class AI researcher, is the person who first introduced me to Michael Levin's work. I bring
link |
00:01:18.880
this up because these two people make me realize that biology has a lot to teach us about AI,
link |
00:01:25.680
and AI might have a lot to teach us about biology. This is the Lex Fridman Podcast.
link |
00:01:32.000
To support it, please check out our sponsors in the description. And now, dear friends,
link |
00:01:37.440
here's Michael Levin. Embryogenesis is the process of building the human body from a single cell. I
link |
00:01:44.480
think it's one of the most incredible things that exists on Earth. So how does
link |
00:01:50.160
this process work? Yeah, it is an incredible process. I think it's maybe the most magical
link |
00:01:56.080
process there is. And I think one of the most fundamentally interesting things about it is that
link |
00:02:01.520
it shows that each of us takes the journey from so-called just physics to mind, right? Because we
link |
00:02:07.120
all start life as a single quiescent, unfertilized oocyte, and it's basically a bag of chemicals,
link |
00:02:12.880
and you look at that and you say, okay, this is chemistry and physics. And then nine months and
link |
00:02:16.720
some years later, you have an organism with high level cognition and preferences and an inner life
link |
00:02:22.320
and so on. And what embryogenesis tells us is that that transformation from physics to mind is
link |
00:02:27.520
gradual. It's smooth. There is no special place where, you know, a lightning bolt says, boom,
link |
00:02:32.560
now you've gone from physics to true cognition. That doesn't happen. And so we can see in this
link |
00:02:37.440
process that the whole mystery, you know, the biggest mystery of the universe, basically,
link |
00:02:41.440
how you get mind from matter. From just physics, in quotes. Yeah. So where does the magic enter the
link |
00:02:47.680
thing? How do we get from information encoded in DNA to physical reality made out of that
link |
00:02:54.480
information? So one of the things that I think is really important if we're going to bring in DNA
link |
00:02:59.280
into this picture is to think about the fact that what DNA encodes is the hardware of life. DNA
link |
00:03:05.520
contains the instructions for the kind of micro level hardware that every cell gets to play with.
link |
00:03:09.760
So all the proteins, all the signaling factors, the ion channels, all the cool little pieces of
link |
00:03:14.160
hardware that cells have, that's what's in the DNA. The rest of it is in so-called generic laws.
link |
00:03:20.640
And these are laws of mathematics. These are laws of computation. These are laws of physics,
link |
00:03:25.920
of all kinds of interesting things that are not directly in the DNA. And that process, you know,
link |
00:03:32.000
I think the reason I always put just physics in quotes is because I don't think there is such a
link |
00:03:36.800
thing as just physics. I think that thinking about these things in binary categories, like this is
link |
00:03:41.520
physics, this is true cognition, this is 'as if', it's only faking these kinds of things. I think
link |
00:03:45.840
that's what gets us in trouble. I think that we really have to understand that it's a continuum
link |
00:03:49.760
and we have to work up the scaling, the laws of scaling. And we can certainly talk about that.
link |
00:03:53.600
There's a lot of really interesting thoughts to be had there.
link |
00:03:56.640
So the physics is deeply integrated with the information. So the DNA doesn't exist on its own.
link |
00:04:03.200
The DNA is integrated, in some sense, in response to the laws of physics at every scale.
link |
00:04:10.480
The laws of the environment it exists in.
link |
00:04:14.080
Yeah, the environment and also the laws of the universe. I mean, the thing about the DNA is that
link |
00:04:18.960
once evolution discovers a certain kind of machine, if the physical implementation is
link |
00:04:25.440
appropriate, it's sort of, and this is hard to talk about because we don't have a good vocabulary
link |
00:04:29.920
for this yet, but it's kind of a Platonic notion that if the machine is there, it pulls down
link |
00:04:36.560
interesting things that you do not have to evolve from scratch because the laws of physics give it
link |
00:04:42.960
to you for free. So just as a really stupid example, if you're trying to evolve a particular
link |
00:04:47.200
triangle, you can evolve the first angle and you evolve the second angle, but you don't need to
link |
00:04:50.720
evolve the third. You know what it is already. Now, why do you know? That's a gift for free
link |
00:04:54.480
from geometry in a particular space. You know what that angle has to be. And if you evolve
link |
00:04:58.240
an ion channel, which is, ion channels are basically transistors, right? They're voltage
link |
00:05:01.920
gated current conductances. If you evolve that ion channel, you immediately get to use things
link |
00:05:06.720
like truth tables. You get logic functions. You don't have to evolve the logic function.
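As an aside, the point about getting logic "for free" can be sketched in code. This is a toy illustration, not a biophysical model: the threshold value and the function names are invented for the example, and the only "evolved" piece is a single NAND-like element; everything else falls out of composition.

```python
# Toy illustration (not a biophysical model): treat a voltage-gated ion
# channel as a threshold element behaving like a NAND gate, then show that
# the other logic functions come "for free" by composition, with no new
# hardware to evolve.

THRESHOLD = -40.0  # illustrative gating voltage in mV (invented for the example)

def depolarized(v_mV: float) -> bool:
    """Is this input voltage above the channel's gating threshold?"""
    return v_mV > THRESHOLD

def nand(a: bool, b: bool) -> bool:
    """The one evolved 'machine': conducts unless both inputs are active."""
    return not (a and b)

# Gifts of composition, nothing extra needed in the 'genome':
def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

# Two inputs, one depolarized and one not:
print(and_(depolarized(-20.0), depolarized(-60.0)))  # → False: only one input active
```

Because NAND is functionally complete, any truth table is reachable by wiring copies of that one element together, which is the sense in which evolution only has to find the first machine.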
link |
00:05:10.160
You don't have to evolve a truth table. It doesn't have to be in the DNA. You get it for free,
link |
00:05:14.160
right? And the fact that if you have NAND gates, you can build anything you want, you get that for
link |
00:05:17.360
free. All you have to evolve is that first step, that first little machine that enables you to
link |
00:05:22.720
couple to those laws. And there's laws of adhesion and many other things. And this is all that
link |
00:05:27.680
interplay between the hardware that's set up by the genetics and the software that's made, right?
link |
00:05:33.600
The physiological software that basically does all the computation and the cognition and everything
link |
00:05:38.240
else is a real interplay between the information and the DNA and the laws of physics of computation
link |
00:05:43.920
and so on. So is it fair to say, just like this idea that the laws of mathematics are discovered,
link |
00:05:50.640
they're latent within the fabric of the universe in that same way the laws of biology are kind of
link |
00:05:55.520
discovered? Yeah, I think that's absolutely, and it's probably not a popular view, but I think
link |
00:05:59.760
that's right on the money. Yeah. Well, I think that's a really deep idea. Then embryogenesis
link |
00:06:05.520
is the process of revealing, of embodying, of manifesting these laws. You're not building the
link |
00:06:16.000
laws. You're just creating the capacity to reveal. Yes. I think, again, not the standard view of
link |
00:06:23.520
molecular biology by any means, but I think that's right on the money. I'll give you a simple example.
link |
00:06:27.760
Some of our latest work with these xenobots, right? So what we've done is to take some skin
link |
00:06:31.680
cells off of an early frog embryo and basically ask about their plasticity. If we give you a
link |
00:06:36.080
chance to sort of reboot your multicellularity in a different context, what would you do?
link |
00:06:40.400
Because what you might assume by... The thing about embryogenesis is that it's super reliable,
link |
00:06:45.120
right? It's very robust. And that really obscures some of its most interesting features. We get
link |
00:06:50.640
used to it. We get used to the fact that acorns make oak trees and frog eggs make frogs. And we
link |
00:06:54.800
say, well, what else is it going to make? That's what it makes. That's a standard story.
link |
00:06:57.920
But the reality is... And so you look at these skin cells and you say, well, what do they know
link |
00:07:03.600
how to do? Well, they know how to be a passive, boring, two-dimensional outer layer, keeping the
link |
00:07:07.840
bacteria from getting into the embryo. That's what they know how to do. Well, it turns out that if
link |
00:07:11.200
you take these skin cells and you remove the rest of the embryo, so you remove all of the rest of
link |
00:07:17.040
the cells and you say, well, you're by yourself now, what do you want to do? So what they do is
link |
00:07:20.960
they form this little multicellular creature that runs around the dish. They have all kinds of
link |
00:07:26.480
incredible capacities. They navigate through mazes. They have various behaviors that they do
link |
00:07:30.960
both independently and together. Basically, they implement von Neumann's dream of self replication,
link |
00:07:38.960
because if you sprinkle a bunch of loose cells into the dish, what they do is they run around,
link |
00:07:42.560
they collect those cells into little piles. They sort of mush them together until those little
link |
00:07:46.800
piles become the next generation of xenobots. So you've got this machine that builds copies of
link |
00:07:50.960
itself from loose material in its environment. None of these are things that you would have expected
link |
00:07:56.720
from the frog genome. In fact, the genome is wild type. There's nothing wrong with their genetics.
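The kinematic self-replication loop described here (sweep loose cells into piles; big-enough piles mature into the next generation) can be sketched as a toy simulation. All names and numbers below are illustrative stand-ins, not measured frog-cell parameters.

```python
# Toy sketch of kinematic self-replication: mobile "bots" sweep loose cells
# into a pile, and any pile reaching a critical size matures into a bot of
# the next generation. Parameters are purely illustrative.

def step(bots, loose_cells, pile, sweep_rate=10, pile_to_bot=50):
    gathered = min(loose_cells, bots * sweep_rate)  # each bot sweeps some cells
    loose_cells -= gathered
    pile += gathered
    new_bots, pile = divmod(pile, pile_to_bot)      # a big-enough pile matures
    return bots + new_bots, loose_cells, pile

bots, loose, pile = 1, 500, 0
for _ in range(5):
    bots, loose, pile = step(bots, loose, pile)
print(bots)  # → 2: a copy built purely from loose material in the environment
```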
link |
00:08:01.280
Nothing has been added, no nanomaterials, no genomic editing, nothing. And so what we have
link |
00:08:06.320
done there is engineering by subtraction. What you've done is you've removed the other cells
link |
00:08:11.360
that normally basically bully these cells into being skin cells. And you find out that what they
link |
00:08:15.920
really want to do is to be this; their default behavior is to be a xenobot. But in vivo, in the
link |
00:08:21.680
embryo, they get told to be skin by these other cell types. And so now here comes this really
link |
00:08:28.640
interesting question that you just posed. When you ask where does the form of the tadpole and
link |
00:08:33.760
the frog come from, the standard answer is, well, it's selection. So over millions of years,
link |
00:08:39.920
it's been shaped to produce the specific body that's fit for froggy environments.
link |
00:08:44.720
Where does the shape of the xenobot come from? There's never been any xenobots. There's never
link |
00:08:48.240
been selection to be a good xenobot. These cells find themselves in the new environment.
link |
00:08:51.920
In 48 hours, they figure out how to be an entirely different protoorganism with new capacities like
link |
00:08:57.920
kinematic self replication. That's not how frogs or tadpoles replicate. We've made it impossible
link |
00:09:02.000
for them to replicate their normal way. Within a couple of days, these guys find a new way of
link |
00:09:05.600
doing it that's not done anywhere else in the biosphere. Well, actually, let's step back and
link |
00:09:09.200
define, what are xenobots? So a xenobot is a self assembling little protoorganism. It's also a
link |
00:09:16.320
biological robot. Those things are not distinct. It's a member of both classes. How much is it
link |
00:09:22.000
biology? How much is it robot? At this point, most of it is biology because what we're doing is
link |
00:09:28.160
we're discovering natural behaviors of the cells and also of the cell collectives. Now, one of the
link |
00:09:35.120
really important parts of this was that we're working together with Josh Bongard's group at
link |
00:09:39.440
the University of Vermont. They're computer scientists, they do AI, and they've basically been able to
link |
00:09:45.040
use a simulated evolution approach to ask, how can we manipulate these cells, give them signals,
link |
00:09:51.440
not rewire their DNA, so not hardware, but experiences, signals? So can we remove some cells?
link |
00:09:56.080
Can we add some cells? Can we poke them in different ways to get them to do other things?
link |
00:09:59.920
So, looking to the future, and this is unpublished work, but
link |
00:10:04.400
we're doing all sorts of interesting ways to reprogram them to new behaviors. But before you
link |
00:10:08.720
can start to reprogram these things, you have to understand what their innate capacities are.
link |
00:10:13.040
Okay, so that means engineering, programming, you're engineering them in the future. And in
link |
00:10:19.520
some sense, the definition of a robot is something you in part engineer versus evolve. I mean,
link |
00:10:28.400
it's such a fuzzy definition anyway, in some sense, many of the organisms within our body
link |
00:10:33.280
are kinds of robots. And I think 'robot' is a weird line because we tend to see robots
link |
00:10:40.640
as the other. I think there will be a time in the future when there's going to be something akin to
link |
00:10:45.760
the civil rights movement for robots, but we'll talk about that later perhaps. Anyway, so how do
link |
00:10:52.800
you, can we just linger on it? How do you build a xenobot? What are we talking about here? From
link |
00:11:00.560
when does it start and how does it become the glorious xenobot?
link |
00:11:06.640
Yeah, so just to take one step back, one of the things that a lot of people get stuck on is they
link |
00:11:12.080
say, well, you know, engineering requires new DNA circuits or it requires new nanomaterials,
link |
00:11:19.120
you know. But the thing is, we are now moving from old-school engineering, which used passive
link |
00:11:24.560
materials, right? Things, you know, wood, metal, things like this, where basically the only
link |
00:11:28.480
thing you could depend on is that they were going to keep their shape. That's it. They don't do
link |
00:11:31.280
anything else. It's on you as an engineer to make them do everything they're going to do.
link |
00:11:35.120
And then there were active materials and now computational materials. This is a whole new era.
link |
00:11:39.040
These are agential materials. You're now collaborating with your substrate because your
link |
00:11:43.600
material has an agenda. These cells have, you know, billions of years of evolution. They have goals.
link |
00:11:51.280
They have preferences. They're not just going to sit where you put them. That's hilarious that you
link |
00:11:54.160
have to talk your material into keeping its shape. That's it. That is exactly right. That is exactly
link |
00:11:58.880
right. Stay there. It's like getting a bunch of cats or something and trying to organize the shape
link |
00:12:04.400
out of them. It's funny. We're on the same page here because in a paper that has currently
link |
00:12:08.640
just been accepted at Nature Bioengineering, one of the figures I have is building a tower
link |
00:12:12.800
out of Legos versus dogs, right? So think about the difference, right? If you build out of Legos,
link |
00:12:17.360
you have full control over where it's going to go. But if somebody knocks it over, it's game over.
link |
00:12:22.800
With the dogs, you cannot just come and stack them. They're not going to stay that way. But
link |
00:12:26.240
the good news is that if you train them, then when somebody knocks it over, they'll get right back
link |
00:12:29.680
up. So it's all right. So as an engineer, what you really want to know is what can you depend
link |
00:12:33.760
on this thing to do, right? That's really, you know, a lot of people have definitions of robots
link |
00:12:37.440
as far as what they're made of or how they got here, you know, designed versus evolved, whatever.
link |
00:12:41.360
I don't think any of that is useful. I think as an engineer, what you want to know is
link |
00:12:45.200
how much can I depend on this thing to do when I'm not around to micromanage it?
link |
00:12:50.960
What level of dependency can I give this thing? How much agency does it have?
link |
00:12:54.400
Which then tells you what techniques do you use? So do you use micromanagement,
link |
00:12:57.360
like you put everything where it goes? Do you train it? Do you give it signals? Do you try
link |
00:13:01.200
to convince it to do things, right? How much, you know, how intelligent is your substrate?
link |
00:13:04.560
And so now we're moving into this area where you're working with
link |
00:13:08.480
agential materials. That's a collaboration. That's not old style.
link |
00:13:12.560
What's the word you're using? Agential?
link |
00:13:14.320
Agential.
link |
00:13:14.880
Yeah.
link |
00:13:15.040
What's that mean?
link |
00:13:15.680
Agency. It comes from the word agency. So basically the material has agency, meaning that
link |
00:13:20.160
it has some level, obviously not human level, but some level of preferences, goals,
link |
00:13:26.000
memories, ability to remember things, to compute into the future, meaning anticipate,
link |
00:13:30.640
you know. When you're working with cells, they have all of that to various degrees.
link |
00:13:34.800
Is that empowering or limiting, having material with a mind of its own, literally?
link |
00:13:39.920
I think it's both, right? So it raises difficulties because it means that
link |
00:13:43.600
if you're using the old mindset, which is a linear kind of extrapolation of what's going
link |
00:13:48.880
to happen, you're going to be surprised and shocked all the time because biology does not
link |
00:13:54.320
do what we linearly expect materials to do. On the other hand, it's massively liberating. And
link |
00:13:59.200
so in the following way, I've argued that advances in regenerative medicine require us to take
link |
00:14:04.240
advantage of this because what it means is that you can get the material to do things that you
link |
00:14:09.040
don't know how to micromanage. So just as a simple example, right? If you had a rat
link |
00:14:13.840
and you wanted this rat to do a circus trick, put a ball in the little hoop, you can do it the
link |
00:14:19.120
micromanagement way, which is try to control every neuron and try to play the thing like a puppet,
link |
00:14:22.960
right? And maybe someday that'll be possible, maybe, or you can train the rat. And this is
link |
00:14:26.960
why humanity for thousands of years before we knew any neuroscience, we had no idea what's
link |
00:14:31.040
behind, what's between the ears of any animal. We were able to train these animals because once you
link |
00:14:35.040
recognize the level of agency of a certain system, you can use appropriate techniques. If you know
link |
00:14:40.480
the currency of motivation, reward and punishment, you know how smart it is, you know what kinds of
link |
00:14:44.160
things it likes to do. You are searching a much smoother, much nicer problem space than
link |
00:14:50.080
if you try to micromanage the thing. And in regenerative medicine, when you're trying to get,
link |
00:14:54.080
let's say an arm to grow back or an eye to repair a birth defect or something,
link |
00:14:57.920
do you really want to be controlling tens of thousands of genes at each point to try to
link |
00:15:02.960
micromanage it? Or do you want to find the high level modular controls that say,
link |
00:15:07.760
build an arm here. You already know how to build an arm. You did it before, do it again.
link |
00:15:11.360
So I think it's both: it's difficult, and it challenges us to develop new ways of
link |
00:15:15.920
engineering and it's hugely empowering. Okay. So how do you do, I mean, maybe sticking with
link |
00:15:21.760
the metaphor of dogs and cats, I presume you have to figure out how to find the dogs and dispose of
link |
00:15:31.120
the cats. Because, you know, herding cats is an issue. So you may be able to
link |
00:15:38.400
train dogs. I suspect you will not be able to train cats. Or if you do, you're never going to
link |
00:15:44.800
be able to trust them. So is there a way to figure out which material is amenable to herding? Is it in
link |
00:15:53.040
the lab or is it in simulation? Right now it's largely in the lab because our simulations
link |
00:15:59.360
do not capture yet the most interesting and powerful things about biology. So the simulation
link |
00:16:04.560
does... What we're pretty good at simulating are feed-forward emergent types of things,
link |
00:16:10.480
right? So cellular automata, if you have simple rules and you sort of roll those forward for
link |
00:16:15.120
every, every agent or every cell in the simulation, then complex things happen, you know, ant colony
link |
00:16:19.360
algorithms, things like that. We're good at that, and that's fine. The difficulty
link |
00:16:23.600
with all of that is that it's incredibly hard to reverse. So this is a really hard inverse problem,
link |
00:16:28.400
right? If you look at a bunch of termites and they make a, you know, a thing with a single chimney
link |
00:16:31.520
and you say, well, I like it, but I'd like two chimneys. How do you change the rules of behavior
link |
00:16:36.080
for the termites so that they make two chimneys, right? Or if you say, here are a bunch of cells that are
link |
00:16:40.320
creating this kind of organism. I don't think that's optimal. I'd like to repair that birth
link |
00:16:44.720
defect. How do you control all the individual low-level rules, right? All the protein
link |
00:16:49.040
interactions and everything else, rolling it back from the anatomy that you want to the low level
link |
00:16:53.520
hardware rules is in general intractable. It's an inverse problem that's generally not
link |
00:16:57.360
solvable. So right now it's mostly in the lab because what we need to do is we need to understand
link |
00:17:02.960
how biology uses top-down controls. So the idea is not bottom-up emergence, but the idea of
link |
00:17:09.520
things like goal-directed test-operate-test-exit kinds of loops, where it's basically an
link |
00:17:14.560
error-minimization function over a novel space: not a space of gene expression, but, for example,
link |
00:17:19.120
a space of anatomy. So just as a simple example, if you have a salamander and it's got
link |
00:17:23.760
an arm, you can amputate that arm anywhere along the length. It will grow exactly
link |
00:17:29.040
what's needed and then it stops. The most amazing thing about regeneration is that it stops;
link |
00:17:32.880
it knows when to stop. When does it stop? It stops when a correct salamander arm has been completed.
link |
00:17:37.280
So that tells you, that's right, that's a means-ends kind of analysis, where it has to
link |
00:17:42.880
know what the correct limb is supposed to look like, right? So it has a way to ascertain the
link |
00:17:47.280
current shape. It has a way to measure the delta from what shape it's supposed to be. And it
link |
00:17:51.360
will keep taking actions, meaning remodeling and growing and everything else until that's complete.
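The homeostatic loop just described, ascertain the current shape, measure the delta from the remembered target, keep acting until the error is gone, then stop, can be sketched as a toy test-operate-exit cycle. The one-dimensional "limb length" and the growth step are invented stand-ins for real anatomy.

```python
# Toy sketch of the error-minimization loop: the collective "remembers" a
# target limb length, measures the delta from it, and keeps remodeling
# until the delta is zero, at which point growth stops on its own.

TARGET_LENGTH = 10  # the remembered "correct salamander arm" (illustrative)

def regenerate(current_length, target=TARGET_LENGTH):
    steps = 0
    while current_length != target:                             # test: measure the delta
        current_length += 1 if current_length < target else -1  # operate: grow/remodel
        steps += 1
    return current_length, steps                                # exit: error is zero

# Amputate anywhere along the length; it grows exactly what's needed, then stops:
print(regenerate(3))  # → (10, 7)
print(regenerate(8))  # → (10, 2)
```

The point of the sketch is that the stopping condition lives in the stored target, not in the individual growth steps, which is why messing with the stored "memory" of the correct shape would change what gets built.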
link |
00:17:55.600
So once you know that, and we've taken advantage of this in the lab to do some really wild
link |
00:17:59.200
things with both planaria and frog embryos and so on, you can start
link |
00:18:04.400
playing with that homeostatic cycle. You can ask, for example, well, how does it remember
link |
00:18:08.880
what the correct shape is? And can we mess with that memory? Can we give it a false memory of
link |
00:18:12.240
what the shape should be and let the cells build something else? Or can we mess with the measurement
link |
00:18:16.160
apparatus, right? So it gives you those kinds of... So the idea is to
link |
00:18:21.680
basically appropriate a lot of the approaches and concepts from cognitive neuroscience and
link |
00:18:28.240
behavioral science into things that previously were taken to be dumb materials. And, you know,
link |
00:18:33.600
you get yelled at in class for being anthropomorphic if you said, well, my cells
link |
00:18:37.440
want to do this and my cells want to do that. And I think that's a major mistake
link |
00:18:41.280
that leaves a ton of capabilities on the table. So thinking about biological systems as things that
link |
00:18:45.920
have memory, have almost something like cognitive ability, but I mean, how incredible is it,
link |
00:18:56.560
you know, that the salamander arm is being rebuilt, not with a dictator. It's kind of like
link |
00:19:03.600
the cellular automata system. All the individual workers are doing their own thing. So where's that
link |
00:19:10.320
top-down signal that does the control coming from? Like, how can you find it? Like, why does it stop
link |
00:19:16.080
growing? How does it know the shape? How does it have memory of the shape? And how does it tell
link |
00:19:21.120
everybody to be like, whoa, whoa, whoa, slow down, we're done. So the first thing to think about,
link |
00:19:26.080
I think, is that there are no examples anywhere of a central dictator, because in this kind of
link |
00:19:33.680
science, everything is made of parts. And even though we feel like a unified central
link |
00:19:40.480
sort of intelligence and kind of point of cognition, we are a bag of neurons, right?
link |
00:19:45.840
All intelligence is collective intelligence. This is important to kind of
link |
00:19:50.720
think about, because a lot of people think, okay, there's real intelligence, like me,
link |
00:19:54.560
and then there's collective intelligence, which is ants and flocks of birds and termites and
link |
00:19:59.280
things like that. And maybe it's appropriate to think of them as an individual, and maybe it's
link |
00:20:05.520
not, and a lot of people are skeptical about that and so on. But you've got to realize that
link |
00:20:09.520
we are not, there's no such thing as this like indivisible diamond of intelligence that's like
link |
00:20:13.760
this one central thing that's not made of parts. We are all made of parts. And so if you believe,
link |
00:20:19.600
which I think is hard to get around, that we in fact have a centralized set of goals and
link |
00:20:25.520
preferences and we plan and we do things and so on, you are already committed to the fact that
link |
00:20:30.240
a collection of cells is able to do this, because we are a collection of cells. There's no getting
link |
00:20:34.000
around that. In our case, what we do is we navigate the three dimensional world and we
link |
00:20:37.920
have behavior. This is blowing my mind right now, because we are just a collection of cells.
link |
00:20:41.840
Oh yeah. So when I'm moving this arm, I feel like I'm the central dictator of that action,
link |
00:20:50.560
but there's a lot of stuff going on. All the cells here are collaborating in some interesting way.
link |
00:20:57.840
They're getting signals from the central nervous system.
link |
00:21:00.880
Well, even the central nervous system is misleadingly named because it isn't really
link |
00:21:05.600
central. Again, it's just a bunch of cells. I mean, all of them, right? There are no,
link |
00:21:10.800
there are no singular indivisible intelligences anywhere. We are all, every example that we've
link |
00:21:16.240
ever seen is a collective of something. It's just that we're used to it. We're used to that. We're
link |
00:21:21.040
used to, okay, this thing is kind of a single thing, but it's really not. You zoom in, you know
link |
00:21:24.080
what you see. You see a bunch of cells running around. Is there some unifying, I mean, we're
link |
00:21:29.360
jumping around, but is that something that you look at, the bioelectrical signal versus the
link |
00:21:36.000
biochemical, the chemistry, the electricity, maybe the life is in that versus the cells.
link |
00:21:47.680
There's an orchestra playing and the resulting music is the dictator.
link |
00:21:57.120
That's not bad. That's Denis Noble's kind of view of things. He has two really good books
link |
00:22:02.560
where he talks about this musical analogy, right? So I think that's, I like it. I like it.
link |
00:22:07.360
Is it wrong though?
link |
00:22:08.640
No, I don't think it's wrong. I think the important
link |
00:22:13.600
thing about it is that we have to come to grips with the fact that a true proper cognitive
link |
00:22:23.040
intelligence can still be made of parts. In fact, it has to be. And I think
link |
00:22:27.920
it's a real shame, but I see this all the time. When you have a collective like this, whether it
link |
00:22:32.800
be a group of robots or a collection of cells or neurons or whatever, as soon as we gain some
link |
00:22:40.880
insight into how it works, meaning that, oh, I see, in order to take this action, here's the
link |
00:22:45.360
information that got processed via this chemical mechanism or whatever. Immediately people say,
link |
00:22:50.320
oh, well then that's not real cognition. That's just physics. I think this is fundamentally
link |
00:22:54.880
flawed because if you zoom into anything, what are you going to see? Of course you're just going to
link |
00:22:58.720
see physics. What else could be underneath, right? It's not going to be fairy dust. It's going to be
link |
00:23:01.680
physics and chemistry, but that doesn't take away from the magic of the fact that there are certain
link |
00:23:05.920
ways to arrange that physics and chemistry and in particular the bioelectricity, which I like a lot,
link |
00:23:11.440
to give you an emergent collective with goals and preferences and memories and anticipations
link |
00:23:18.640
that do not belong to any of the subunits. So I think what we're getting into here,
link |
00:23:22.160
and we can talk about how this happens during embryogenesis and so on, what we're getting into
link |
00:23:26.640
is the origin of a self with a capital S. So we ourselves, there are many other kinds of
link |
00:23:33.360
selves, and we can tell some really interesting stories about where selves come from and how they
link |
00:23:37.120
become unified. Yeah, is this the first, or at least humans tend to think that this is the
link |
00:23:42.880
level of which the self with a capital S is first born, and we really don't want to see
link |
00:23:49.440
human civilization or Earth itself as one living organism. Yeah, that's very uncomfortable to us.
link |
00:23:54.720
It is, yeah. But is, yeah, where's the self born? We have to grow up past that. So what I like to do
link |
00:24:01.200
is, I'll tell you two quick stories about that. I like to roll backwards. So if
link |
00:24:06.560
you start and you say, okay, here's a paramecium, and you see it, you know, it's a single cell
link |
00:24:10.560
organism, you see it doing various things, and people will say, okay, I'm sure there's some
link |
00:24:14.320
chemical story to be told about how it's doing it.
link |
00:24:18.160
So that's not true cognition, right? And people will argue about that. I like to work it backwards.
link |
00:24:23.360
I say, let's agree that you and I, as we sit here, are examples of true cognition, if anything,
link |
00:24:28.880
if there's anything that's true cognition, we are examples of it. Now let's just roll back
link |
00:24:32.960
slowly, right? So you roll back to the time when you were a small child, doing whatever,
link |
00:24:36.800
and then just sort of day by day, you roll back, and eventually you become more or less that
link |
00:24:41.280
paramecium, and then you go even below that, right, as an unfertilized oocyte. So
link |
00:24:46.560
To my knowledge, no one has come up with any convincing discrete step at which
link |
00:24:53.840
my cognitive powers disappear, right? The biology just doesn't offer any specific
link |
00:24:59.040
step. It's incredibly smooth and slow and continuous. And so I think this idea that it just
link |
00:25:04.000
sort of magically shows up at one point, and then, you know, humans have true selves that don't exist
link |
00:25:10.080
elsewhere, I think it runs against everything we know about evolution, everything we know about
link |
00:25:13.840
developmental biology, these are all slow continua. And the other really important story I
link |
00:25:18.400
want to tell is where embryos come from. So think about this for a second. Amniote embryos, so this
link |
00:25:23.280
is mammals and birds and so on. Imagine a flat disk of cells, so there's
link |
00:25:29.200
maybe 50,000 cells. So when you get a fertilized egg, let's say you buy a
link |
00:25:35.120
fertilized egg from a farm, right? That egg will have about 50,000 cells in a flat disk, it looks
link |
00:25:42.560
like a tiny little frisbee. And in that flat disk, what'll happen is that one set
link |
00:25:50.080
of cells will become special, and it will tell all the other cells, I'm going to be the head,
link |
00:25:56.000
you guys don't be the head. And so through symmetry breaking and amplification, you get one
link |
00:26:00.160
embryo, there's some neural tissue and some other stuff forms. Now, you say, okay, I had one egg
link |
00:26:06.320
and one embryo, and there you go, what else could it be? Well, the reality is, and I did
link |
00:26:10.880
all of this as a grad student, if you take a little needle, and you make a scratch in that
link |
00:26:16.400
blastoderm in that disk, such that the cells can't talk to each other for a while, it heals up, but
link |
00:26:20.720
for a while, they can't talk to each other. What will happen is that both regions will decide that
link |
00:26:26.240
they can be the embryo, and there will be two of them. And then when they heal up, they become
link |
00:26:29.120
conjoined twins, and you can make two, you can make three, you can make lots. So the question of how
link |
00:26:33.920
many selves are in there cannot be answered until it's actually played all the way through. It isn't
link |
00:26:40.720
necessarily that there's just one, there can be many. So what you have is you have this medium,
link |
00:26:44.320
this undifferentiated medium, and I'm sure there's a psychological version of this somewhere
link |
00:26:49.280
that I don't know the proper terminology for. But you have this, like, ocean of
link |
00:26:53.280
potentiality, you have these thousands of cells, and some number of individuals are going to be formed
link |
00:26:58.960
out of it, usually one, sometimes zero, sometimes several. And they form out of these cells,
link |
00:27:05.040
because a region of these cells organizes into a collective that will have goals, goals that
link |
00:27:10.880
individual cells don't have, for example, make a limb, make an eye, how many eyes? Well, exactly
link |
00:27:15.680
two. So individual cells don't know what an eye is, they don't know how many eyes you're supposed
link |
00:27:19.120
to have, but the collective does. The collective has goals and memories and anticipations that the
link |
00:27:23.360
individual cells don't. And the establishment of that boundary, with its own
link |
00:27:27.440
ability to maintain and pursue certain goals. That's the origin of selfhood.
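Levin's blastoderm story, where one patch of cells wins the "I'm going to be the head" argument per communicating region, and a scratch yields two embryos, can be caricatured in a few lines. This is only a hypothetical toy for illustration, not a model from the conversation: the function name, the random activation levels, and the winner-take-all rule are all invented assumptions.

```python
import random

def elect_organizers(n_cells, barriers=(), seed=0):
    """Toy sketch: cells with small random differences share one field;
    within each region that can still communicate, the strongest cell
    suppresses the rest and becomes the organizer ('I'm going to be
    the head, you guys don't be the head')."""
    rng = random.Random(seed)
    levels = [rng.random() for _ in range(n_cells)]
    # A scratch cuts communication: split the disk into isolated regions.
    regions, start = [], 0
    for b in sorted(barriers):
        regions.append(range(start, b))
        start = b
    regions.append(range(start, n_cells))
    # Winner-take-all inside each region -> one organizer per region.
    return [max(region, key=lambda i: levels[i]) for region in regions]

print(len(elect_organizers(50)))                  # intact disk: one self
print(len(elect_organizers(50, barriers=(25,))))  # scratched disk: two selves
```

In this caricature, scratching the disk (adding a barrier) doubles the number of organizers, which is the conjoined-twins result in miniature: how many selves form is decided by who can talk to whom, not by the egg.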
link |
00:27:33.920
But I, is that goal in there somewhere? Were they always destined? Like, are they discovering
link |
00:27:42.800
that goal? Like, where the hell did evolution discover this when you went from the prokaryotes
link |
00:27:49.360
to eukaryotic cells? And then they started making groups. And when you make a certain group,
link |
00:27:55.600
you make it sound, and it's such a tricky thing to try to understand, you make it
link |
00:28:03.600
sound like these cells didn't get together and come up with a goal. But the very act of them
link |
00:28:09.680
getting together revealed the goal that was always there. There was always that potential
link |
00:28:16.880
for that goal. So the first thing to say is that there are way more questions here than
link |
00:28:20.880
certainties. Okay, so everything I'm telling you is cutting-edge, developing stuff. So
link |
00:28:25.680
it's not as if any of us know the answer to this. But here's my opinion on
link |
00:28:29.520
this. I don't think that evolution produces solutions to specific problems,
link |
00:28:36.000
in other words, specific environments, like here's a frog that can live well in a froggy
link |
00:28:39.680
environment. I think what evolution produces is problem-solving machines that will
link |
00:28:46.000
solve problems in different spaces, not just three-dimensional space.
link |
00:28:50.320
This goes back to what we were talking about before: the brain is
link |
00:28:55.120
evolutionarily a late development. It's a system that is able to pursue goals in three
link |
00:29:01.360
dimensional space by giving commands to muscles, where did that system come from that system
link |
00:29:05.040
evolved from an evolutionarily much more ancient system, where collections of
link |
00:29:10.000
cells gave instructions for cell behaviors, meaning cells moving, dividing, dying, changing type, for
link |
00:29:18.320
cells to navigate morphospace, the space of anatomies, the space of all possible anatomies.
link |
00:29:23.440
And before that, cells were navigating transcriptional space, which is a space of all
link |
00:29:27.520
possible gene expressions. And before that, metabolic space. So what evolution has done,
link |
00:29:31.840
I think, is produce hardware that is very good at navigating different spaces using a
link |
00:29:38.720
bag of tricks, right, many of which I'm sure we can steal for autonomous vehicles and
link |
00:29:42.560
robotics and various things. And what happens is that they navigate these spaces without a whole
link |
00:29:47.840
lot of commitment to what the space is. In fact, they don't know what the space is, right? We are
link |
00:29:51.520
all brains in a vat, so to speak. Every cell does not know, right? Every cell is
link |
00:29:57.280
some other cell's external environment, right? So where is that border between you,
link |
00:30:02.160
and the outside world? You don't really know where that is, right? Every collection of
link |
00:30:05.680
cells has to figure that out from scratch. And the fact that evolution requires all of these things
link |
00:30:10.880
to figure out what they are, what effectors they have, what sensors they have, where does it make
link |
00:30:15.520
sense to draw a boundary between me and the outside world? The fact that you have to build all
link |
00:30:18.960
that from scratch, this autopoiesis, is what defines the border of a self. Now, biology uses a
link |
00:30:26.320
multi-scale competency architecture, meaning that every level has goals. So
link |
00:30:31.760
molecular networks have goals, cells have goals, tissues, organs, colonies. And it's the
link |
00:30:38.160
interplay of all of those that enables biology to solve problems in new ways, for example, in
link |
00:30:43.280
xenobots and various other things. This is, you know, it's exactly as you said, in many ways,
link |
00:30:50.640
the cells are discovering new ways of being. But at the same time, evolution certainly shapes all
link |
00:30:56.080
this. So evolution is very good at this agential bioengineering, right? When evolution
link |
00:31:01.680
is discovering a new way of being, you know, an animal or a plant or something,
link |
00:31:06.000
sometimes it's by changing the hardware, you know, changing proteins, protein structure,
link |
00:31:10.160
and so on. But much of the time, it's not by changing the hardware, it's by changing the
link |
00:31:14.160
signals that the cells give to each other. It's doing what we as engineers do, which is try to
link |
00:31:17.840
convince the cells to do various things by using signals, experiences, stimuli. That's what biology
link |
00:31:22.640
does. It has to, because it's not dealing with a blank slate every time. As you know, if you're
link |
00:31:27.360
evolution, and you're trying to make an organism, you're not dealing with a passive
link |
00:31:32.960
material that is fresh, that you have to fully specify; it already wants to do certain things. So the easiest
link |
00:31:37.760
way to do that search to find whatever is going to be adaptive, is to find the signals that are
link |
00:31:42.560
going to convince cells to do various things, right? Your sense is that evolution operates
link |
00:31:48.480
both in the software and the hardware. And it's just easier, more efficient to operate in the
link |
00:31:54.000
software. Yes. And I should also say, I don't think the distinction is sharp. In other words,
link |
00:31:58.800
I think it's a continuum. But I think it's a meaningful distinction where you can
link |
00:32:03.120
make changes to a particular protein, and now the enzymatic function is different, and it metabolizes
link |
00:32:08.480
differently, and whatever, and that will have implications for fitness. Or you can change the
link |
00:32:14.080
huge amount of information in the genome that isn't structural at all. It's signaling,
link |
00:32:20.480
it's when and how do cells say certain things to each other. And that can have massive changes,
link |
00:32:25.120
as far as how it's going to solve problems. I mean, this idea of multi hierarchical
link |
00:32:29.120
competence architecture, which is incredible to think about. So this hierarchy that evolution
link |
00:32:35.760
builds, I don't know who's responsible for this. I also see the incompetence of bureaucracies
link |
00:32:43.840
of humans when they get together. So how the hell does evolution build this, where at every level,
link |
00:32:53.200
only the best get to stick around, they somehow figure out how to do their job without knowing
link |
00:32:57.360
the bigger picture. And then there's like the bosses that do the bigger thing somehow, or that
link |
00:33:04.160
you can now abstract away the small group of cells as an organ or something. And then
link |
00:33:11.040
that organ does something bigger in the context of the full body or something like this.
link |
00:33:17.920
How is that built? Is there some intuition you can kind of provide of how that's constructed,
link |
00:33:23.680
that hierarchical competence architecture? I love that competence,
link |
00:33:29.680
just the word competence is pretty cool in this context, because everybody's good at their job.
link |
00:33:34.080
Yeah, no, it's really key. And the other nice thing about competency is that so my central
link |
00:33:39.280
belief in all of this is that engineering is the right perspective on all of this stuff,
link |
00:33:43.840
because it gets you away from subjective terms. You know, people talk about sentience and this
link |
00:33:50.480
and that; those things are very hard to define, or people argue about them philosophically.
link |
00:33:54.880
I think that engineering terms like competency, like, you know, pursuit of goals, right? All of
link |
00:34:02.080
these things are empirically incredibly useful, because you know it when you see it,
link |
00:34:06.400
and if it helps you build, right? If I can pick the right level, I say, this thing has,
link |
00:34:11.920
I believe, this level of competency. I think it's like a thermostat, or I think it's
link |
00:34:17.200
like a better thermostat, or I think it's a, you know, various other kinds of, you know,
link |
00:34:22.800
many, many different kinds of complex systems. If that helps me to control and predict and build
link |
00:34:28.000
such systems, then that's all there is to say, there's no more philosophy to argue about. So I
link |
00:34:32.000
like competency in that way, because you can quantify it. In fact, you
link |
00:34:35.120
have to make a claim: competent at what? And if I tell you,
link |
00:34:38.640
it has a goal, the question is, what's the goal? And how do you know? And I say, well, because
link |
00:34:42.400
every time I deviated from this particular state, that's what it spends energy to get back to,
link |
00:34:46.320
that's the goal. And we can quantify it, and we can be objective about it. So,
link |
00:34:51.920
we're not used to thinking about this, I give a talk sometimes called Why don't robots get cancer,
link |
00:34:56.000
right? And the reason robots don't get cancer is because generally speaking, with a few exceptions,
link |
00:35:00.160
our architectures have been, you've got a bunch of dumb parts. And you hope that if you put them
link |
00:35:05.280
together, the overlying machine will have some intelligence and do something or other,
link |
00:35:09.520
right, but the individual parts don't care, they don't have an agenda. Biology isn't like
link |
00:35:13.280
that every level has an agenda. And the final outcome is the result of cooperation and competition,
link |
00:35:20.560
both within and across levels. So for example, during embryogenesis, your tissues and organs are
link |
00:35:25.360
competing with each other. And it's actually a really important part of development, there's a
link |
00:35:28.800
reason they compete with each other, they're not all just, you know, sort of helping each other,
link |
00:35:33.440
they're also competing for information and for limited metabolic resources.
link |
00:35:38.560
But to get back to your other point, which is, you know, that it seems like
link |
00:35:43.440
really efficient and good and so on compared to some of our human efforts. We also have to keep
link |
00:35:48.800
in mind that what happens here is that each level bends the option space for the level beneath so
link |
00:35:56.320
that your parts basically don't see the geometry. And I take
link |
00:36:03.920
this terminology seriously from, like, relativity, right, where the space
link |
00:36:10.160
is literally bent. So the option space is deformed by the higher level so that the lower levels, all
link |
00:36:15.120
they really have to do is go down their concentration gradient, they don't have to,
link |
00:36:18.000
in fact, they can't know what the big picture is. But if you bend the space just right,
link |
00:36:22.320
if they do what locally seems right, they end up doing your bidding, they end up doing things that
link |
00:36:26.720
are optimal in the higher space. Conversely, because the components are good at getting their
link |
00:36:33.840
job done, you as the higher level don't need to try to compute all the low level controls,
link |
00:36:38.880
all you're doing is bending the space, you don't know or care how they're going to do it.
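The "bending the option space" idea has a simple computational caricature: the higher level chooses a potential, and the parts only ever feel their local downhill direction. This is a hypothetical sketch, not Levin's model; the quadratic potential, step sizes, and names are invented assumptions.

```python
def bend_and_settle(cells, target, steps=500, lr=0.05, eps=1e-4):
    """The higher level 'bends the space' by picking the potential V;
    each cell only ever follows its local gradient and never sees the
    big picture."""
    V = lambda x: (x - target) ** 2  # the bent option space
    for _ in range(steps):
        # Each cell takes a small step down its locally sensed gradient.
        cells = [x - lr * (V(x + eps) - V(x - eps)) / (2 * eps) for x in cells]
    return cells

# Cells start scattered; pure local descent lands them all on the goal.
final = bend_and_settle([-3.0, 0.5, 7.2], target=2.0)
print([round(x, 2) for x in final])  # → [2.0, 2.0, 2.0]
```

The design point is that the controller never computes per-cell trajectories; it only reshapes `V`, and "do what locally seems right" does the rest, which is the division of labor described above.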
link |
00:36:42.160
I'll give you a super simple example in the tadpole. We found that, okay, so tadpoles need
link |
00:36:47.680
to become frogs, and to go from a tadpole head to a frog head, you have to rearrange the
link |
00:36:51.600
face. So the eyes have to move forward, the jaws have to come out, the nostrils move, like everything
link |
00:36:55.200
moves. It used to be thought that because all tadpoles look the same, and all frogs look the
link |
00:36:59.840
same, that if every piece just moves in the right direction, the right amount,
link |
00:37:03.200
then you get your frog, right. So we decided to test this. I had a hypothesis: I
link |
00:37:08.000
thought, actually, the system is probably more intelligent than that. So what did we do?
link |
00:37:11.600
We made what we call Picasso tadpoles, where everything is scrambled. So the eyes are on the
link |
00:37:15.920
back of the head, the jaws are off to the side, everything is scrambled. Well, guess what they
link |
00:37:18.960
make, they make pretty normal frogs, because all the different things move around in novel
link |
00:37:23.600
paths and configurations until they get to the correct froggy sort of frog face configuration,
link |
00:37:28.240
then they stop. So the thing about that is, now imagine evolution, right? You make some
link |
00:37:34.080
sort of mutation, and it does, like every mutation, it does many things. So something good comes of it,
link |
00:37:40.560
but also it moves your mouth off to the side, right? Now, if there wasn't this multi-scale
link |
00:37:46.160
competency, you can see where this is going, if there wasn't this multi-scale competency,
link |
00:37:49.360
the organism would be dead, your fitness is zero, because you can't eat. And you would never get to
link |
00:37:53.200
explore the other beneficial consequences of that mutation, you'd have to wait until you find some
link |
00:37:57.680
other way of doing it without moving the mouth, and that's really hard. So the fitness landscape
link |
00:38:01.840
would be incredibly rugged and evolution would take forever. The reason it works, one of the reasons
link |
00:38:06.320
it works so well, is because you do that, and no worries, the mouth will find its way to where
link |
00:38:11.360
it belongs, right? So now you get to explore. So what that means is that all of these mutations
link |
00:38:15.680
that otherwise would be deleterious are now neutral, because the competency of the parts
link |
00:38:21.280
make up for all kinds of things. So all the noise of development, all the variability in the
link |
00:38:26.480
environment, all these things, the competency of the parts makes up for it. So that's
link |
00:38:32.080
all fantastic, right? That's all great. The only other thing to remember when
link |
00:38:36.080
we compare this to human efforts is this. Every component has its own goals in various spaces,
link |
00:38:41.040
usually with very little regard for the welfare of the other levels. So as a simple example,
link |
00:38:46.560
you know, you as a complex system, you will go out and you will do, you know, jiu jitsu,
link |
00:38:52.000
or whatever, or you'll go rock climbing and scrape a bunch of cells off your
link |
00:38:55.520
hands. And then you're happy as a system, right? You come back, and you've accomplished some goals,
link |
00:38:59.680
and you're really happy. Those cells are dead. They're gone. Right? Did you think about those
link |
00:39:03.120
cells? Not really, right? You had some bruising, you selfish SOB. That's it. And so
link |
00:39:08.640
and so that's the thing to remember, you know, and we know this from history,
link |
00:39:13.760
is that just being a collective isn't enough. Because what the goals of that collective will
link |
00:39:19.520
be relative to the welfare of the individual parts is a massively open question. The ends justify the means,
link |
00:39:24.560
I'm telling you, Stalin was onto something. No, that's the danger. Exactly, that's the
link |
00:39:29.600
danger. For us humans, we have to construct ethical systems under which we don't take seriously
link |
00:39:39.760
the full mechanism of biology and apply it to the way the world functions,
link |
00:39:43.840
which is an interesting line we've drawn. The world that built us is the one we
link |
00:39:51.680
reject in some sense, when we construct human societies, the idea that this country was founded
link |
00:39:59.120
on that all men are created equal. That's such a fascinating idea. That's like, you're fighting
link |
00:40:05.440
against nature and saying, well, there's something bigger here than a hierarchical competency
link |
00:40:14.640
architecture. But there's so many interesting things you said. So from an algorithmic perspective,
link |
00:40:21.920
the act of bending the option space, that's really profound. Because if you
link |
00:40:29.840
look at the way AI systems are built today, there's a big system, like I said, with robots,
link |
00:40:36.800
and it has a goal, and it gets better and better at optimizing that goal, at accomplishing that goal.
link |
00:40:42.080
But if biology built a hierarchical system where everything is doing computation,
link |
00:40:49.360
and everything is accomplishing the goal, not only that, it's kind of dumb,
link |
00:40:56.400
you know, within a bent option space, it's just doing the thing that's the easiest
link |
00:41:03.360
thing, in some sense. And somehow that allows you to have turtles on top of turtles,
link |
00:41:10.960
literally dumb systems on top of dumb systems that as a whole create something incredibly smart.
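One way to see why the competency of the parts smooths the fitness landscape is a toy model of the mouth example above. This is an invented illustration, not data from the lab: the fitness function, the 50% displacement rate, and all names are assumptions made up for the sketch.

```python
import random

def fitness(mutation, parts_are_competent):
    """Toy caricature of the Picasso-tadpole argument: a mutation carries
    some benefit, but may also displace the mouth. Competent parts move
    the mouth back, making the defect neutral; dumb parts leave the
    organism unable to eat."""
    benefit, mouth_displaced = mutation
    if mouth_displaced and not parts_are_competent:
        return 0.0  # can't eat -> lethal in the dumb-parts world
    return benefit  # competency lets the benefit be explored

rng = random.Random(1)
mutants = [(rng.random(), rng.random() < 0.5) for _ in range(1000)]
avg = lambda xs: sum(xs) / len(xs)
print(avg([fitness(m, False) for m in mutants]))  # rugged: many lethal mutants
print(avg([fitness(m, True) for m in mutants]))   # smoothed: same mutations, now neutral
```

The same pool of mutations scores much higher once the parts can repair side effects, which is the sense in which otherwise deleterious mutations become neutral and evolution gets to keep exploring.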
link |
00:41:18.480
Yeah, I mean, every system has some degree of intelligence in its own problem domain. So,
link |
00:41:25.200
so cells will have problems they're trying to solve in physiological space and transcriptional
link |
00:41:30.400
space. And I can give you some cool examples of that. But the collective is trying
link |
00:41:34.240
to solve problems in anatomical space, right, in forming a, you know, a creature and growing your
link |
00:41:38.800
blood vessels and so on. And then the whole body is solving yet other
link |
00:41:44.480
problems, they may be in social space and linguistic space and three dimensional space.
link |
00:41:48.080
And who knows, you know, the group might be solving problems in, you know, I don't know,
link |
00:41:52.080
some sort of financial space or something. So one of the major differences with with most,
link |
00:41:59.280
with most AIs today is the kind of flatness of the architecture, but also the fact that
link |
00:42:06.160
they're constructed from outside their own borders. So,
link |
00:42:14.400
to a large extent, and of course there are counterexamples now, but to a large extent,
link |
00:42:18.640
our technology has been such that you create a machine or a robot, it knows what its sensors are,
link |
00:42:23.760
it knows what its effectors are, it knows the boundary between it and the outside world,
link |
00:42:27.760
although this is given from the outside. Biology constructs this from scratch. Now the best example
link |
00:42:32.800
of this originally in robotics was actually Josh Bongard's work in 2006, where he
link |
00:42:38.880
made these robots that did not know their shape to start with. So like a baby, they sort of
link |
00:42:43.120
floundered around, they made some hypotheses, well, I did this, and I moved in this way. Well,
link |
00:42:47.040
maybe I'm a whatever, maybe I have wheels, or maybe I have six legs or whatever, right? And
link |
00:42:50.800
they would make a model and eventually will crawl around. So that's, I mean, that's really good.
link |
00:42:54.240
That's part of the autopoiesis, but we can go a step further. And some people are doing this. And
link |
00:42:58.160
we're sort of working on some of this too. It's this idea that, let's go back even further:
link |
00:43:02.960
you don't even know what sensors you have, you don't know where you end in the outside world
link |
00:43:06.640
begins. All you have is certain things like active inference, meaning you're trying to minimize
link |
00:43:11.200
surprise, right? You have some metabolic constraints, you don't have all the energy you
link |
00:43:14.880
need, you don't have all the time in the world to think about everything you want to think about. So
link |
00:43:18.640
that means that you can't afford to be a micro-reductionist. You know, all this data coming in,
link |
00:43:23.280
you have to coarse-grain it and say, I'm gonna take all this stuff, and I'm gonna call that a
link |
00:43:26.560
cat. I'm gonna take all this, I'm gonna call that the edge of the table I don't want to fall off of.
link |
00:43:30.560
And I don't want to know anything about the microstates. What I want to know is, what is the optimal
link |
00:43:34.480
way to cut up my world. And by the way, this thing over here, that's me. And the reason that's me is
link |
00:43:38.560
because I have more control over this than I have over any of this other stuff. And so now you can
link |
00:43:42.640
begin to, right? So that's self-construction: figuring out, making models of the
link |
00:43:46.560
outside world, and then turning that inwards, and starting to make a model of yourself, right, which
link |
00:43:51.120
immediately starts to get into issues of agency and control. Because if you are under
link |
00:43:58.560
metabolic constraints, meaning you don't have all the energy in the world,
link |
00:44:02.240
you have to be efficient, and that immediately forces you to start telling stories about coarse-grained
link |
00:44:08.000
agents that do things, right? You don't have the energy to, like Laplace's demon, you know,
link |
00:44:11.840
calculate every possible state that's going to happen, so you have to coarse-grain,
link |
00:44:17.360
and you have to say, that is the kind of creature that does things, either things that I avoid,
link |
00:44:21.920
or things that I will go towards, that's maybe a food or whatever, whatever it's going to be.
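The earlier criterion "that's me, because I have more control over this than over any of this other stuff" also has a tiny computational caricature: issue random motor commands and call "me" whatever tracks them. This is a hypothetical sketch; the correlation scoring, the threshold, and the names are all invented for illustration.

```python
import random

def discover_self(n_channels, controlled, steps=2000, seed=0):
    """Toy self/world boundary discovery: emit random motor commands
    and score each sensor channel by how well its readings track the
    commands. Channels I reliably control get labeled 'me'."""
    rng = random.Random(seed)
    scores = [0] * n_channels
    for _ in range(steps):
        command = rng.choice([-1, 1])
        for ch in range(n_channels):
            if ch in controlled:
                reading = command              # my body obeys my commands
            else:
                reading = rng.choice([-1, 1])  # the world does its own thing
            scores[ch] += command * reading
    # Strong command-reading correlation -> inside the self boundary.
    return {ch for ch in range(n_channels) if scores[ch] > steps // 2}

print(discover_self(6, controlled={0, 1}))  # the agent finds its own body: {0, 1}
```

Nothing tells the agent where it ends; the boundary falls out of the statistics of control, which is the point of the self-construction story above.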
link |
00:44:25.280
And so right at the base of simple, very simple organisms starting to make
link |
00:44:31.920
models of agents doing things, that is the origin of models of free will, basically, right, because
link |
00:44:39.040
you see the world around you as having agency. And then you turn that on yourself. And you say,
link |
00:44:42.880
wait, I have agency too, I can do things, right? And then you make decisions about what you're
link |
00:44:47.440
going to do. So one model is to view all of those kinds of things as
link |
00:44:53.920
being driven by that early need to determine what you are, and to then take
link |
00:44:59.600
actions in the most energetically efficient space possible. Right. So free will emerges
link |
00:45:04.800
when you try to simplify, tell a nice narrative about your environment. I think that's very
link |
00:45:10.000
plausible. Yeah. Do you think free will is an illusion? You're kind of implying that it's a useful hack.
link |
00:45:19.360
Well, I'll say two things. The first thing is, I think it's very plausible to say that
link |
00:45:24.320
any agent, whether it's biological or not, any agent that
link |
00:45:30.560
self-constructs under energy constraints, is going to believe in free will. We'll get to whether it
link |
00:45:36.960
has free will momentarily. But I think what it definitely drives is a view of
link |
00:45:41.200
yourself and the outside world as agential. I think that's inescapable. So that's true
link |
00:45:45.360
for even primitive organisms? I think so. Now, obviously,
link |
00:45:50.480
you have to scale down, right? So they don't have the kinds of complex metacognition
link |
00:45:55.360
that we have, so they can't do long-term planning and thinking about free will and so on.
link |
00:45:59.520
But the sense of agency is really useful to accomplish tasks, simple or complicated. That's
link |
00:46:05.040
right. In all kinds of spaces, not just in obvious three dimensional space. I mean, we're very good
link |
00:46:09.680
that the thing is, humans are very good at detecting agency of like medium sized objects
link |
00:46:16.720
moving at medium speeds in the three dimensional world, right? We see a bowling ball and we see a
link |
00:46:20.560
mouse and we immediately know what the difference is, right? And how we're going to interact. Mostly things
link |
00:46:23.920
you can eat or get eaten by. Yeah, yeah. That's our training set, right? From the time
link |
00:46:28.400
you're little, your training set is visual data on this little chunk of your experience.
link |
00:46:33.120
But imagine if, from the time that we were born, we had an innate sense of our blood
link |
00:46:39.120
chemistry, if you could feel your blood chemistry, the way you can see, right, you had a high bandwidth
link |
00:46:42.960
connection, and you could feel your blood chemistry, and you could see, you could sense all
link |
00:46:46.640
the things that your organs were doing. So your pancreas, your liver, all the things. If we had
link |
00:46:51.040
that, we would be very good at detecting intelligence in physiological space. We would
link |
00:46:55.760
know the level of intelligence that our various organs were deploying to deal with things that
link |
00:47:00.320
were coming, to anticipate the stimuli, you know. But we're just terrible at that. We
link |
00:47:04.400
don't. In fact, people don't even, you know... you talk about intelligence in these
link |
00:47:07.920
other spaces, and a lot of people think that's just crazy, because all we know
link |
00:47:12.160
is motion. We do have access to that information. So it's actually possible that evolution could,
link |
00:47:18.880
if we wanted to construct an organism that's able to perceive the flow of blood through your body,
link |
00:47:24.400
the way you see an old friend and say, yo, what's up? How's the wife and the kids? In that same way,
link |
00:47:32.560
you would feel a connection to the liver. Yeah, yeah, I think,
link |
00:47:37.920
you know, maybe other people's liver and not just your own, because you don't have access to other
link |
00:47:41.680
people's. Not yet. But you could imagine some really interesting connection, right? But like
link |
00:47:46.160
sexual selection, like, oh, that girl's got a nice liver. Well, that's like, the way her blood flows,
link |
00:47:52.800
the dynamics of the blood is very interesting. It's novel. I've never seen one of those.
link |
00:47:58.320
But you know, that's exactly what we're trying to half-ass when we judge
link |
00:48:03.920
beauty by facial symmetry and so on. That's a half-assed assessment of exactly that. Because
link |
00:48:09.120
if your cells could not cooperate enough to keep your organism symmetrical, you know,
link |
00:48:13.760
you can make some inferences about what else is wrong, right? Like that's a very, you know,
link |
00:48:17.120
that's a very basic. Interesting. Yeah. So that in some deep sense, actually, that is what we're
link |
00:48:23.280
doing. We're trying to infer health. We use the word healthy, but basically: how functional
link |
00:48:33.120
is this biological system I'm looking at so I can hook up with that one and make offspring? Yeah,
link |
00:48:41.120
yeah. Well, what kind of hardware might their genomics give me that might be useful in
link |
00:48:45.360
the future? I wonder why evolution didn't give us a higher resolution signal. Like why the whole
link |
00:48:50.720
peacock thing with the feathers? It doesn't seem, it's a very low bandwidth signal for
link |
00:48:58.160
sexual selection. I'm not an expert on this stuff, on peacocks, but...
link |
00:49:02.880
well, you know, I'll take a stab at the reason. I think that it's because it's an arms race. You
link |
00:49:08.880
see, you don't want everybody to know everything about you. And in fact,
link |
00:49:14.160
there's another interesting part of this arms race, which is, if you think about this,
link |
00:49:21.120
the most adaptive, evolvable system is one that has the most level of top down control, right?
link |
00:49:27.760
If it's really easy to say to a bunch of cells, make another finger versus, okay, here's 10,000
link |
00:49:33.920
gene expression changes that you need to do to change your finger, right? The system
link |
00:49:38.800
with good top-down control that has memory (and we need to get back to that, by the way;
link |
00:49:42.320
that's a question I neglected to answer, about where the memory is and so on). A system that uses
link |
00:49:48.080
all of that is really highly evolvable and that's fantastic. But guess what? It's also highly subject
link |
00:49:53.920
to hijacking by parasites, by cheaters of various kinds, by conspecifics. Like we found that,
link |
00:50:01.440
and then that goes back to the story of the pattern memory in these planaria,
link |
00:50:04.880
there's a bacterium that lives on these planaria. That bacterium has an input into how many heads
link |
00:50:09.840
the worm is going to have because it hijacks that control system and it's able to make a
link |
00:50:14.480
chemical that basically interfaces with the system that calculates how many heads you're
link |
00:50:18.480
supposed to have, and they can make them have two heads. And so you can imagine the tension:
link |
00:50:22.080
you want to be understandable, for your own parts to understand each other,
link |
00:50:25.520
but you don't want to be too understandable because you'll be too easily controllable.
link |
00:50:28.880
And so I think that my guess is that that opposing pressure keeps us from being a super high
link |
00:50:36.640
bandwidth kind of thing where we can just look at somebody and know everything about them.
link |
00:50:40.240
So it's a kind of biological game of Texas hold 'em. You're showing some cards and you're hiding
link |
00:50:45.520
other cards, and there's bluffing and there's all that. And then there's
link |
00:50:50.560
probably whole species that would do way too much bluffing. That's probably where peacocks fall.
link |
00:50:56.800
There's a book that I don't remember if I read or if I read summaries of the book,
link |
00:51:04.400
but it's about evolution of beauty and birds. Where is that from? Is that a book or does
link |
00:51:10.160
Richard Dawkins talk about it? But basically, some species start to, like, select
link |
00:51:15.600
for beauty. They just for some reason select for beauty. There is a case to be made.
link |
00:51:21.280
Actually now I'm starting to remember, I think Darwin himself made a case that you can select
link |
00:51:27.200
based on beauty alone. There's a point where beauty doesn't represent some underlying biological
link |
00:51:35.680
truth. You start to select for beauty itself. And I think the deep question is there some evolutionary
link |
00:51:44.400
value to beauty, but it's an interesting kind of thought that can we deviate completely from
link |
00:51:53.760
the deep biological truth to actually appreciate some kind of summarization in itself.
link |
00:52:00.480
Let me get back to memory because this is a really interesting idea. How do a collection of cells
link |
00:52:07.600
remember anything? How do biological systems remember anything? How is that akin to the kind
link |
00:52:13.520
of memory we think of humans as having within our big cognitive engine?
link |
00:52:17.920
Yeah. One of the ways to start thinking about bioelectricity is to ask ourselves, where did
link |
00:52:25.200
neurons and all these cool tricks that the brain uses to run these amazing problem solving abilities
link |
00:52:32.320
on, basically, an electrical network, right? Where did that come from? They didn't just
link |
00:52:36.400
appear out of nowhere. It must have evolved from something. And what it evolved from
link |
00:52:40.720
was a much more ancient ability of cells to form networks to solve other kinds of problems. For
link |
00:52:46.320
example, to navigate morphospace, to control the body shape. And so all of the components
link |
00:52:52.320
of neurons, so ion channels, neurotransmitter machinery, electrical synapses, all this stuff
link |
00:52:58.320
is way older than brains, way older than neurons, in fact, older than multicellularity. And so
link |
00:53:03.600
it was already there, even in bacterial biofilms. There's some beautiful work from UCSD on brain-like
link |
00:53:09.120
dynamics in bacterial biofilms. So evolution figured out very early on that electrical networks
link |
00:53:14.880
are amazing at having memories, at integrating information across distance, at different kinds
link |
00:53:19.120
of optimization tasks, you know, image recognition and so on, long before there were brains.
link |
00:53:24.400
Can you actually just step back? We'll return to it. What is bioelectricity? What is biochemistry?
link |
00:53:30.160
What are electrical networks? I think a lot of the biology community focuses on
link |
00:53:36.160
the chemicals as the signaling mechanisms that make the whole thing work. You have, I think
link |
00:53:47.200
to a large degree uniquely (maybe you can correct me on that), focused on the bioelectricity,
link |
00:53:53.600
which is using electricity for signaling. There's also probably mechanical. Sure, sure. Like knocking
link |
00:54:00.080
on the door. So what's the difference? And what's an electrical network? Yeah, so I want to make
link |
00:54:07.840
sure and kind of give credit where credit is due. So as far back as 1903, and probably late 1800s
link |
00:54:14.800
already, people were thinking about the importance of electrical phenomena in life. So I'm for sure
link |
00:54:20.560
not the first person to stress the importance of electricity. People, there were waves of research
link |
00:54:25.920
in the 30s, in the 40s, and then again in the kind of 70s, 80s, and 90s, of sort of the
link |
00:54:33.600
pioneers of bioelectricity, who did some amazing work on all this. I think what
link |
00:54:37.520
we've done that's new (and I'll describe what the bioelectricity is) is to step away from the
link |
00:54:43.040
idea that, well, here's another piece of physics that you
link |
00:54:46.800
need to keep track of to understand physiology and development. And to really start looking at this
link |
00:54:51.760
as saying, no, this is a privileged computational layer that gives you access to the actual
link |
00:54:57.360
cognition of the tissue, of basal cognition. So merging that developmental biophysics with
link |
00:55:02.160
ideas about cognition and computation and so on, I think that's what we've done that's new.
link |
00:55:05.920
But people have been talking about bioelectricity for a really long time. And so I'll
link |
00:55:09.840
define that. So what happens is that if you have a single cell, the cell has a membrane,
link |
00:55:16.400
in that membrane are proteins called ion channels, and those proteins allow charged molecules,
link |
00:55:21.600
potassium, sodium, chloride, to go in and out under certain circumstances. And when there's
link |
00:55:27.280
an imbalance of those ions, there's a voltage gradient across that membrane. And so
link |
00:55:33.200
all cells, all living cells try to hold a particular kind of voltage difference across
link |
00:55:38.720
the membrane, and they spend a lot of energy to do so. So that's it,
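The voltage difference across the membrane that he's describing can be roughly estimated with the Nernst equation for each ion. A minimal sketch; the concentrations below are typical textbook values for mammalian cells, not numbers from the conversation:

```python
import math

def nernst_mV(z, c_out_mM, c_in_mM, T=310.0):
    """Nernst equilibrium potential (mV) for an ion of valence z.

    R*T/F converts the in/out concentration ratio into volts; at body
    temperature (310 K), RT/F is about 26.7 mV.
    """
    R, F = 8.314, 96485.0  # gas constant (J/mol/K), Faraday constant (C/mol)
    return 1000.0 * (R * T) / (z * F) * math.log(c_out_mM / c_in_mM)

# Typical mammalian concentrations (approximate textbook values):
E_K  = nernst_mV(+1, c_out_mM=5.0,   c_in_mM=140.0)   # potassium
E_Na = nernst_mV(+1, c_out_mM=145.0, c_in_mM=12.0)    # sodium

print(f"E_K  = {E_K:.0f} mV")   # strongly negative
print(f"E_Na = {E_Na:.0f} mV")  # positive
```

The resting potential the cell actually holds sits between these equilibrium values, weighted by how permeable the membrane is to each ion, which is exactly what ion channels control.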
link |
00:55:44.240
that's a single cell. When you have multiple cells, the cells sitting next to each other,
link |
00:55:48.720
they can communicate their voltage state to each other via a number of different ways. But one of
link |
00:55:53.200
them is this thing called a gap junction, which is basically like a little submarine hatch that
link |
00:55:56.880
just kind of docks, right? And the ions from one side can flow to the other side, and vice versa.
link |
00:56:02.160
So...
link |
00:56:02.720
Isn't it incredible that this evolved? Isn't that wild? Because that didn't exist.
link |
00:56:09.600
Correct. This had to be evolved.
link |
00:56:11.440
It had to be invented.
link |
00:56:12.640
That's right.
link |
00:56:13.280
Somebody invented electricity in the ocean. When did this get invented?
link |
00:56:17.440
Yeah. So, I mean, it is incredible. The guy who discovered gap junctions,
link |
00:56:22.800
Werner Loewenstein, I visited him. He was really old.
link |
00:56:25.360
A human being?
link |
00:56:26.720
He discovered them.
link |
00:56:27.360
Because whoever really discovered them lived probably four billion years ago.
link |
00:56:32.480
Good point.
link |
00:56:32.880
So you give credit where credit is due, I'm just saying.
link |
00:56:35.600
He rediscovered gap junctions. But when I visited him in Woods Hole, maybe 20 years ago now,
link |
00:56:43.200
he told me that he was writing, and unfortunately, he passed away, and I think this book never got
link |
00:56:47.840
written. He was writing a book on gap junctions and consciousness. And I think it would have been
link |
00:56:52.800
an incredible book, because gap junctions are magic. I'll explain why in a minute.
link |
00:56:57.920
What happens is that, just imagine, the thing about both these ion channels and these gap
link |
00:57:02.720
junctions is that many of them are themselves voltage sensitive. So that's a voltage sensitive
link |
00:57:08.880
current conductance. That's a transistor. And as soon as you've invented one, immediately,
link |
00:57:13.600
you now get access to, from this platonic space of mathematical truths, you get access to all of the
link |
00:57:20.240
cool things that transistors do. So now, when you have a network of cells, not only do they talk to
link |
00:57:26.000
each other, but they can send messages to each other, and the differences of voltage can propagate.
link |
00:57:30.160
Now, to neuroscientists, this is old hat, because you see this in the brain, right? These action
link |
00:57:34.000
potentials, the electricity. They have these awesome movies where you can take a transparent
link |
00:57:40.000
animal, like a zebrafish, and you can literally look down, and you can see all
link |
00:57:45.040
the firings as the fish is making decisions about what to eat and things like this. It's amazing.
link |
00:57:49.120
Well, your whole body is doing that all the time, just much slower. So there are very few things
link |
00:57:54.160
that neurons do that all the cells in your body don't do. They all do very similar things, just
link |
00:57:59.360
on a much slower timescale. And whereas your brain is thinking about how to solve problems in
link |
00:58:04.320
three dimensional space, the cells in an embryo are thinking about how to solve problems in
link |
00:58:08.880
anatomical space. They're trying to have memories like, hey, how many fingers are we supposed to
link |
00:58:12.240
have? Well, how many do we have now? What do we do to get from here to there? That's the kind of
link |
00:58:15.840
problems they're thinking about. And the reason that gap junctions are magic is, imagine, right,
link |
00:58:20.720
from the earliest time. Here are two cells. This cell, how can they communicate? Well,
link |
00:58:29.360
the simple version is this cell could send a chemical signal, it floats over, and it hits
link |
00:58:34.800
a receptor on this cell, right? Because it comes from outside, this cell can very easily tell that
link |
00:58:39.200
that came from outside. Whatever information is coming, that's not my information. That information
link |
00:58:44.240
is coming from the outside. So I can trust it, I can ignore it, I can do various things with it,
link |
00:58:48.640
whatever, but I know it comes from the outside. Now imagine
link |
00:58:52.160
instead that you have two cells with a gap junction between them. Something happens,
link |
00:58:55.360
let's say this cell gets poked, there's a calcium spike, the calcium spike or whatever small
link |
00:58:59.760
molecule signal propagates through the gap junction to this cell. There's no ownership
link |
00:59:04.400
metadata on that signal. This cell does not know now that it came from outside because it looks
link |
00:59:10.000
exactly like its own memories would have looked like of whatever had happened, right? So gap
link |
00:59:15.200
junctions to some extent wipe ownership information on data, which means that if you and
link |
00:59:21.440
I are sharing memories and we can't quite tell who the memories belong to, that's the beginning of a
link |
00:59:26.320
mind meld. That's the beginning of a scale-up of cognition from here's me and here's you to no,
link |
00:59:31.840
now there's just us. So gap junctions enforce a collective intelligence. That's right. It
link |
00:59:36.640
helps. It's the beginning. It's not the whole story by any means, but it's the start.
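The ownership-metadata point can be made concrete with a toy model. Everything here is invented for illustration (it is not a biophysical model): a receptor-mediated signal arrives attributably tagged with its sender, while a gap-junction signal is written straight into the neighbor's own state:

```python
# Toy contrast between receptor signaling (source is knowable) and
# gap-junction coupling (no ownership metadata on the signal).

class Cell:
    def __init__(self, name):
        self.name = name
        self.tagged_inbox = []   # receptor signals: (sender, payload)
        self.state = []          # internal events, origin indistinguishable

    def receive_via_receptor(self, sender, payload):
        # Came through the membrane from outside: the cell can tell.
        self.tagged_inbox.append((sender.name, payload))

    def receive_via_gap_junction(self, payload):
        # Lands directly in the cell's state, exactly like its own event.
        self.state.append(payload)

    def experience(self, payload):
        self.state.append(payload)

a, b = Cell("A"), Cell("B")
a.experience("calcium spike")

# Receptor route: B knows the spike was A's.
b.receive_via_receptor(a, "calcium spike")

# Gap-junction route: the spike propagates into B's own state.
b.receive_via_gap_junction("calcium spike")

print(b.tagged_inbox)  # attributable: [('A', 'calcium spike')]
print(b.state)         # ['calcium spike'] -- but who did this happen to?
```

Once states are pooled like this, "my memory" versus "your memory" stops being well-defined, which is the start of the mind-meld scale-up he describes.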
link |
00:59:39.680
Where's the state of the system stored? Is it in part in the gap junctions themselves? Is it in the
link |
00:59:48.240
cells? There are many, many layers to this as always in biology. So there are chemical networks.
link |
00:59:55.360
So for example, gene regulatory networks, or basically any kind of chemical pathway
link |
01:00:00.320
where different chemicals activate and repress each other, they can store memories. So in a
link |
01:00:04.480
dynamical system sense, they can store memories. They can get into stable states that are hard to
link |
01:00:09.120
pull them out of. Once they get into such a state, that's a memory, a permanent memory or a
link |
01:00:13.200
semi-permanent memory of something that's happened. There are cytoskeletal structures that
link |
01:00:17.760
physically store memories in their configuration. There are electrical memories
link |
01:00:24.640
like flip-flops, where there is no physical change. I show my students this example
link |
01:00:30.560
of a flip-flop. And the reason that it stores a zero or a one is not because some piece of the hardware
link |
01:00:37.920
moved. It's because there's a cycling of the current in one side of the thing. If I come over
link |
01:00:42.880
and I hold the other side to a high voltage for a brief period of time, it flips over and now it's
link |
01:00:50.080
here. But none of the hardware moved. The information is held in a stable dynamical state. And
link |
01:00:54.880
if you were to X-ray the thing, you couldn't tell me if it was zero or one, because all you would
link |
01:00:58.560
see is where the hardware is. You wouldn't see the energetic state of the system. So there are
link |
01:01:03.120
bioelectrical states that are held in that exact way, like volatile RAM, basically, in the
link |
01:01:09.680
electrical state. It's very akin to the different ways that memory is stored in a computer.
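Both the gene-regulatory memories and the flip-flop he describes are instances of bistability: the stored bit is which attractor the dynamics occupy, not any change to the hardware. A minimal sketch using a mutual-repression toggle switch; the equations and parameters are a standard illustrative choice, not from the conversation:

```python
def toggle(x, y, drive_y=0.0, a=4.0, n=2, dt=0.01, steps=2000):
    """Euler-integrate a two-gene mutual-repression toggle switch:
       dx/dt = a/(1 + y^n) - x,   dy/dt = a/(1 + x^n) - y + drive_y
    With these parameters two stable states exist: x high / y low,
    or the reverse."""
    for _ in range(steps):
        dx = a / (1 + y**n) - x
        dy = a / (1 + x**n) - y + drive_y
        x, y = x + dt * dx, y + dt * dy
    return x, y

# Settle into the "x high" attractor: this is the stored bit.
x, y = toggle(3.0, 0.3)
print(f"before: x={x:.2f}, y={y:.2f}")   # x high, y low

# A *transient* input flips the state...
x, y = toggle(x, y, drive_y=10.0)
# ...and the new state persists after the input is gone.
x, y = toggle(x, y)
print(f"after:  x={x:.2f}, y={y:.2f}")   # x low, y high
```

Nothing in the equations changed when the bit flipped; like the flip-flop, the memory lives entirely in the energetic/dynamical state, which is why an X-ray of the "hardware" could not reveal it.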
link |
01:01:15.840
So there's RAM, there's the hard drive. You can make that mapping, right? So I think the interesting
link |
01:01:21.120
thing is that based on the biology, we can have a more sophisticated, you know, I think we can
link |
01:01:26.960
revise some of our computer engineering methods because there are some interesting things that
link |
01:01:32.560
biology does that we haven't done yet. But that mapping is not bad. I mean, I think it works in many ways.
link |
01:01:38.400
Yeah, I wonder because I mean, the way we build computers at the root of computer science is the
link |
01:01:43.280
idea of proof of correctness. We program things to be perfect, reliable. You know, this idea of
link |
01:01:52.240
resilience and robustness to unknown conditions is not as important. So that's what biology is really
link |
01:01:58.240
good at. So I don't know what kind of systems. I don't know how we go from a computer to a
link |
01:02:04.000
biological system in the future. Yeah, I think that, you know, the thing about biology is all
link |
01:02:10.480
about making really important decisions really quickly on very limited information. I mean,
link |
01:02:15.280
that's what biology is all about. You have to act, you have to act now. The stakes are very high,
link |
01:02:19.600
and you don't know most of what you need to know to be perfect. And so there's not even an attempt
link |
01:02:24.080
to be perfect or to get it right in any sense. There are just things like active inference,
link |
01:02:29.920
minimize surprise, optimize some efficiency and some things like this that guides the whole
link |
01:02:37.120
business. I mentioned to you offline that somebody who's a fan of your work is Andrej Karpathy.
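The "act now, minimize surprise" picture a moment earlier can be sketched as a crude feedback loop: a controller with a setpoint, noisy partial observations, and no attempt at a perfect world model, just per-step error reduction. All the numbers below are illustrative assumptions:

```python
import random

# A crude "minimize surprise" loop: the system has a setpoint it
# expects (say, a preferred membrane voltage), noisy limited sensing,
# and one knob. It never tries to be perfect -- it just acts, every
# step, to shrink the gap between expectation and observation.

random.seed(0)
setpoint = -70.0          # expected state (e.g. resting potential, mV)
state = -30.0             # perturbed starting state
gain = 0.3                # how aggressively the controller acts

surprise_log = []
for step in range(100):
    observation = state + random.gauss(0.0, 1.0)   # noisy, limited sensing
    error = observation - setpoint                 # the "surprise"
    surprise_log.append(abs(error))
    state -= gain * error                          # act on it immediately

print(f"initial surprise: {surprise_log[0]:.1f}")
print(f"final surprise:   {surprise_log[-1]:.1f}")
```

The loop never eliminates surprise (the sensing noise guarantees a floor); it just keeps it small enough to act on, which is the trade-off being described.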
link |
01:02:44.640
And he's, amongst many things, also writes occasionally a great blog. He came up with
link |
01:02:52.720
this idea, I don't know if he coined the term, but of software 2.0, where the programming is
link |
01:03:00.720
done in the space of configuring these artificial neural networks. Is there some sense in which that
link |
01:03:08.240
would be the future of programming for us humans, where we're less doing like Python like programming
link |
01:03:16.400
and more... What would that look like? Basically tuning the hyperparameters of something
link |
01:03:25.680
akin to a biological system and watching it go and adjusting it and creating some kind of feedback
link |
01:03:33.360
loop within the system so it corrects itself. And then we watch it over time accomplish the goals
link |
01:03:40.800
we want it to accomplish. Is that kind of the dream of the dogs that you described in the Nature
link |
01:03:46.880
paper? Yeah. I mean, that's what you just painted is a very good description of our efforts at
link |
01:03:54.960
regenerative medicine as a kind of somatic psychiatry. So the idea is that you're not trying
link |
01:04:01.040
to micromanage. I mean, think about the limitations of a lot of the medicines today. We try to
link |
01:04:07.920
interact down at the level of pathways. So we're trying to micromanage it. What's the problem? Well,
link |
01:04:14.560
one problem is that for almost every medicine other than antibiotics, once you stop it, the
link |
01:04:20.800
problem comes right back. You haven't fixed anything. You were addressing symptoms. You
link |
01:04:23.680
weren't actually curing anything, again, except for antibiotics. That's one problem. The other
link |
01:04:28.560
problem is you have a massive amount of side effects because you were trying to interact at the lowest
link |
01:04:33.600
level. It's like, I'm going to try to program this computer by changing the melting point of
link |
01:04:40.400
copper. Maybe you can do things that way, but my God, it's hard to program at the hardware level.
link |
01:04:46.640
So what I think we're starting to understand is that, and by the way, this goes back to what you
link |
01:04:53.360
were saying before about that we could have access to our internal state. So people who practice that
link |
01:04:58.800
kind of stuff, so yoga and biofeedback, those are all the people that uniformly will say
link |
01:05:04.000
things like, well, the body has an intelligence and this and that. Those two sets overlap perfectly
link |
01:05:08.480
because that's exactly right. Because once you start thinking about it that way, you realize that
link |
01:05:13.600
the better locus of control is not always at the lowest level. This is why we don't all program
link |
01:05:18.480
with a soldering iron. We take advantage of the high-level
link |
01:05:24.720
intelligences that are there, which means trying to figure out, okay, which of your tissues can
link |
01:05:28.960
learn? What can they learn? Why is it that certain drugs stop working after you take them for a while
link |
01:05:35.200
with this habituation, right? And so can we understand habituation, sensitization, associative
link |
01:05:40.160
learning, these kinds of things in chemical pathways? I think we're going to have a completely
link |
01:05:44.400
different way of using drugs and of
link |
01:05:49.200
medicine in general when we start focusing on the goal states and on the intelligence of our
link |
01:05:54.560
subsystems as opposed to treating everything as if the only path was micromanagement from
link |
01:05:59.040
chemistry upwards. Well, can you speak to this idea of somatic psychiatry? What are somatic cells?
link |
01:06:05.200
How do they form networks that use bioelectricity to have memory and all those kinds of things?
link |
01:06:11.760
Yeah. What are somatic cells like basics here? Somatic cells just means the cells of your body.
link |
01:06:16.160
Soma just means body, right? So somatic cells are just the... I'm not even specifically making a
link |
01:06:20.000
distinction between somatic cells and stem cells or anything like that. I mean, basically all the
link |
01:06:23.920
cells in your body, not just neurons, but all the cells in your body. They form electrical
link |
01:06:28.400
networks during embryogenesis, during regeneration. What those networks are doing
link |
01:06:33.280
in part is processing information about what our current shape is and what the goal shape is.
link |
01:06:39.600
Now, how do I know this? Because I can give you a couple of examples. One example is when we started
link |
01:06:45.120
studying this, we said, okay, here's a planarian. A planarian is a flatworm. It has one head and one
link |
01:06:50.400
tail normally. And the amazing... There's several amazing things about planaria, but basically they
link |
01:06:55.200
kind of... I think planaria hold the answer to pretty much every deep question of life.
link |
01:07:00.960
For one thing, they're similar to our ancestors. So they have true symmetry. They have a true
link |
01:07:04.960
brain. They're not like earthworms. They're a much more advanced life form. They have lots
link |
01:07:08.320
of different internal organs, but they're these little things, maybe one to two centimeters
link |
01:07:12.240
in size. They have a head and a tail. And the first thing is planaria are
link |
01:07:17.680
immortal. So they do not age. There's no such thing as an old planarian. So that right there
link |
01:07:22.320
tells you that these theories of thermodynamic limitations on lifespan are wrong. It's not that
link |
01:07:27.680
well, over time, everything degrades. No, planaria can keep it going for... how long have
link |
01:07:33.280
they been around? 400 million years. So the planaria in our lab are actually in physical
link |
01:07:38.960
continuity with planaria that were here 400 million years ago. So there's planaria that
link |
01:07:43.600
have lived that long, essentially. What does physical continuity mean? Because what they do
link |
01:07:49.280
is they split in half. The way they reproduce is they split in half. So the planaria, the back end
link |
01:07:54.560
grabs the petri dish, the front end takes off and they rip themselves in half. But isn't there some
link |
01:07:59.680
sense in which you are a physical continuation? Yes, except that we go through a bottleneck of one
link |
01:08:07.600
cell, which is the egg. They do not. I mean, they can; there are certain planaria that do. Got it. So we go
link |
01:08:11.760
through a very ruthless compression process and they don't. Yes. Like an autoencoder, you know,
link |
01:08:17.200
sort of squashed down to one cell and then back out. These guys just tear themselves in half.
link |
01:08:22.880
And so the other amazing thing about them is they regenerate. So you can cut them into pieces.
link |
01:08:26.640
The record is, I think, 276 or something like that by Thomas Hunt Morgan. And each piece regrows a
link |
01:08:32.560
perfect little worm. They know exactly, every piece knows exactly what's missing, what needs
link |
01:08:36.960
to happen. In fact, if you chop it in half, as it grows the other half, the original tissue shrinks
link |
01:08:45.360
so that when the new tiny head shows up, they're proportional. So it keeps perfect proportion.
link |
01:08:50.080
If you starve them, they shrink. If you feed them again, they expand. Their control,
link |
01:08:54.160
their anatomical control is just insane. Somebody cut them into over 200 pieces.
link |
01:08:58.960
Yeah. Thomas Hunt Morgan did. Hashtag science. Amazing. And maybe more. I mean,
link |
01:09:03.520
they didn't have antibiotics back then. I bet he lost some due to infection. I bet it's
link |
01:09:06.720
actually more than that. I bet you could do more than that. Humans can't do that.
link |
01:09:11.760
Well, yes. I mean, again, true, except that... Maybe you can at the embryonic level.
link |
01:09:16.960
Well, that's the thing, right? So when I talk about this, I say, just remember that
link |
01:09:21.120
as amazing as it is to grow a whole planarian from a tiny fragment,
link |
01:09:24.880
half of the human population can grow a full body from one cell. So development is really,
link |
01:09:30.640
you can look at development as just an example of regeneration.
link |
01:09:34.240
Yeah. We'll talk about regenerative medicine, but there's some sense of what it would
link |
01:09:39.600
be like in that world in, like, 500 years, where I can just go regrow a hand.
link |
01:09:46.320
Yep. Given time. It takes time to grow large things.
link |
01:09:49.920
For now.
link |
01:09:50.560
Yeah, I think so. I think.
link |
01:09:51.840
You can probably... Why not accelerate? Oh, biology takes its time?
link |
01:09:56.800
I'm not going to say anything is impossible, but I don't know of a way to accelerate these
link |
01:10:00.080
processes. I think it's possible. I think we are going to be regenerative, but I don't know of a
link |
01:10:04.000
way to make it faster.
link |
01:10:04.800
I can just imagine people a few centuries from now being like, well, they used to have
link |
01:10:10.080
to wait a week for the hand to regrow. It's like when the microwave was invented. You can toast
link |
01:10:17.920
your... What's that called when you put cheese on toast? It's delicious, is all I know. I'm
link |
01:10:27.360
blanking. Anywho. All right. So why were we talking about the magical planaria that
link |
01:10:33.280
hold the mystery of life?
link |
01:10:34.320
Yeah. So the reason we're talking about planaria is not only are they immortal,
link |
01:10:37.680
not only do they regenerate every part of the body, they generally don't get cancer,
link |
01:10:43.920
which we can talk about why that's important. They're smart. They can learn things. You can
link |
01:10:47.360
train them. And it turns out that if you train a planarian and then cut its head off, the tail
link |
01:10:52.320
will regenerate a brand new brain that still remembers the original information.
link |
01:10:56.000
Do they have a biological network going on or no?
link |
01:10:58.960
Yes.
link |
01:10:59.280
So their somatic cells are forming a network. And that's what you mean by a true brain? What's the
link |
01:11:05.200
requirement for a true brain?
link |
01:11:07.200
Like everything else, it's a continuum, but a true brain has certain characteristics as far as the
link |
01:11:12.080
density, like a localized density of neurons that guides behavior.
link |
01:11:15.680
In the head.
link |
01:11:16.240
Exactly. Exactly. If you cut their head off, the tail doesn't do anything. It just sits there
link |
01:11:22.080
until a new brain regenerates. They have all the same neurotransmitters that you and I have.
link |
01:11:28.000
But here's why we're talking about them in this context. So here's your planaria. You cut off the
link |
01:11:32.720
head. You cut off the tail. You have a middle fragment. That middle fragment has to make one
link |
01:11:35.840
head and one tail. How does it know how many of each to make? And where do they go? How come it
link |
01:11:40.080
doesn't switch? How come, right? So we did a very simple thing. And we said, okay, let's make the
link |
01:11:46.960
hypothesis that there's a somatic electrical network that remembers the correct pattern,
link |
01:11:52.400
and that what it's doing is recalling that memory and building to that pattern.
link |
01:11:55.760
So what we did was we used a way to visualize electrical activity in these cells, right? It's a
link |
01:12:01.920
variant of what people use to look for electricity in the brain. And we saw that that fragment has a
link |
01:12:08.080
very particular electrical pattern. You can literally see it once we developed the technique.
link |
01:12:12.720
It has a very particular electrical pattern that shows you where the head and the tail goes,
link |
01:12:17.920
right? You can just see it. And then we said, okay, well now let's test the idea that that's
link |
01:12:22.240
a memory that actually controls where the head and the tail goes. Let's change that pattern. So
link |
01:12:25.920
basically, incept the false memory. And so what you can do is you can do that in many different
link |
01:12:29.680
ways. One way is with drugs that target ion channels. And so you pick these drugs
link |
01:12:34.640
and you say, okay, I'm going to do it so that instead of this one head, one tail electrical
link |
01:12:39.600
pattern, you have a two headed pattern, right? You're just editing the electrical information
link |
01:12:43.440
in the network. When you do that, guess what the cells build? They build a two headed worm.
link |
01:12:47.520
And the coolest thing about it, no genetic changes. So we haven't touched the genome.
link |
01:12:51.040
The genome is totally wild type. But the amazing thing about it is that when you take these two
link |
01:12:54.320
headed animals and you cut them into pieces again, some of those pieces will continue to
link |
01:12:59.520
make two headed animals. So that information, that memory, that electrical circuit, not only does it
link |
01:13:05.840
hold the information for how many heads, not only does it use that information to tell the cells
link |
01:13:09.920
what to do to regenerate, but it stores it. Once you've reset it, it keeps it. And we can go back,
link |
01:13:14.320
we can take a two headed animal and put it back to one headed. So now imagine, so there's a couple
link |
01:13:18.960
of interesting things here that have implications for understanding what genomes are
link |
01:13:22.720
and things like that. Imagine I take this two headed animal. Oh, and by the way, when they
link |
01:13:27.200
reproduce, when they tear themselves in half, you still get two headed animals. So imagine I take
link |
01:13:31.360
them and I throw them in the Charles River over here. So 100 years later, some scientists come
link |
01:13:34.640
along and they scoop up some samples and they go, oh, there's a single headed form and a two headed
link |
01:13:38.640
form. Wow, a speciation event. Cool. Let's sequence the genome and see why, what happened. The genomes
link |
01:13:43.600
are identical. There's nothing wrong with the genome. So if you ask the question,
link |
01:13:47.040
this goes back to your very first question: where do body plans come from, right? How does
link |
01:13:51.360
the planarian know how many heads it's supposed to have? Now it's interesting because you could
link |
01:13:55.600
say DNA, but as it turns out, the DNA produces a piece of hardware
link |
01:14:01.840
that by default says one head the way that when you turn on a calculator, by default, it's a zero
link |
01:14:07.520
every single time, right? When you turn it on, it just says zero, but it's a programmable calculator
link |
01:14:11.120
as it turns out. So once you've changed that next time, it won't say zero. It'll say something else
link |
01:14:16.000
and the same thing here. So you can make one headed, two headed, you can make no
link |
01:14:19.120
headed worms. We've done some other things along these lines, some other really weird constructs.
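The programmable-calculator analogy can be put as a tiny toy model (my illustration, not code or data from Levin's lab; the class and numbers are made up): the genome is fixed hardware that supplies a default setpoint, while regeneration reads a rewritable memory that fragments inherit.

```python
# Toy sketch of the "programmable calculator" idea: a hardware default
# ("one head") plus a rewritable electrical memory that regeneration reads.

DEFAULT_HEADS = 1  # hardware default, like a calculator booting to zero

class Planarian:
    def __init__(self, setpoint=DEFAULT_HEADS):
        self.genome = "wild type"  # never edited in these experiments
        self.setpoint = setpoint   # stored electrical pattern: how many heads

    def regenerate_fragment(self):
        # A cut piece inherits the stored memory, not the hardware default.
        return Planarian(setpoint=self.setpoint)

    def heads(self):
        # Cells build whatever the electrical memory currently says.
        return self.setpoint

worm = Planarian()                  # boots to the default: one head
worm.setpoint = 2                   # "incept a false memory" via ion-channel drugs
piece = worm.regenerate_fragment()  # cut it into pieces...
print(piece.heads())                # 2: pieces keep building two-headed worms
print(piece.genome)                 # wild type: no genetic change anywhere
```

The point of the sketch is only that the persistent state lives in a separate, rewritable register, so copying a fragment copies the altered memory rather than resetting to the genomic default.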
link |
01:14:24.000
So again, it's really important. The hardware
link |
01:14:28.640
software distinction is really important because the hardware is essential because without proper
link |
01:14:33.920
hardware, you're never going to get to the right physiology of having that memory. But once you
link |
01:14:38.080
have it, it doesn't fully determine what the information is going to be. You can have other
link |
01:14:42.320
information in there and it's reprogrammable by us, by bacteria, by various parasites, probably
link |
01:14:47.360
things like that. The other amazing thing about these planaria, think about this: most animals,
link |
01:14:52.480
when we get a mutation in our bodies, our children don't inherit it, right? So you can go on, you
link |
01:14:56.480
could run around for 50, 60 years getting mutations. Your children don't have those mutations
link |
01:15:00.720
because we go through the egg stage. Planaria tear themselves in half and that's how they reproduce.
link |
01:15:05.120
So for 400 million years, they keep every mutation that they've had that doesn't kill the cell that
link |
01:15:10.080
it's in. So when you look at these planaria, their bodies are what's called mixoploid, meaning that
link |
01:15:14.640
every cell might have a different number of chromosomes. They look like a tumor. If you look
link |
01:15:17.840
at the genome, it's an incredible mess because they accumulate all this stuff.
link |
01:15:22.720
And yet, their body structure: they are the best regenerators on the planet. Their anatomy is
link |
01:15:28.240
rock solid, even though their genome is full of all kinds of crap. So this is kind of a scandal,
link |
01:15:32.720
right? You know, we learn that genomes determine
link |
01:15:37.520
your body. Okay, then why does the animal with the worst genome have the best anatomical control, the most
link |
01:15:41.520
cancer resistance, the most regeneration, right? Really, we're just beginning to understand
link |
01:15:46.080
this relationship between the genomically determined hardware and the software. And by the way,
link |
01:15:50.720
just as of a couple of months ago, I think I now somewhat understand why this is,
link |
01:15:55.440
but it's really a major, you know, a major puzzle.
link |
01:15:57.840
I mean, that really throws a wrench into the whole nature versus nurture because you usually
link |
01:16:05.280
associate electricity with the nurture and the hardware with the nature.
link |
01:16:13.360
And there's just this weird integrated mess that propagates through generations.
link |
01:16:19.360
Yeah. It's much more fluid. It's much more complex. You can imagine what's
link |
01:16:25.040
happening here. Just imagine the evolution of an animal like this,
link |
01:16:29.200
and this goes back to this multi scale competency, right? Imagine that you
link |
01:16:33.280
have an animal where its tissues have some degree
link |
01:16:38.800
of multi scale competency. So for example, like we saw in the tadpole, you know,
link |
01:16:42.960
if you put an eye on its tail, they can still see out of that eye, right? You know,
link |
01:16:46.240
there's incredible plasticity. So if you have an animal and it comes up for selection
link |
01:16:50.400
and the fitness is quite good, evolution doesn't know whether the fitness is good because the
link |
01:16:56.880
genome was awesome or because the genome was kind of junky but the competency made up for it,
link |
01:17:01.600
right? And things kind of ended up good. So what that means is that the more competency you have,
link |
01:17:06.160
the harder it is for selection to pick the best genomes, it hides information, right? And so that
link |
01:17:11.520
means that what happens, you know, is that
link |
01:17:16.640
all the hard work is being done to increase the competency because it's harder and harder to see
link |
01:17:21.200
the genomes. And so I think in planaria, what happened is that there's this runaway phenomenon
link |
01:17:25.760
where all the effort went into the algorithm. We know you've got a crappy genome. We can't
link |
01:17:31.040
clean it up, we can't keep track of it. So what's going to happen is that what
link |
01:17:35.360
survives are the algorithms that can create a great worm no matter what the genome is. So
link |
01:17:40.160
everything went into the algorithm, which of course then reduces the pressure on
link |
01:17:44.080
keeping a clean genome, you know. So there's this idea, and different animals have
link |
01:17:49.040
this to different levels, but this idea of putting energy into an algorithm that
link |
01:17:54.720
does not overtrain on priors, right? It can't assume, I mean, I think biology is this way in
link |
01:17:59.040
general, evolution doesn't take the past too seriously because it makes these basically
link |
01:18:04.160
problem solving machines as opposed to machines built to deal with
link |
01:18:08.800
exactly what happened last time. Yeah. Problem solving versus memory recall. So a little memory,
link |
01:18:14.480
but a lot of problem solving. I think so. Yeah. In many cases, yeah. Problem solving.
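That selection argument can be made concrete with a toy simulation (my own sketch with made-up numbers, not a model from the conversation or the literature): if each individual's tissues can repair some random fraction of its genomic defects, then comparing two individuals on fitness gets worse and worse at identifying which one actually has the better genome.

```python
# Toy sketch: multi-scale competency "hiding" genome quality from selection.
# Fitness is phenotypic; competency randomly repairs part of the genome's
# shortfall, so high competency blurs the genome ranking selection sees.
import random

random.seed(0)

def phenotypic_fitness(genome_quality, max_competency):
    # genome_quality in [0, 1]; tissues repair a random fraction
    # (up to max_competency) of the defect (1 - genome_quality).
    repair = random.uniform(0.0, max_competency)
    return genome_quality + repair * (1.0 - genome_quality)

def ranking_accuracy(max_competency, trials=20_000):
    # How often does the individual with the better genome also win on fitness?
    correct = 0
    for _ in range(trials):
        g1, g2 = random.random(), random.random()
        f1 = phenotypic_fitness(g1, max_competency)
        f2 = phenotypic_fitness(g2, max_competency)
        correct += (f1 > f2) == (g1 > g2)
    return correct / trials

no_competency = ranking_accuracy(0.0)    # selection sees genomes perfectly
high_competency = ranking_accuracy(1.0)  # competency masks genome quality

print(no_competency)    # 1.0: the better genome always wins the comparison
print(high_competency)  # noticeably below 1.0: selection can't tell anymore
```

With the repair ceiling at zero, the better genome always wins the fitness comparison; as the ceiling rises, the comparison becomes increasingly unreliable, which is the "hiding information" effect described above, and it is exactly the regime where investing in the repair algorithm pays off more than cleaning up the genome.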
link |
01:18:22.240
I mean, it's incredible that those kinds of systems are able to be constructed,
link |
01:18:25.600
um, especially how much they contrast with the way we build problem solving systems in the AI world.
link |
01:18:32.480
Um, back to Xenobots. I'm not sure if we ever described how Xenobots are built, but
link |
01:18:39.600
I mean, you have a paper titled "Biological Robots: Perspectives on an Emerging Interdisciplinary
link |
01:18:45.280
Field." And in the beginning you mentioned that the word Xenobots is, like, controversial.
link |
01:18:51.360
Do you guys get in trouble for using Xenobots or what? Do people not like the word Xenobots?
link |
01:18:57.280
Are you trying to be provocative with the word Xenobots versus biological robots?
link |
01:19:02.400
I don't know. Is there some drama that we should be aware of? There's a little bit of drama. Uh,
link |
01:19:07.200
I think, I think the drama is basically related to people, um, having very fixed ideas about what
link |
01:19:15.200
terms mean. And I think in many cases, these ideas are completely out of date with where science
link |
01:19:22.960
is now. And for sure they're out of date with what's going to be. I mean, these
link |
01:19:28.720
concepts are not going to survive the next couple of decades. So if you ask a person
link |
01:19:33.760
and including, um, you know, a lot of people in biology who kind of want to keep a sharp
link |
01:19:38.080
distinction between biologicals and robots, right? See, what's a robot? Well, a robot,
link |
01:19:42.160
it comes out of a factory. It's made by humans. It is boring, meaning that you can predict
link |
01:19:46.400
everything it's going to do. It's made of metal and certain other inorganic materials. Living
link |
01:19:50.640
organisms are magical. They arise, right? And so on. So these distinctions,
link |
01:19:54.400
I think, were never good, but they're going to be
link |
01:20:00.720
completely useless going forward. And so there's a couple of papers: that's one paper,
link |
01:20:05.120
and there's another one that Josh Bongard and I wrote where we really attack the terminology.
link |
01:20:09.520
And we say these binary categories are based on very nonessential, kind of surface
link |
01:20:16.880
limitations of technology and imagination that were true before, but they've got to go. And so
link |
01:20:22.560
we call them Xenobots. Xeno for Xenopus laevis, the frog that
link |
01:20:27.360
these guys are made of. But we think it's an example of a biobot technology,
link |
01:20:32.320
because ultimately, once we understand how to communicate and manipulate
link |
01:20:39.680
the inputs to these cells, we will be able to get them to build whatever we want them to build.
link |
01:20:45.680
And that's robotics, right? It's the rational construction of machines that have
link |
01:20:49.360
useful purposes. I absolutely think that this is a robotics platform, whereas some biologists
link |
01:20:54.560
don't, but it's built in a way that, uh, all the different components are doing their own computation.
link |
01:21:02.080
So in a way that we've been talking about, so you're trying to do top down control in that
link |
01:21:06.000
biological system. And in the future, all of this will merge together because
link |
01:21:09.680
of course at some point we're going to throw in synthetic biology circuits, right? New,
link |
01:21:13.840
you know, new transcriptional circuits to get them to do new things. Of course we'll throw some of
link |
01:21:17.200
that in, but we specifically stayed away from all of that because in the first few papers,
link |
01:21:21.680
and there's some more coming down the pike that are, I think, going to be pretty dynamite,
link |
01:21:25.600
that we want to show what the native cells are made of. Because what happens is,
link |
01:21:30.720
you know, if you engineer the heck out of them, right, if we were to put in new, you know,
link |
01:21:33.920
new transcription factors and some new metabolic machinery and whatever, people will say, well,
link |
01:21:38.000
okay, you engineered this and you made it do whatever, and fine. I wanted to show,
link |
01:21:44.000
and the whole team wanted to show, the plasticity and the intelligence in the biology.
link |
01:21:50.240
What does it do that's surprising before you even start manipulating the hardware in that way?
link |
01:21:55.280
Yeah. Don't try to over control the thing. Let it flourish, the full beauty of the
link |
01:22:02.560
biological system. Why Xenopus laevis? How do you pronounce it? The frog.
link |
01:22:07.600
Xenopus laevis. Yeah, it's very popular.
link |
01:22:09.360
Why this frog?
link |
01:22:10.240
It's been used since, I think, the fifties. It's just very convenient because,
link |
01:22:15.360
you know, we keep the adults in this very fine frog habitat. They lay eggs. They
link |
01:22:19.280
lay tens of thousands of eggs at a time. Um, the eggs develop right in front of your eyes. It's the
link |
01:22:24.880
most magical thing you can see, because normally, you know, if you were to deal
link |
01:22:29.440
with mice or rabbits or whatever, you don't see the early stages, right? Cause everything's inside
link |
01:22:32.960
the mother. Here, everything's in a Petri dish at room temperature. So you just have an egg,
link |
01:22:36.640
it's fertilized and you can just watch it divide and divide and divide. And all the organs
link |
01:22:40.400
form, you can just see it. And at that point, the community has developed lots of
link |
01:22:44.960
different tools for understanding what's going on and also for manipulating, right? So
link |
01:22:50.000
people use it for, you know, understanding birth defects and neurobiology
link |
01:22:54.160
and cancer and immunology. So you get the whole embryogenesis in the Petri dish.
link |
01:23:00.160
That's so cool to watch. Are there videos of this? Oh yeah.
link |
01:23:03.520
Yeah, there's amazing videos online. I mean, mammalian embryos are super cool
link |
01:23:08.160
too. For example, monozygotic twins are what happens when you cut a mammalian embryo in half.
link |
01:23:12.560
You don't get two half bodies. You get two perfectly normal bodies because it's a
link |
01:23:15.760
regeneration event, right? Development is just a kind of regeneration, really.
link |
01:23:19.920
And why this particular frog? It's just 'cause that's what they were using in the fifties and...
link |
01:23:25.760
It breeds well, you know, it's easy to raise in the laboratory, and
link |
01:23:32.000
it's very prolific, and basically for decades people have been developing tools.
link |
01:23:36.480
Some people use other frogs, but I have to say, this is important.
link |
01:23:40.800
Xenobots are fundamentally not anything about frogs. So I can't say too much about this
link |
01:23:46.080
cause it's not published and peer reviewed yet, but we've made Xenobots out of other things that
link |
01:23:50.400
have nothing to do with frogs. This is not a frog phenomenon. We started with
link |
01:23:54.640
frog because it's so convenient, but this plasticity is not a frog thing. You know, it's
link |
01:23:59.040
not related to the fact that they're frogs. What happens when you kiss it? Does it turn
link |
01:24:02.880
into a prince? No. Or a princess? Which way? Uh, prince. Yeah, it should be a prince.
link |
01:24:07.120
Yeah. That's an experiment that I don't believe we've done. And if we have, I don't
link |
01:24:10.720
want to know. If you want to collaborate, I can take the lead on that effort. Okay, cool.
link |
01:24:17.680
How do the cells coordinate? Let's focus in on just the embryogenesis. So there's one cell,
link |
01:24:24.320
and it divides. Doesn't it have to be very careful about what each cell starts doing once they divide?
link |
01:24:32.240
Yes. And like, when there's three of them, it's like the cofounders or whatever,
link |
01:24:37.840
like, well, like slow down, you're responsible for this. When do they become specialized and
link |
01:24:44.320
how do they coordinate that specialization? So, so this is the basic science of developmental
link |
01:24:49.440
biology. There's a lot known about all of that, but I'll tell you what I think is kind
link |
01:24:55.120
of the most important part, which is, yes, it's very important who does what. However,
link |
01:25:01.200
because, going back to this issue of why I made this claim that biology doesn't take the past
link |
01:25:07.440
too seriously. And what I mean by that is it doesn't assume that everything is the way it's,
link |
01:25:12.560
it's expected to be, right. And here's an example of that. This was
link |
01:25:17.200
an old experiment going back to the forties. Basically, imagine
link |
01:25:21.280
it's a newt, a salamander, and it's got these little tubules that go to the kidneys, right? It's
link |
01:25:25.760
a little tube. Take a cross section of that tube. You see eight to 10 cells that have
link |
01:25:30.080
cooperated to make this little tube in cross section, right? So one amazing
link |
01:25:34.880
thing you can do is, you can mess with a very early cell division to make the cells
link |
01:25:41.200
gigantic, bigger. You can make them different sizes. You can force them to be different
link |
01:25:44.560
sizes. So if you make the cells different sizes, the whole newt is still the same size.
link |
01:25:50.160
So if you take a cross section through that tubule, instead of eight to 10
link |
01:25:53.840
cells, you might have four or five or you might have, you know, three until you make the cells so
link |
01:25:59.200
enormous that one single cell wraps around itself and gives you that same large scale structure
link |
01:26:06.480
with a completely different molecular mechanism. So now instead of cell to cell communication to
link |
01:26:11.120
make a tubule, instead of that, it's one cell using the cytoskeleton to bend itself around.
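A crude way to see that size invariance is a toy calculation (my illustration with arbitrary numbers, not the actual newt data): hold the tubule's target circumference fixed at the large scale, and let the cell count be whatever the current cell size demands.

```python
# Toy sketch: the anatomical setpoint (tubule circumference) is fixed at the
# large scale; how many cells implement it falls out of current cell size.
import math

TARGET_CIRCUMFERENCE = 80.0  # arbitrary units, standing in for the setpoint

def cells_in_cross_section(cell_width):
    # Bottoms out at 1: a single giant cell wrapping around itself, a
    # completely different mechanism serving the same large-scale structure.
    return max(1, math.ceil(TARGET_CIRCUMFERENCE / cell_width))

print(cells_in_cross_section(10.0))   # normal-sized cells: 8 cooperate
print(cells_in_cross_section(20.0))   # forced-bigger cells: only 4
print(cells_in_cross_section(100.0))  # one enormous cell wraps around: 1
```

The design point is that only the large-scale target is specified; the number of parts, and even the molecular mechanism at the bottom, is free to vary.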
link |
01:26:15.840
So think about what that means in the service of a large scale, talk about top down control,
link |
01:26:20.400
right? In the service of a large scale anatomical feature, different molecular mechanisms get
link |
01:26:24.960
called up. So now think about this: you're a newt cell trying to make an embryo.
link |
01:26:30.320
If you had a fixed idea of who was supposed to do what, you'd be screwed because now your cells
link |
01:26:34.480
are gigantic. Nothing would work. There's an incredible tolerance for changes in the size of
link |
01:26:40.240
the parts and the amount of DNA in those parts. All sorts of stuff. Life
link |
01:26:45.280
is highly interoperable. You can put electrodes in there and you can put weird nanomaterials. It
link |
01:26:49.200
still works. This is that problem solving action, right? It's able to do what it
link |
01:26:54.400
needs to do, even when circumstances change. That is, you know, the hallmark of intelligence,
link |
01:27:00.160
right? William James defined intelligence as the ability to get to the same goal by different
link |
01:27:04.080
means. That's this: you get to the same goal by completely different means. And so,
link |
01:27:08.960
why am I bringing this up? Just to say that, yeah, it's important for the cells to do the right
link |
01:27:12.640
stuff, but they have incredible tolerances for things not being what you expect and to still
link |
01:27:17.520
get their job done. So, you know, all of these things are not hardwired.
link |
01:27:23.840
There are organisms that might be hardwired. For example, the nematode C. elegans: in that organism,
link |
01:27:28.880
every cell is numbered, meaning that every C. elegans has exactly the same number of cells
link |
01:27:32.800
as every other C. elegans. They're all in the same place. They all divide. There's literally a map
link |
01:27:36.000
of how it works. In that sort of system, it's much more cookie cutter,
link |
01:27:40.880
but most organisms are incredibly plastic in that way. Is there something particularly
link |
01:27:47.840
magical to you about the whole developmental biology process? Um, is there something you
link |
01:27:53.680
could say, 'cause you just said it, they're very good at accomplishing the goal of the job they
link |
01:27:58.080
need to do, the competency thing. But you get a freaking organism from one cell. It's like,
link |
01:28:06.640
I mean, it's very hard to intuit that whole process, to even think about reverse engineering
link |
01:28:14.000
that process. Right, very hard. To the point where I sometimes ask my
link |
01:28:19.760
students to do this thought experiment. Imagine you were shrunk down to the scale
link |
01:28:23.680
of a single cell and you were in the middle of an embryo and you were looking around at what's going
link |
01:28:27.120
on, and the cells are running around, some cells are dying, and, you know, every time you look,
link |
01:28:30.400
it's kind of a different number of cells for most organisms. And so I think that if you didn't know
link |
01:28:35.520
what embryonic development was, you would have no clue that what you're seeing is always going to
link |
01:28:40.080
make the same thing. Never mind knowing what that is. Never mind being able to say, even
link |
01:28:44.560
with full genomic information, being able to say, what the hell are they building? We have no way
link |
01:28:48.080
to do that. But just even to guess that, wow, the outcome of all this activity is that it's
link |
01:28:54.720
always going to build the same thing. The imperative to create the final you
link |
01:29:00.080
as you are now is there already. So you start from the same embryo,
link |
01:29:06.240
you create a very similar organism. Yeah. Except for cases like the Xenobots, when you give them
link |
01:29:14.480
a different environment, they come up with a different way to be adaptive in that environment.
link |
01:29:18.240
But overall, I mean, I think, to kind of summarize it,
link |
01:29:24.080
I think what evolution is really good at is creating hardware that has a very stable baseline
link |
01:29:31.520
mode, meaning that left to its own devices, it's very good at doing the same thing. But it has a
link |
01:29:36.880
bunch of problem solving capacity, such that if any assumptions don't hold, if your cells are
link |
01:29:41.360
a weird size, or you get the wrong number of cells, or, you know, somebody stuck
link |
01:29:45.040
an electrode halfway through the body, whatever, it will still get most of what it needs to do done.
link |
01:29:52.400
You've talked about the magic and the power of biology here. If we look at the human brain,
link |
01:29:57.760
how special is the brain in this context? You're kind of minimizing the importance of the brain
link |
01:30:03.200
or lessening it. We think all the special computation happens in the brain,
link |
01:30:08.640
everything else is like the help. You're kind of saying that the whole thing
link |
01:30:14.960
is doing computation. But nevertheless, how special is the human brain in this full context of
link |
01:30:22.160
biology? Yeah, I mean, look, there's no getting away from the fact that the human brain allows
link |
01:30:27.680
us to do things that we could not do without it. You can say the same thing about the liver.
link |
01:30:31.920
Yeah, no, this is true. And so, you know, my goal is not... No, you're right. My goal
link |
01:30:37.680
is just being polite to the brain right now. Well, being a politician, like, listen,
link |
01:30:42.320
everybody has a role. Yeah, it's a very important role. That's right. We have to
link |
01:30:46.480
acknowledge the importance of the brain, you know, there are more than enough people who are
link |
01:30:52.480
cheerleading the brain, right? So I don't feel like anything I say is going to reduce people's
link |
01:30:58.720
excitement about the human brain. And so I emphasize other things. I don't think it
link |
01:31:04.160
gets too much credit. I think other things don't get enough credit. I think the brain is the human
link |
01:31:08.880
brain is incredible and special and all that. I think other things need more credit. And I
link |
01:31:13.760
also think, and I'm sort of this way about everything, I don't like binary categories;
link |
01:31:19.360
for almost anything, I like a continuum. And the thing about the human brain is that by
link |
01:31:24.880
accepting it as some kind of an important category or essential thing, we end
link |
01:31:32.080
up with all kinds of weird pseudo problems and conundrums. So for example, when we talk about,
link |
01:31:38.320
you know, ethics and other things like that, and, you
link |
01:31:44.880
know, this idea that surely, if we look out into the universe, we don't believe that
link |
01:31:50.000
this human brain is the only way to be sentient, right, and to have high
link |
01:31:54.320
level cognition. I just can't even wrap my mind around this idea that that is the only way
link |
01:31:59.360
to do it. No doubt there are other architectures made of completely different principles
link |
01:32:04.160
that achieve the same thing. And once we believe that, then that tells us something important. It
link |
01:32:09.760
tells us that things that are not quite human brains or chimeras of human brains and other
link |
01:32:15.520
tissue, or human brains or other kinds of brains in novel configurations, or things that are sort
link |
01:32:20.800
of brains, but not really, or plants or embryos or whatever, might also have important cognitive
link |
01:32:26.720
status. So that's the only thing I think we have to be really careful about treating the human
link |
01:32:32.000
brain as if it was some kind of like sharp binary category. You know, you are or you aren't. I don't
link |
01:32:37.680
believe that exists. So when we look out at all the beautiful variety of human brains,
link |
01:32:44.960
semi biological architectures out there in the universe, how many intelligent alien civilizations
link |
01:32:52.880
do you think are out there? Boy, I have no expertise in that whatsoever. You haven't met
link |
01:32:59.200
any? I have met the ones we've made. Exactly. In some sense, with synthetic
link |
01:33:06.800
biology, are you not creating aliens? I absolutely think so because look, all of life,
link |
01:33:12.800
all standard model systems are an N of 1 course of evolution on Earth, right? And trying
link |
01:33:19.840
to make conclusions about biology from looking at life on Earth is like testing your theory on the
link |
01:33:26.880
same data that generated it. It's all kind of locked in. So we absolutely have to
link |
01:33:32.720
create novel examples that have no history on Earth. You know, xenobots have no
link |
01:33:40.240
history of selection to be a good xenobot. The cells have selection for various things, but the
link |
01:33:44.560
xenobot itself never existed before. And so we can make chimeras, you know, we make frogolotls
link |
01:33:48.960
that are sort of half frog, half axolotl. You can make all sorts of hybrots, right, constructions
link |
01:33:53.520
of living tissue with robots and whatever. We need to be making these things until we find actual
link |
01:33:58.640
aliens, because otherwise we're just looking at an N of 1 set of examples, all kinds of frozen
link |
01:34:03.920
accidents of evolution and so on. We need to go beyond that to really understand biology. But
link |
01:34:08.720
we're still, even when you do synthetic biology, locked in to the basic components of the
link |
01:34:17.040
way biology is done on this Earth. Yeah, right. And also the basic constraints
link |
01:34:23.760
of the environment; even artificial environments we construct in the lab are tied to the
link |
01:34:27.840
environment. I mean, okay, what I think is there's
link |
01:34:34.240
a nearly infinite number of intelligent civilizations living or dead out there.
link |
01:34:41.920
If you pick one out of the box, what do you think it would look like? When you think about
link |
01:34:50.320
synthetic biology, or creating synthetic organisms, how hard is it to create something that's very
link |
01:34:58.880
different? Yeah, I think it's very hard to create something that's very different, right? We
link |
01:35:06.320
are just locked in, both experimentally and in terms of our imagination, right? It's very
link |
01:35:12.400
hard. And you've also emphasized several times the idea of shape. Yeah, the individual cells get
link |
01:35:18.000
together with other cells and they're gonna build a shape. So it's shape and function,
link |
01:35:23.920
but shape is a critical thing. Yeah. So here, I'll take a stab. I mean, I agree with you.
link |
01:35:29.200
To whatever extent we can say anything, I do think that there's, you know, probably an
link |
01:35:33.600
infinite number of different architectures with interesting
link |
01:35:38.800
cognitive properties out there. What can we say about them? I think that the only things that are
link |
01:35:45.840
going to be... I don't think we can rely on any of the typical stuff, you know, carbon based, none of
link |
01:35:50.880
that. Like, I think all of that is just, you know, us having a lack of imagination. But
link |
01:35:56.960
I think the things that are going to be universal, if anything is, are things, for example, driven by
link |
01:36:03.760
resource limitation, the fact that you are fighting a hostile world, and you have to draw a
link |
01:36:09.280
boundary between yourself and the world somewhere, the fact that that boundary is not given to you
link |
01:36:13.040
by anybody, you have to assume it, you know, estimate it yourself. And the fact that
link |
01:36:18.000
you have to coarse-grain your experience, and the fact that you're going to try to minimize surprise
link |
01:36:22.160
and the fact that... these are the things that I think are fundamental about biology,
link |
01:36:25.920
none of the, you know, the facts about the genetic code, or even the fact that we have genes or the
link |
01:36:30.160
biochemistry of it, I don't think any of those things are fundamental. But it's going to be a
link |
01:36:34.160
lot more about the information and about the creation of the self. So in
link |
01:36:38.640
my framework, selves are demarcated by the scale of the goals that they can pursue. From little
link |
01:36:44.560
tiny local goals to like massive, you know, planetary scale goals for certain humans,
link |
01:36:49.440
and everything in between. So you can draw this cognitive light cone
link |
01:36:53.280
that determines the scale of the goals you could possibly pursue. I think those kinds of
link |
01:36:58.960
frameworks, like that, like active inference, and so on are going to be universally applicable,
link |
01:37:04.080
but none of the other things that are typically discussed. Quick pause,
link |
01:37:08.640
do you need a bathroom break? We were just talking about, you know, aliens and all that. That's a
link |
01:37:16.320
funny thing, which is, I don't know if you've seen them, there's a kind of debate that goes on about
link |
01:37:20.720
cognition in plants, and what can you say about different kinds of computation and cognition in
link |
01:37:24.560
plants. And I always look at it like, if you're weirded out by cognition
link |
01:37:28.800
in plants, you're not ready for exobiology, right? If something that's that similar
link |
01:37:34.560
here on Earth is already like freaking you out, then I think there's going to be all kinds of
link |
01:37:38.960
cognitive life out there that we're gonna have a really hard time recognizing. I think robots will
link |
01:37:44.080
help us, yeah, like expand our mind about cognition, either that or work like xenobots. So,
link |
01:37:54.640
and maybe they become the same thing. It's, you know, really, when the human engineers,
link |
01:38:01.920
the thing, at least in part, and then is able to achieve some kind of cognition that's different
link |
01:38:08.400
than what you're used to, then you start to understand like, oh, you know, every living
link |
01:38:14.320
organism is capable of cognition. Oh, I need to kind of broaden my understanding what cognition
link |
01:38:19.680
is. But do you think plants, like when you when you eat them, are they screaming? I don't know
link |
01:38:25.520
about screaming. I think you have to see what I think when I eat a salad. Yeah, good. Yeah,
link |
01:38:30.080
I think you have to scale down the expectations, right? So probably they're not
link |
01:38:34.560
screaming in the way that we would be screaming. However, there's plenty of data on plants being
link |
01:38:39.760
able to do anticipation and certain kinds of memory and so on. I think, you know, what you
link |
01:38:46.720
just said about robots, I hope you're right. But there are two ways
link |
01:38:51.440
that people can take that, right? So one way is exactly what you just said, to try to kind of
link |
01:38:54.720
expand their notions of that category. The other way people often go is
link |
01:39:02.000
they just sort of define the term so that if it's not a natural product, it's just faking,
link |
01:39:08.240
right? It's not really intelligence if it was made by somebody else, because
link |
01:39:11.920
it's the same thing, they can see how it's done. And it's like a magic trick:
link |
01:39:16.160
once you see how it's done, it's not as fun anymore. And I think people have a real
link |
01:39:21.360
tendency for that, which I find really strange, in the sense that if somebody
link |
01:39:25.280
said to me, we have this sort of blind hill-climbing search,
link |
01:39:32.480
and then we have a really smart team of engineers, which one do you think is going to
link |
01:39:36.800
produce a system that has good intelligence? I think it's really weird to say that it only
link |
01:39:41.680
comes from the blind search, right? It can't be done by people who, by the way, can also use
link |
01:39:45.600
evolutionary techniques if they want to, but also rational design. I think it's really weird to say
link |
01:39:49.920
that real intelligence only comes from natural evolution. So I hope you're right. I hope people
link |
01:39:55.600
take it the other way. But there's a nice shortcut. So I work with legged robots a lot now
link |
01:40:01.360
for my own personal pleasure. Not in that way, internet. So, four legs. And one of the things
link |
01:40:13.520
that changes my experience with the robots a lot is when I can't understand why it did a certain
link |
01:40:21.440
thing. And there's a lot of ways to engineer that. Me, the person that created the software that runs
link |
01:40:27.680
it. There's a lot of ways for me to build that software in such a way that I don't exactly know
link |
01:40:33.120
why it made a certain basic decision. Of course, as an engineer, you can go in and start to look at
link |
01:40:40.160
logs. You can log all kinds of data: sensory data, the decisions it made, you know, all the outputs
link |
01:40:45.840
of your networks and so on. But I also try to really experience that surprise, to really
link |
01:40:52.320
experience it as another person would who totally doesn't know how it's built. And I think the magic
link |
01:40:57.840
is there, in not knowing how it works. I think biology does that for you through the layers of
link |
01:41:06.960
abstraction. Yeah, because nobody really knows what's going on inside the biological system. Like each
link |
01:41:14.320
component is clueless about the big picture. I think there are actually really cheap systems that
link |
01:41:20.480
can illustrate that kind of thing, even, you know, fractals, right? Like,
link |
01:41:27.200
you have a very small, short formula in z, and you see it and there's no magic. You're just going to
link |
01:41:32.560
crank through, you know, z squared plus c, whatever. But the
link |
01:41:36.960
result of it is this incredibly rich, beautiful image, right? That's just like, wow, all of
link |
01:41:43.280
that was in this, like, 10-character-long string. Amazing. So the fact that you can
link |
01:41:49.760
know everything there is to know about the details and the process and all the parts, and
link |
01:41:54.800
there's literally no magic of any kind there. And yet the outcome is something that you would never
link |
01:42:01.120
have expected. And it's just, you know, incredibly rich and complex and beautiful. So
link |
01:42:07.200
there's a lot of that. You write that you work on developing conceptual frameworks for understanding
link |
01:42:13.360
unconventional cognition. So the kind of thing we've been talking about, I just like the term
link |
01:42:17.840
unconventional cognition. And you want to figure out how to detect, study and communicate with
link |
01:42:23.440
the thing. You've already mentioned a few examples, but what is unconventional cognition? Is it as
link |
01:42:29.200
simply as everything else outside of what we define usually as cognition, cognitive science,
link |
01:42:34.880
the stuff going on between our ears? Or is there some deeper way to get at the fundamentals of
link |
01:42:41.440
what is cognition? Yeah. I'm certainly not the only person who works in
link |
01:42:47.440
unconventional cognition. So is that the term used? Yeah, so I've
link |
01:42:53.440
coined a number of weird terms, but that's not one of mine. That's an existing thing.
link |
01:42:56.960
So, for example, somebody like Andy Adamatzky, who, I don't know if you've had him on,
link |
01:43:00.880
if you haven't, you should. He's a, you know, very interesting guy. He's a computer
link |
01:43:05.600
scientist, and he does unconventional cognition in slime molds, all kinds of weird stuff. He's a real
link |
01:43:10.640
weird cat, really interesting. Anyway, there's a bunch of terms that
link |
01:43:15.280
I've come up with, but that's not one of mine. So I think, like many terms, that one is really
link |
01:43:21.600
defined by the times, meaning that things that are unconventional cognition
link |
01:43:26.560
today are not going to be considered unconventional cognition at some point.
link |
01:43:31.920
And so it's, you know, this really deep
link |
01:43:37.840
question of how do you recognize, communicate with, classify cognition, when you cannot rely
link |
01:43:46.240
on the typical milestones, right? So typically, you know, again, if you stick with the
link |
01:43:52.160
history of life on Earth, like these exact model systems, you would say, ah, here's a particular
link |
01:43:56.640
structure of the brain. And this one has fewer of those. And this one has a bigger frontal cortex.
link |
01:44:00.160
And this one, right? So these are landmarks that we're used to
link |
01:44:04.640
and that allow us to make very rapid judgments about things. But if you can't rely on
link |
01:44:10.560
that, either because you're looking at a synthetic thing, or an engineered thing, or an alien thing,
link |
01:44:16.160
then what do you do, right? And so that's what I'm really interested in. I'm
link |
01:44:19.600
interested in mind in all of its possible implementations, not just the obvious ones
link |
01:44:25.040
that we know from looking at brains here on Earth. Whenever I think about something like
link |
01:44:31.040
unconventional cognition, I think about cellular automata, I'm just captivated by the beauty of the
link |
01:44:36.880
thing. The fact that from simple little objects, you can create some such beautiful complexity
link |
01:44:46.480
that very quickly, you forget about the individual objects, and you see the things that it creates
link |
01:44:53.120
as organisms in their own right. That blows my mind every time. Like, honestly, I could full-time just
link |
01:45:01.920
eat mushrooms and watch cellular automata. Don't even have to do mushrooms.
link |
01:45:06.880
Just cellular automata. It feels like, I mean, from the engineering perspective, I love
link |
01:45:13.280
when a very simple system captures something really powerful, because then you can study
link |
01:45:18.320
that system to understand something fundamental about complexity, about life on Earth.
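The kind of simple system being marveled at here is easy to play with directly. As a generic illustration (Wolfram's rule 110, a standard textbook cellular automaton; this is a minimal sketch, not anything from Levin's lab), a three-cell neighborhood rule is enough to generate surprisingly rich structure:

```python
# Minimal 1-D cellular automaton (Wolfram rule 110): each cell's next state
# depends only on itself and its two neighbors, yet the pattern is complex.
RULE = 110

def step(cells, rule=RULE):
    n = len(cells)
    out = []
    for i in range(n):
        left, me, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (me << 1) | right   # neighborhood as a 3-bit number
        out.append((rule >> idx) & 1)           # look up the new state in the rule
    return out

def run(width=64, steps=20):
    cells = [0] * width
    cells[width // 2] = 1                       # single live cell in the middle
    rows = []
    for _ in range(steps):
        rows.append("".join("#" if c else "." for c in cells))
        cells = step(cells)
    return rows

if __name__ == "__main__":
    print("\n".join(run()))
```

Printing the rows shows the characteristic triangular, never-quite-repeating texture growing out of a single live cell, the whole rule fitting in one integer.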
link |
01:45:24.080
Anyway, how do I communicate with a thing? If cellular automata can do cognition, if a plant
link |
01:45:32.000
can do cognition, if a xenobot can do cognition, how do I like whisper in its ear and get an
link |
01:45:40.000
answer back? How do I have a conversation? How do I have a xenobot on a podcast?
link |
01:45:46.880
It's a really interesting line of investigation that opens up. I mean, we've thought about this.
link |
01:45:53.840
So you need a few things. You need to understand the space in which they live. So not just the
link |
01:46:00.400
physical modality, like can they see light, can they feel vibration? I mean, that's important,
link |
01:46:03.680
of course, because that's how you deliver your message. But it's not just the communication
link |
01:46:08.320
medium, not just the physical medium, but saliency, right? So what's important to this
link |
01:46:16.000
system? And systems have all kinds of different levels of sophistication of what you could expect
link |
01:46:22.080
to get back. And I think what's really important, I call this the spectrum of persuadability,
link |
01:46:28.080
which is this idea that when you're looking at a system, you can't assume where on the spectrum
link |
01:46:33.200
it is. You have to do experiments. And so for example, if you look at a gene regulatory network,
link |
01:46:41.440
which is just a bunch of nodes that turn each other on and off at various rates, you might
link |
01:46:45.760
look at that and you say, well, there's no magic here. I mean, clearly this thing is as deterministic
link |
01:46:50.320
as it gets. It's a piece of hardware. The only way we're going to be able to control it is by
link |
01:46:54.320
rewiring it, which is the way molecular biology works, right? We can add nodes, remove nodes,
link |
01:46:57.920
whatever. Well, so we've done simulations and shown, and now we're doing this in the
link |
01:47:03.440
lab, that biological networks like that have associative memory. So they can actually learn,
link |
01:47:08.960
they can learn from experience. They have habituation, they have sensitization, they
link |
01:47:12.000
have associative memory, which you wouldn't have known if you assumed that they have to be on the
link |
01:47:15.840
left side of that spectrum. So when you're going to communicate with something, and we've even,
link |
01:47:21.280
Charles Abramson and I have written a paper on behaviorist approaches to synthetic organisms,
link |
01:47:26.080
meaning that if you're given something, you have no idea what it is or what it can do,
link |
01:47:29.600
how do you figure out what its psychology is, what its level is, what does it do? And so we literally
link |
01:47:34.480
lay out a set of protocols, starting with the simplest things and then moving up to more complex
link |
01:47:38.480
things where you can make no assumptions about what this thing can do, right? You have to start
link |
01:47:42.640
and you'll find out. So here's a simple, I mean, here's one way to
link |
01:47:47.120
communicate with something. If you can train it, that's a way of communicating. So if you can
link |
01:47:51.600
figure out what the currency of reward, of positive and negative reinforcement, is,
link |
01:47:56.160
right, and you can get it to do something it wasn't doing before based on experiences you've
link |
01:48:01.520
given, you have taught it one thing. You have communicated one thing, that such and such an
link |
01:48:06.080
action is good and some other action is not good. That's like a basic, primitive atom
link |
01:48:11.520
of communication. What about in some sense, if it gets you to do something you haven't done before,
link |
01:48:19.040
is it answering back? Yeah, most certainly. I've seen cartoons, I think maybe Gary
link |
01:48:24.560
Larson or somebody had a cartoon of these rats in a maze, and one rat, you know,
link |
01:48:29.040
says to the other: look at this, every time I walk over here, he starts scribbling
link |
01:48:32.720
on that clipboard he has. It's awesome. If we step outside ourselves
link |
01:48:38.720
and really measure how much, like if I actually measure how much I've changed because of my
link |
01:48:46.400
interaction with certain cellular automata. I mean, you really have to take that into
link |
01:48:52.400
consideration, like, well, these things are changing you too. Yes. I know, you know how it
link |
01:48:58.320
works and so on, but you're being changed by the thing. Yeah, absolutely. I think I read,
link |
01:49:04.080
I don't know any details, but I think I read something about how wheat and other things
link |
01:49:08.640
have domesticated humans, right? By their properties, they changed
link |
01:49:13.520
human behavior and societal structures. In that sense, cats are running the world
link |
01:49:20.240
because they took over. So first off, while not giving a shit about humans,
link |
01:49:27.200
clearly with every ounce of their being, they've somehow got just millions and millions of humans
link |
01:49:35.680
to take them home and feed them. And not only did they take over the physical space,
link |
01:49:43.280
they took over the digital space. They dominate the internet in terms of cuteness, in terms of
link |
01:49:48.640
memeability. They got themselves literally inside the memes, they
link |
01:49:55.760
became viral and spread on the internet. And they're the ones that are probably controlling
link |
01:50:01.040
humans. That's my theory. That's a follow-up paper after the frog kissing. Okay.
link |
01:50:06.000
I mean, you mentioned sentience and consciousness. You have a paper titled Generalizing Frameworks
link |
01:50:18.000
for Sentience Beyond Natural Species. So beyond normal cognition, if we look at sentience and
link |
01:50:30.320
consciousness, and I wonder if you draw an interesting distinction between those two
link |
01:50:34.000
elsewhere, outside of humans, and maybe outside of Earth, do you think aliens have sentience? And
link |
01:50:45.120
if they do, how do we think about it? So when you have this framework, what is this paper? What is
link |
01:50:50.880
the way you propose to think about sentience? Yeah, that particular paper was a very short
link |
01:50:57.040
commentary on another paper that was written about crabs. It was a really good paper with
link |
01:51:01.280
a rubric of different types of behaviors that could be applied to
link |
01:51:07.760
different creatures, and they're trying to apply it to crabs and so on. Consciousness,
link |
01:51:13.440
we can talk about if you want, but it's a whole separate kettle of fish. I almost never talk about
link |
01:51:18.400
crabs. In this case, yes. I almost never talk about consciousness, per se. I've said very,
link |
01:51:24.240
very little about it, but we can talk about it if you want. Mostly what I talk about is cognition,
link |
01:51:29.120
because I think that that's much easier to deal with in a kind of rigorous experimental way.
link |
01:51:36.240
I think all of these terms, you know, sentience and so on, have different definitions,
link |
01:51:45.040
and fundamentally, I think that, as long as they specify what they mean ahead of time,
link |
01:51:53.520
people can define them in various ways. The only thing
link |
01:51:58.480
that I really insist on is that the right way to think about all this stuff is
link |
01:52:06.800
from an engineering perspective. What does it help me to control, predict, and does it help
link |
01:52:12.640
me do my next experiment? That's not a universal perspective. Some people have philosophical
link |
01:52:20.720
kind of underpinnings, and those are primary, and if anything runs against that, then it must
link |
01:52:25.600
automatically be wrong. Some people will say, I don't care what else. If your theory says to me
link |
01:52:31.440
that thermostats have little tiny goals, I'm out, that's it. That's my philosophical
link |
01:52:38.560
preconception. Thermostats do not have goals, and that's it. That's one way of doing it,
link |
01:52:43.200
and some people do it that way. I do not do it that way, because
link |
01:52:47.440
I don't think we can know much of anything from a philosophical armchair. I think that
link |
01:52:51.440
all of these theories and ways of doing things stand or fall based on just basically one set
link |
01:52:57.280
of criteria. Does it help you run a rich research program? That's it.
link |
01:53:01.040
I agree with you totally, but forget philosophy. What about the poetry of ambiguity? What about
link |
01:53:08.240
at the limits of the things you can engineer using terms that can be defined in multiple ways
link |
01:53:14.800
and living within that uncertainty in order to play with words until something lands that you
link |
01:53:22.720
can engineer? I mean, that's to me where consciousness sits currently. Nobody really
link |
01:53:27.600
understands the hard problem of consciousness, the subjective experience, what it feels like, because it really
link |
01:53:33.360
feels like something to be this biological system, this conglomerate of a bunch
link |
01:53:39.040
of cells in this hierarchy of competencies feels like something, and yeah, I feel like one thing,
link |
01:53:45.360
and is that just a side effect of a complex system, or is there something more that humans have,
link |
01:53:58.720
or is there something more that any biological system has? Some kind of magic, some kind of,
link |
01:54:03.680
not just a sense of agency, but a real sense with a capital letter S of agency.
link |
01:54:10.560
Yeah.
link |
01:54:12.080
Ah, boy, yeah, that's a deep question.
link |
01:54:13.760
Is there room for poetry in engineering or no?
link |
01:54:16.640
No, there definitely is, and a lot of the poetry comes in when we realize that none of the
link |
01:54:22.240
categories we deal with are as sharp as we think they are, right? And the different areas of all
link |
01:54:29.680
these spectra are where a lot of the poetry sits. I have many new theories about things,
link |
01:54:34.160
but I, in fact, do not have a good theory about consciousness that I plan to trot out.
link |
01:54:38.400
And you almost don't see it as useful for your current work to think about consciousness?
link |
01:54:42.800
I think it will come. I have some thoughts about it, but I don't feel like they're going to move
link |
01:54:46.160
the needle yet on that.
link |
01:54:47.520
And you want to ground it in engineering always.
link |
01:54:50.720
Well, I mean, if we really tackle consciousness per se, in terms of the
link |
01:54:58.240
hard problem, that isn't necessarily going to be groundable in engineering, right? That
link |
01:55:04.160
aspect of cognition is, but actual consciousness per se, the first-person perspective, I'm not sure
link |
01:55:10.400
that that's groundable in engineering. And I think specifically what's different about it is
link |
01:55:16.480
there's a couple of things. So let's, you know, here we go. I'll say a couple of things about
link |
01:55:20.800
consciousness. One thing that makes it different is that for every
link |
01:55:28.000
other aspect of science, when we think about having a correct or a good theory of it,
link |
01:55:35.200
we have some idea of what format that theory makes predictions in. So whether those be numbers
link |
01:55:41.360
or whatever, we have some idea. We may not know the answer, we may not have the theory,
link |
01:55:45.200
but we know that when we get the theory, here's what it's going to output, and then we'll know
link |
01:55:49.120
if it's right or wrong. For actual consciousness, not behavior, not neural correlates, but actual
link |
01:55:54.320
first person consciousness. If we had a correct theory of consciousness, or even a good one,
link |
01:55:59.840
what the hell, what format would it make predictions in, right? Because all the things
link |
01:56:05.440
that we know about basically boil down to observable behaviors. So the only thing I can
link |
01:56:10.640
think of when I think about that is, it'll be poetry or something like that. If I ask you,
link |
01:56:19.920
okay, you've got a great theory of consciousness, and here's this creature, maybe it's a natural one,
link |
01:56:23.920
maybe it's an engineered one, whatever. And I want you to tell me what your theory says about this
link |
01:56:30.000
being, what it's like to be this being. The only thing I can imagine you giving me is some piece
link |
01:56:36.640
of art, a poem or something, such that once I've taken it in, I now have a similar state as
link |
01:56:45.600
whatever. That's about as good as I can come up with. Well, it's possible that once you have a
link |
01:56:51.360
good understanding of consciousness, it would be mapped to some things that are more measurable.
link |
01:56:56.240
So for example, it's possible that a conscious being is one that's able to suffer. So you start
link |
01:57:07.440
to look at pain and suffering. You can start to connect it closer to things that you can measure
link |
01:57:16.400
that, in terms of how they reflect themselves in behavior and problem solving and creation and
link |
01:57:25.760
attainment of goals, for example. Suffering, you know, life is suffering:
link |
01:57:31.520
it's one of the big aspects of the human condition. And so if consciousness is somehow,
link |
01:57:40.720
maybe at least, a catalyst for suffering, you could start to get echoes of it. You start to see
link |
01:57:48.080
the actual effects of consciousness in behavior. It's not just about subjective
link |
01:57:52.880
experience. It's really deeply integrated in the problem-solving and decision-making of a
link |
01:57:59.120
system, something like this. But also it's possible that we realize, this is not a philosophical
link |
01:58:06.000
statement. Philosophers can write their books. I welcome it. You know, I take the Turing test
link |
01:58:13.360
really seriously. I don't know why people really don't like it. When a robot convinces you that
link |
01:58:20.800
it's intelligent, I think that's a really incredible accomplishment. And there's some deep
link |
01:58:26.080
sense in which that is intelligence. If it looks like it's intelligent, it is intelligent. And I
link |
01:58:32.560
think there's some deep aspect of a system that appears to be conscious. In some deep sense,
link |
01:58:43.600
it is conscious. At least for me, we have to consider that possibility. And a system that
link |
01:58:51.520
appears to be conscious is an engineering challenge. Yeah, I don't disagree with any of
link |
01:58:58.480
that. I mean, especially intelligence, I think, is a publicly observable thing. Science fiction
link |
01:59:06.080
has dealt with this for a century or much more, maybe. This idea that when you are confronted with
link |
01:59:12.400
something that just doesn't meet any of your typical assumptions, so you can't look in the
link |
01:59:17.760
skull and say, oh, well, there's that frontal cortex, so then I guess we're good. So this thing
link |
01:59:23.280
lands on your front lawn, and the little door opens, and something trundles out, and it's shiny
link |
01:59:30.160
and aluminum looking, and it hands you this poem that it wrote while it was flying over,
link |
01:59:35.520
and how happy it is to meet you. What's going to be your criteria for whether you get to take it
link |
01:59:40.960
apart and see what makes it tick, or whether you have to be nice to it and whatever? All the
link |
01:59:46.000
criteria that we have now and that people are using, and as you said, a lot of people are
link |
01:59:51.280
down on the Turing test and things like this, but what else have we got? Because measuring
link |
01:59:55.920
the cortex size isn't going to cut it in the broader scheme of things. So I think it's a
link |
02:00:03.280
wide-open problem. Our solution to the problem of other minds is very simplistic. We give each
link |
02:00:11.360
other credit for having minds just because, on an anatomical level, we're pretty
link |
02:00:15.840
similar, and so it's good enough. But how far is that going to go? So I think that's really primitive.
link |
02:00:21.360
So yeah, I think it's a major unsolved problem. It's a really challenging direction of thought
link |
02:00:28.960
for the human race, what you talked about with embodied minds. If you start to think that other
link |
02:00:36.640
things other than humans have minds, that's really challenging. Because all men are created equal
link |
02:00:43.360
starts being like, all right, well, we should probably treat not just cows with respect,
link |
02:00:52.960
but like plants, and not just plants, but some kind of organized conglomerates of cells
link |
02:01:02.400
in a Petri dish. In fact, with some of the work that you're doing and the whole community
link |
02:01:08.960
of science is doing with biology, people might be like, we were really mean to viruses.
link |
02:01:13.760
Yeah. I mean, the thing is, you're right. And I certainly get phone calls from people
link |
02:01:20.320
complaining about frog skin and so on. But I think we have to separate the sort of deep
link |
02:01:26.560
philosophical aspects versus what actually happens. So what actually happens on Earth
link |
02:01:30.560
is that people with exactly the same anatomical structure kill each other on a daily basis.
link |
02:01:37.280
So I think it's clear that simply knowing that something else is equally or maybe more
link |
02:01:44.880
cognitive or conscious than you are is not a guarantee of kind behavior, that much we know of.
link |
02:01:51.120
And so then we look at commercial farming of mammals and various other things. And so I think
link |
02:01:56.880
on a practical basis, long before we get to worrying about things like frog skin,
link |
02:02:03.120
we have to ask ourselves what we can do about the way that we've been behaving
link |
02:02:08.400
towards creatures which we know for a fact, because of our similarities, are basically just
link |
02:02:13.280
like us. That's kind of a whole other social thing. But fundamentally, of course, you're
link |
02:02:18.880
absolutely right in that we are also, think about this, we are on this planet in some way,
link |
02:02:24.720
incredibly lucky. It's just dumb luck that we really only have one dominant animal.
link |
02:02:31.360
We only have one dominant species. It didn't have to work out that way. So you could easily
link |
02:02:37.200
imagine that there could be a planet somewhere with more than one equally or maybe near equally
link |
02:02:43.360
intelligent species. But they may not look anything like each other. So there may be
link |
02:02:49.200
multiple ecosystems where there are things with similar, human-like intelligence. And then
link |
02:02:54.960
you'd have all kinds of issues about how you relate to them when they're not physically
link |
02:02:59.840
like you at all. But yet in terms of behavior and culture and whatever, it's pretty obvious
link |
02:03:04.960
that they've got as much on the ball as you have. Or maybe imagine that there was another
link |
02:03:10.400
group of beings that was on average 40 IQ points lower. We're pretty lucky in many ways. We don't
link |
02:03:18.320
really have that, even though we still act badly in many ways. But the fact is, all humans are more
link |
02:03:24.400
or less in that same range, but it didn't have to work out that way. Well, but I think that's part
link |
02:03:30.160
of the way life works on Earth, or maybe the way human civilization works. It seems like we want
link |
02:03:38.800
ourselves to be quite similar. And then within that, you know, where everybody's about the same
link |
02:03:45.280
relative IQ, intelligence, problem-solving capabilities, even physical characteristics.
link |
02:03:49.840
But then we'll find some aspect of that that's different. And that seems to be like,
link |
02:03:58.560
I mean, it's really dark to say, but that seems to be the, not even a bug, but like a feature
link |
02:04:07.440
of the early development of human civilization. You pick the other, your tribe versus the other
link |
02:04:14.960
tribe and you war, it's a kind of evolution in the space of memes, a space of ideas, I think,
link |
02:04:22.640
and you war with each other. So we're very good at finding the other, even when the characteristics
link |
02:04:28.240
are really the same. And, I mean, I'm sure so many of these things
link |
02:04:35.040
echo in the biological world in some way. Yeah. There's a fun experiment that I did. My son
link |
02:04:41.600
actually came up with this when we did a biology unit together. He's homeschooled. And so we did
link |
02:04:46.880
this a couple of years ago. We did this thing where, imagine you get this slime mold, right?
link |
02:04:50.800
Physarum polycephalum, and it grows on a Petri dish of agar and it sort of spreads out, and it's a
link |
02:04:57.600
single cell protist, but it's like this giant thing. And so you put down a piece of oat and
link |
02:05:02.160
it wants to go get the oat and it sort of grows towards the oat. So what you do is you take a
link |
02:05:05.760
razor blade and you just separate the piece of the whole culture that's growing towards the
link |
02:05:10.160
oat. You just kind of separate it. And so now think about the interesting decision making
link |
02:05:15.040
calculus for that little piece. I can go get the oat and therefore I won't have to share those
link |
02:05:20.960
nutrients with this giant mass over there. So the nutrients per unit volume is going to be amazing.
link |
02:05:25.280
I should go eat the oat. But if I first rejoin, because Physarum, once you cut it, has the ability
link |
02:05:30.560
to join back up. If I first rejoin, then that whole calculus becomes impossible because there
link |
02:05:36.240
is no more me anymore. There's just we and then we will go eat this thing, right? So this
link |
02:05:40.960
interesting, you can imagine a kind of game theory where the number of agents isn't fixed
link |
02:05:46.320
and that it's not just cooperate or defect, but it's actually merge and whatever, right?
link |
02:05:50.320
Yeah. So that computation, how does it do that decision making?
link |
02:05:54.400
Yeah. So it's really interesting. And so empirically, what we found is that it tends
link |
02:06:00.240
to merge first. It tends to merge first and then the whole thing goes. But it's really interesting
link |
02:06:04.720
that that calculus, I mean, I'm not an expert in the economic game theory and all that,
link |
02:06:09.600
but maybe there's some sort of hyperbolic discounting or something. But maybe this idea
link |
02:06:14.880
that the actions you take not only change your payoff, but they change who or what you are,
link |
02:06:22.720
and that you could take an action after which you don't exist anymore, or you are radically
link |
02:06:27.440
changed, or you are merged with somebody else. As far as I know, that's a whole different
link |
02:06:33.280
thing. As far as I know, we're still missing a formalism for even knowing how to model
link |
02:06:38.720
any of that.
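The decision calculus Levin describes for the cut fragment can be caricatured in a few lines. This is a toy model with invented numbers (nothing here comes from the lab), just to show where standard game theory runs out:

```python
# Toy model of the Physarum fragment's choice (all numbers invented).
OAT_NUTRIENTS = 10.0    # food value of the oat flake
FRAGMENT_VOLUME = 1.0   # the cut piece near the oat
COLONY_VOLUME = 20.0    # the large mass it was separated from

def payoff_defect():
    # Stay separate and eat the oat alone:
    # nutrients per unit volume are excellent.
    return OAT_NUTRIENTS / FRAGMENT_VOLUME

def payoff_after_merge():
    # Rejoin first, then eat: the oat is spread over the whole
    # merged mass. But note this payoff has no separate owner -
    # there is no "me" anymore, only the merged "we".
    return OAT_NUTRIENTS / (FRAGMENT_VOLUME + COLONY_VOLUME)

print(payoff_defect(), payoff_after_merge())
```

The limitation is the point: after merging, the second payoff no longer belongs to the original fragment, because that agent no longer exists. That is why a formalism with a fixed set of players and only cooperate/defect moves cannot express the merge option.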
link |
02:06:39.720
Do you see evolution, by the way, as a process that applies here on Earth? Where did evolution
link |
02:06:45.200
come from?
link |
02:06:46.200
Yeah.
link |
02:06:47.200
So this thing from the very origin of life that took us to today, what the heck is that?
link |
02:06:54.560
I think evolution is inevitable in the sense that if you combine, and basically, I think
link |
02:07:00.960
one of the most useful things that was done in early computing, I guess in the 60s, it
link |
02:07:05.600
started with evolutionary computation and just showing how simple it is that if you have
link |
02:07:13.320
imperfect heredity and competition together, those two things, or three things, so heredity,
link |
02:07:19.280
imperfect heredity, and competition, or selection, those three things, and that's it. Now you're
link |
02:07:25.000
off to the races. And so that can be, it's not just on Earth because it can be done in
link |
02:07:29.640
the computer, it can be done in chemical systems, it can be done in, you know, Lee Smolin says
link |
02:07:33.480
it works on cosmic scales. So I think that that kind of thing is incredibly pervasive
link |
02:07:42.400
and general. It's a general feature of life. It's interesting to think about, you know,
link |
02:07:49.200
the standard thought about this is that it's blind, right? Meaning that the intelligence
link |
02:07:55.280
of the process is zero, it's stumbling around. And I think that back in the day, when the
link |
02:08:01.520
options were it's dumb like machines, or it's smart like humans, then of course, the scientists
link |
02:08:07.560
went in this direction, because nobody wanted creationism. They said, okay, it's got to
link |
02:08:10.680
be like completely blind. I'm not actually sure, right? Because I think that everything
link |
02:08:15.920
is a continuum. And I think that it doesn't have to be smart with foresight like us, but
link |
02:08:20.880
it doesn't have to be completely blind either. I think there may be aspects of it. And in
link |
02:08:25.720
particular, this kind of multi scale competency might give it a little bit of look ahead maybe
link |
02:08:30.760
or a little bit of problem solving sort of baked in. But that's going to be completely
link |
02:08:36.700
different in different systems. I do think it's general. I don't think it's just on Earth.
link |
02:08:41.640
I think it's a very fundamental thing.
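The recipe Levin describes, heredity plus imperfect heredity (mutation) plus selection, is easy to sketch as a toy evolutionary loop. The target string and all parameters below are invented for illustration, not anything from the conversation:

```python
import random

random.seed(0)
TARGET = "methinks it is like a weasel"  # arbitrary target for illustration
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    # Fitness = number of characters matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.02):
    # Imperfect heredity: each character copies with a small error chance.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

# Random initial population.
pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]

for gen in range(1000):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    # Selection + heredity: the fittest half each leaves two mutated copies.
    pop = [mutate(p) for p in pop[:50] for _ in range(2)]

best = max(pop, key=fitness)
print(gen, best)
```

Nothing in the loop "knows" the target; selection over imperfectly copied variants is enough to climb toward it. That is the sense in which the process is substrate-independent: the same loop runs on strings, chemistries, or cells.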
link |
02:08:44.040
And it does seem to have a kind of direction that it's taking us that's somehow perhaps
link |
02:08:50.120
is defined by the environment itself. It feels like we're headed towards something. Like,
link |
02:08:57.360
we're playing out a script that was just like a single cell defines the entire organism.
link |
02:09:03.060
It feels like from the origin of Earth itself, it's playing out a kind of script. You can't
link |
02:09:10.480
really go any other way.
link |
02:09:12.480
I mean, so this is very controversial, and I don't know the answer. But people have argued
link |
02:09:17.280
that this is called, you know, sort of rewinding the tape of life, right? And some people have
link |
02:09:22.720
argued, I think Conway Morris maybe has argued, that there's a deep
link |
02:09:28.440
attractor, for example, to the human kind of structure, and that if you
link |
02:09:34.640
were to rewind it again, you'd basically get more or less the same thing. And then other
link |
02:09:37.560
people have argued that, no, it's incredibly sensitive to frozen accidents. And then once
link |
02:09:41.920
certain stochastic decisions are made downstream, everything is going to be different. I don't
link |
02:09:46.200
know. I don't know. You know, we're very bad at predicting attractors in the space of complex
link |
02:09:52.760
systems, generally speaking, right? We don't know. So maybe, so maybe evolution on Earth
link |
02:09:56.880
has these deep attractors that, no matter what happened, it would pretty much be likely
link |
02:10:01.360
to end up there. Or maybe not. I don't know.
link |
02:10:03.640
It's a really difficult idea to imagine that if you ran Earth a million times, 500,000
link |
02:10:10.880
times you would get Hitler? Like, yeah, we don't like to think like that. We think like,
link |
02:10:17.160
because at least maybe in America, you'd like to think that individual decisions can change
link |
02:10:23.480
the world. And if individual decisions could change the world, then surely any perturbation
link |
02:10:30.760
could result in a totally different trajectory. But maybe there's a, in this competency hierarchy,
link |
02:10:38.560
it's a self correcting system. There's just ultimately, there's a bunch of chaos that
link |
02:10:43.320
ultimately is leading towards something like a super intelligent, artificial intelligence
link |
02:10:47.200
system that answers 42. I mean, there might be a kind of imperative for life that it's
link |
02:10:56.800
headed to. And we're too focused on our day to day life of getting coffee and snacks and
link |
02:11:04.360
having sex and getting a promotion at work, not to see the big imperative of life on Earth
link |
02:11:12.840
that is headed towards something.
link |
02:11:14.560
Yeah, maybe, maybe. It's difficult. I think one of the things that's important about chimeric
link |
02:11:24.640
bioengineering technologies, all of those things are that we have to start developing
link |
02:11:29.520
a better science of predicting the cognitive goals of composite systems. So we're just
link |
02:11:35.240
not very good at it, right? We don't know if I create a composite system, and this could
link |
02:11:41.320
be Internet of Things or swarm robotics or a cellular swarm or whatever. What is the
link |
02:11:48.240
emergent intelligence of this thing? First of all, what level is it going to be at? And
link |
02:11:51.640
if it has goal directed capacity, what are the goals going to be? Like, we are just not
link |
02:11:56.240
very good at predicting that yet. And I think that it's an existential level need for us
link |
02:12:06.420
to be able to because we're building these things all the time, right? We're building
link |
02:12:10.520
both physical structures like swarm robotics, and we're building social financial structures
link |
02:12:16.060
and so on, with very little ability to predict what sort of autonomous goals that system
link |
02:12:21.640
is going to have, of which we are now cogs. And so learning to predict and control those
link |
02:12:26.780
things is going to be critical. So in fact, if you're right and there is some kind of
link |
02:12:31.400
attractor to evolution, it would be nice to know what that is and then to make a rational
link |
02:12:36.680
decision of whether we're going to go along or we're going to pop out of it or try to
link |
02:12:39.800
pop out of it because there's no guarantee. I mean, that's the other kind of important
link |
02:12:44.120
thing. A lot of people, I get a lot of complaints from people who email me and say, you know,
link |
02:12:49.760
what you're doing, it isn't natural. And I'll say, look, natural, that'd be nice if somebody
link |
02:12:56.240
was making sure that natural was matched up to our values, but no one's doing that. Evolution
link |
02:13:02.520
optimizes for biomass. That's it. Nobody's optimizing. It's not optimizing for your happiness.
link |
02:13:07.160
I don't think necessarily it's optimizing for intelligence or fairness or any of that
link |
02:13:11.600
stuff.
link |
02:13:12.600
I'm going to find that person that emailed you, beat them up, take their place, steal
link |
02:13:18.720
everything they own and say, no, this is natural.
link |
02:13:22.040
This is natural. Yeah, exactly. Because it comes from an old worldview where you could
link |
02:13:28.200
assume that whatever is natural, that that's probably for the best. And I think we're long
link |
02:13:32.040
out of that garden of Eden kind of view. So I think we can do better. I think we, and
link |
02:13:37.000
we have to, right? Natural just isn't great for a lot of life forms.
link |
02:13:42.020
What are some cool synthetic organisms that you think about, you dream about? When you
link |
02:13:46.520
think about embodied mind, what do you imagine? What do you hope to build?
link |
02:13:51.400
Yeah, on a practical level, what I really hope to do is to gain enough of an understanding
link |
02:13:57.700
of the embodied intelligence of the organs and tissues such that we can achieve a radically
link |
02:14:04.680
different regenerative medicine so that we can say, basically, and I think about it as,
link |
02:14:11.080
you know, in terms of like, okay, what's the goal, the kind of end game, for this
link |
02:14:18.200
whole thing? To me, the end game is something that you would call an anatomical compiler.
link |
02:14:22.480
So the idea is you would sit down in front of the computer and you would draw the body
link |
02:14:27.440
or the organ that you wanted. Not molecular details, but like, yeah, this is what I want.
link |
02:14:31.880
I want a six legged, you know, frog with a propeller on top, or I want a heart that looks
link |
02:14:36.200
like this, or I want a leg that looks like this. And what it would do if we knew what
link |
02:14:39.800
we were doing is put out, convert that anatomical description into a set of stimuli that would
link |
02:14:47.000
have to be given to cells to convince them to build exactly that thing, right? I probably
link |
02:14:51.320
won't live to see it, but I think it's achievable. And I think with that, if we can have that,
link |
02:14:56.840
then that is basically the solution to all of medicine except for infectious disease.
link |
02:15:03.140
So birth defects, right? Traumatic injury, cancer, aging, degenerative disease. If we
link |
02:15:07.620
knew how to tell cells what to build, all of those things go
link |
02:15:11.440
away. And the positive feedback spiral of economic costs, where all of the advances
link |
02:15:18.520
are increasingly heroic and expensive interventions on a sinking ship when you're
link |
02:15:22.880
like 90 and so on, right? All of that goes away because basically, instead of trying
link |
02:15:26.980
to fix you up as you degrade, you progressively regenerate, you apply the regenerative medicine
link |
02:15:33.800
early before things degrade. So I think that that'll have massive economic impacts over
link |
02:15:38.920
what we're trying to do now, which is not at all sustainable. And that's what I hope.
link |
02:15:43.800
I hope that we get it. So to me, yes, the xenobots will be doing useful things, cleaning
link |
02:15:50.080
up the environment, cleaning out your joints and all that kind of stuff. But more important
link |
02:15:55.480
than that, I think we can use these synthetic systems to try to develop a science of detecting
link |
02:16:04.920
and manipulating the goals of collective intelligences of cells specifically for regenerative medicine.
link |
02:16:10.840
And then sort of beyond that, if we think further beyond that, what I hope is that kind
link |
02:16:15.840
of like what you said, all of this drives a reconsideration of how we formulate ethical
link |
02:16:22.480
norms because this old school, so in the olden days, what you could do is if you were confronted
link |
02:16:29.080
with something, you could tap on it, right? And if you heard a metallic clanging sound,
link |
02:16:33.200
you'd say, ah, fine, right? So you could conclude it was made in a factory. I can take it apart.
link |
02:16:37.160
I can do whatever, right? If you did that and you got sort of a squishy kind of warm
link |
02:16:40.960
sensation, you'd say, ah, I need to be more or less nice to it and whatever. That's not
link |
02:16:46.080
going to be feasible. It was never really feasible, but it was good enough because we
link |
02:16:49.360
didn't have any, we didn't know any better. That needs to go. And I think that by breaking
link |
02:16:55.940
down those artificial barriers, someday we can try to build a system of ethical norms
link |
02:17:03.200
that does not rely on these completely contingent facts of our earthly history, but on something
link |
02:17:08.740
much, much deeper that really takes agency and the capacity to suffer and all that takes
link |
02:17:15.520
that seriously.
link |
02:17:16.520
The capacity to suffer and the deep questions I would ask of a system is can I eat it and
link |
02:17:21.560
can I have sex with it? Which is the two fundamental tests of, again, the human condition. So I
link |
02:17:30.560
can basically do what DALL·E does, but in the physical space. So print out, like 3D
link |
02:17:39.480
print, Pepe the Frog with a propeller hat. That's the dream.
link |
02:17:46.320
Well yes and no. I mean, I want to get away from the 3D printing thing because that will
link |
02:17:50.840
be available for some things much earlier. I mean, we can already do bladders and ears
link |
02:17:55.560
and things like that because it's micro level control, right? When you 3D print, you are
link |
02:17:59.920
in charge of where every cell goes. And for some things that, you know, for, for like
link |
02:18:02.960
this thing, they had that I think 20 years ago or maybe earlier than that, you could
link |
02:18:06.040
do that.
link |
02:18:07.040
So yeah, I would like to emphasize the DALL·E part where you provide a few words and it
link |
02:18:11.480
generates a painting. So here you say, I want a frog with these features and then it would
link |
02:18:19.920
go direct a complex biological system to construct something like that.
link |
02:18:25.000
Yeah. The main magic would be, I mean, I think from looking at DALL·E and so on, it looks
link |
02:18:30.040
like the first part is kind of solved now where you go from, from the words to the image,
link |
02:18:34.360
like that seems more or less solved. The next step is really hard. This is what holds back things
link |
02:18:39.920
like CRISPR and genomic editing and so on. That's what limits all the impacts for regenerative
link |
02:18:46.880
medicine because going back to, okay, this is the knee joint that I want, or this is
link |
02:18:51.320
the eye that I want. Now, what genes do I edit to make that happen, right? Going back
link |
02:18:56.000
in that direction is really hard. So instead of that, it's going to be, okay, I understand
link |
02:18:59.840
how to motivate cells to build particular structures. Can I rewrite the memory of what
link |
02:19:03.680
they think they're supposed to be building such that then I can, you know, take my hands
link |
02:19:07.480
off the wheel and let them, let them do their thing.
link |
02:19:09.960
So some of that is experiment, but some of that may be AI can help too. Just like with
link |
02:19:13.960
protein folding, this is exactly the problem that, in the most simple setting, was
link |
02:19:23.400
tried and solved with AlphaFold, which is how does the sequence of letters result
link |
02:19:31.800
in this three-dimensional shape? Although, I guess it didn't solve the full problem, because
link |
02:19:37.160
if you say, I want this shape, how do I then find a sequence of letters? Yeah.
link |
02:19:43.760
The reverse engineering step is really tricky.
link |
02:19:45.920
It is. I think, and we're doing some of this now, the idea is to use
link |
02:19:51.680
AI to try and build actionable models of the intelligence of the cellular collectives.
link |
02:19:57.800
So try to help us gain models of them, and we've had some
link |
02:20:02.400
success in this. So we did something like this for, you know, for repairing birth
link |
02:20:08.480
defects of the brain in frog. We've done some of this for normalizing melanoma, where you
link |
02:20:14.240
can really start to use AI to make models of how would I impact this thing if I wanted
link |
02:20:20.140
to, given all the complexities, right, and given all the controls
link |
02:20:25.600
that it knows how to do.
link |
02:20:27.520
So when you say regenerative medicine, so we talked about creating biological organisms,
link |
02:20:34.060
but if you regrow a hand, that information is already there, right? The biological system
link |
02:20:41.440
has that information. So how does regenerative medicine work today? How do you hope it works?
link |
02:20:48.080
What's the hope there?
link |
02:20:49.080
Yeah.
link |
02:20:50.080
Yeah. How do you make it happen?
link |
02:20:52.480
Well, today there's a set of popular approaches. So one is 3D printing. So the idea is
link |
02:20:57.480
I'm going to make a scaffold of the thing that I want. I'm going to seed it with cells
link |
02:21:00.600
and then there it is, right? So it's kind of direct, and that works for certain
link |
02:21:03.760
things. You can make a bladder that way, or an ear, something like that. The
link |
02:21:08.920
other idea is some sort of stem cell transplant. The idea is, if we put in stem
link |
02:21:14.300
cells with appropriate factors, we can get them to generate certain kinds of neurons
link |
02:21:17.920
for certain diseases and so on. All of those things are good for relatively simple structures,
link |
02:21:24.760
but when you want an eye or a hand or something else, I think, and this is maybe an unpopular opinion,
link |
02:21:30.660
I think the only hope we have in any reasonable kind of timeframe is to understand how the
link |
02:21:36.560
thing was motivated to get made in the first place. So what is it that, that made those
link |
02:21:41.320
cells in the, in the beginning, create a particular arm with a particular set of sizes and shapes
link |
02:21:48.400
and number of fingers and all that. And why is it that a salamander can keep losing theirs
link |
02:21:51.760
and keep regrowing theirs and a planarian can do the same even more? So to me, the kind
link |
02:21:57.640
of ultimate regenerative medicine is when you can tell the cells to build whatever it is
link |
02:22:02.840
you need them to build. Right. And so, so that we can all be like planaria, basically,
link |
02:22:07.400
do you have to start at the very beginning or can you do a shortcut? Because if we're regrowing
link |
02:22:13.680
a hand, you already got the whole organism. Yeah. So here's what we've done, right? So,
link |
02:22:19.560
we've more or less solved that in frogs. So frogs, unlike salamanders, do not regenerate
link |
02:22:24.160
their legs as adults. And so we've shown that with a very kind of simple
link |
02:22:31.800
intervention. So what we do is, there's two things: you need to have a
link |
02:22:36.100
signal that tells the cells what to do, and then you need some way of delivering it. And
link |
02:22:39.520
so this is work together with David Kaplan, and I should do a disclosure
link |
02:22:44.080
here. We have a company called Morphoceuticals, a spin off, where we're trying to
link |
02:22:48.200
address, you know, limb regeneration. So we've solved it in the frog
link |
02:22:52.320
and we're now in trials in mice. So we're in mammals now. I can't
link |
02:22:56.440
say anything about how it's going, but the frog thing is solved. So what you do is,
link |
02:22:59.720
after you have a little frog Luke Skywalker with a regrowing hand. Yeah, basically,
link |
02:23:04.480
basically. Yeah. So what you do is, we did it with legs instead of forearms.
link |
02:23:07.840
And what you do is, after amputation, normally they don't regenerate. You
link |
02:23:11.200
put on a wearable bioreactor. So it's this thing that goes on, and Dave
link |
02:23:15.620
Kaplan's lab makes these things, and inside it's a very controlled environment.
link |
02:23:21.300
It is a silk gel that carries some drugs, for example, ion channel drugs. And what you're
link |
02:23:26.360
doing is you're saying to the cells, you should regrow what normally goes here. So that
link |
02:23:33.720
whole thing is on for 24 hours and you take it off and you don't touch the leg. Again,
link |
02:23:37.760
this is really important because what we're not looking for is a set of micromanagement,
link |
02:23:41.280
you know, printing or controlling the cells. We want to trigger: we want
link |
02:23:45.600
to interact with it early on and then not touch it again, because we don't know
link |
02:23:49.260
how to make a frog leg, but the frog knows how to make a frog leg. So 24 hours, 18 months
link |
02:23:54.820
of leg growth after that, without us touching it again. And after 18 months, you get a pretty
link |
02:23:58.480
good leg that kind of shows this proof of concept that early on when the cells right
link |
02:24:02.720
after injury, when they're first making a decision about what they're going to do, you
link |
02:24:05.560
can impact them. And once they've decided to make a leg, they don't need you
link |
02:24:09.440
after that. They can do their own thing. So that's an approach that we're now taking.
link |
02:24:14.040
What about cancer suppression? That's something you mentioned earlier. How can all of these
link |
02:24:18.480
ideas help with cancer suppression?
link |
02:24:20.360
So let's go back to the beginning and ask what cancer is. So I
link |
02:24:23.600
think, you know, asking why there's cancer is the wrong question. I think the right question
link |
02:24:28.520
is why is there ever anything but cancer? So, so in the normal state, you have a bunch
link |
02:24:33.420
of cells that are all cooperating towards a large scale goal. If that process of cooperation
link |
02:24:38.680
breaks down and you've got a cell that is isolated from that electrical network that
link |
02:24:42.780
lets you remember what the big goal is, you revert back to your unicellular lifestyle
link |
02:24:47.280
as far as, now think about that border between self and world, right? Normally when all these
link |
02:24:51.020
cells are connected by gap junctions into an electrical network, they are all one self,
link |
02:24:56.360
right? That meaning that, um, their goals, they have these large tissue level goals and
link |
02:25:01.600
so on. As soon as a cell is disconnected from that, the self is tiny, right? And so at that
link |
02:25:06.760
point. And so a lot of people model cancer cells as being more selfish
link |
02:25:11.580
and all that. They're not more selfish. They're equally selfish. It's just that their self
link |
02:25:14.280
is smaller. Normally the self is huge. Now they got tiny little selves. Now what are
link |
02:25:18.040
the goals of tiny little selves? Well, proliferate, right? And migrate to wherever life is good.
link |
02:25:22.680
And that's metastasis. That's proliferation and metastasis. So, so one thing we found
link |
02:25:26.640
and people have noticed years ago, that when cells convert to cancer, the first thing you
link |
02:25:31.960
see is they close the gap junctions. And I think it's a lot like that experiment
link |
02:25:36.800
with the slime mold where until you close that gap junction, you can't even entertain
link |
02:25:41.440
the idea of leaving the collective because there is no you at that point, right? Your
link |
02:25:44.520
mind melded with this whole other network. But as soon as the gap junction is
link |
02:25:48.600
closed, now there's a boundary between you and the rest of the body; it's just outside environment
link |
02:25:53.600
to you. You're just a unicellular organism, and the rest of the body is environment.
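One way to caricature this picture of cancer as a shrinking "self" is to treat cells as graph nodes and open gap junctions as edges; a cell's self is then just its connected component. This is a toy sketch, not a model from the lab:

```python
# Toy model: cells as nodes, open gap junctions as edges.
# A cell's "self" is the set of cells it is electrically connected to.
from collections import defaultdict

def self_of(cell, junctions):
    """Return the connected component containing `cell` (graph search)."""
    graph = defaultdict(set)
    for a, b in junctions:
        graph[a].add(b)
        graph[b].add(a)
    seen, stack = {cell}, [cell]
    while stack:
        c = stack.pop()
        for n in graph[c] - seen:
            seen.add(n)
            stack.append(n)
    return seen

# Four cells chained together: one large shared "self".
junctions = [(0, 1), (1, 2), (2, 3)]
assert len(self_of(0, junctions)) == 4

# Cell 3 closes its gap junctions (the first step toward cancer in this
# picture): its self shrinks to just itself, and its goals shrink with it.
junctions = [(0, 1), (1, 2)]
assert len(self_of(3, junctions)) == 1
```

In this picture, normalizing the cell means re-adding its edges: reconnecting it to the network restores the large self, and with it the tissue-level goals.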
link |
02:25:58.520
So we studied this process, and we worked out a way to artificially control
link |
02:26:04.840
the bioelectric state of these cells to physically force them to remain in that network. And
link |
02:26:10.120
so then what that means is that nasty mutations like KRAS and things
link |
02:26:15.580
like that, these really tough oncogenic mutations that cause tumors: if you do them,
link |
02:26:20.920
but then artificially control the bioelectrics, you greatly reduce
link |
02:26:29.120
tumorigenesis, or normalize cells that had already begun to convert. Basically, they go back
link |
02:26:33.840
to being normal cells. And so this is another, much like with the planaria, this is another
link |
02:26:38.080
way in which the bioelectric state kind of dominates what the, what the genetic state
link |
02:26:43.400
is. So if you sequence the, you know, if you sequence the nucleic acid, you'll see the
link |
02:26:47.200
KRAS mutation, you'll say, ah, well that's going to be a tumor, but there isn't a tumor
link |
02:26:50.800
because, because bioelectrically you've kept the cells connected and they're just working
link |
02:26:54.200
on making nice skin and kidneys and whatever else. So we've started moving that
link |
02:26:59.760
to human glioblastoma cells, and we're hoping for, you know, in the future,
link |
02:27:04.760
interaction with patients.
link |
02:27:07.560
So is this one of the possible ways in which we may quote cure cancer?
link |
02:27:12.820
I think so. Yeah, I think so. I think, I think the actual cure, I mean, there are other technology,
link |
02:27:17.160
you know, immune therapy, I think is a great technology. Chemotherapy, I don't think is
link |
02:27:21.920
a good, is a good technology. I think we've got to get, get off of that.
link |
02:27:25.680
So chemotherapy just kills cells.
link |
02:27:27.720
Yeah. Well, chemotherapy hopes to kill more of the tumor cells than of your cells. That's
link |
02:27:32.920
it. It's a fine balance. The problem is the cells are very similar because they are your
link |
02:27:36.440
cells. And so if you don't have a very tight way of distinguishing between them, then the
link |
02:27:43.480
toll that chemo takes on the rest of the body is just unbelievable.
link |
02:27:46.240
And immunotherapy tries to get the immune system to do some of the work.
link |
02:27:49.760
Exactly. Yeah. I think that's potentially a very good approach. If
link |
02:27:54.720
the immune system can be taught to recognize enough of the cancer cells, that's
link |
02:27:59.520
a pretty good approach. But I think our approach is in a way more
link |
02:28:02.720
fundamental, because if you can keep the cells harnessed towards organ-level
link |
02:28:08.440
goals as opposed to individual cell goals, then nobody will be making a tumor or metastasizing
link |
02:28:13.900
and so on.
link |
02:28:15.440
So we've been living through a pandemic. What do you think about viruses in this full beautiful
link |
02:28:21.840
biological context we've been talking about? Are they beautiful to you? Are they terrifying?
link |
02:28:30.080
Also maybe let's say, are they, since we've been discriminating this whole conversation,
link |
02:28:36.800
are they living? Are they embodied minds? Embodied minds that are assholes?
link |
02:28:43.840
As far as I know, and I haven't been able to find this paper again, but somewhere
link |
02:28:47.200
I saw in the last couple of months, there were some papers showing an
link |
02:28:51.680
example of a virus that actually had physiology. So something was going on,
link |
02:28:55.360
I think proton flux or something, on the virus itself. But barring that, generally speaking,
link |
02:29:01.320
viruses are very passive. They don't do anything by themselves. And so I don't see any particular
link |
02:29:06.860
reason to attribute much of a mind to them. I think, you know, they represent a way to
link |
02:29:14.100
hijack other minds for sure, like, like cells and other things.
link |
02:29:18.520
But that's an interesting interplay though. If they're hijacking other minds, you know,
link |
02:29:24.300
the way we're, we were talking about living organisms that they can interact with each
link |
02:29:28.420
other and have it alter each other's trajectory by having interacted. I mean, that's, that's
link |
02:29:36.400
a deep, meaningful connection between a virus and a cell. And I think both are transformed
link |
02:29:45.680
by the experience. And so in that sense, both are living.
link |
02:29:49.040
Yeah. You know, the whole category, this question of what's living and what's
link |
02:29:56.320
not living, I'm really not sure. And I know there's people that work on this, and
link |
02:30:00.000
I don't want to piss anybody off, but I have not found it particularly useful
link |
02:30:05.480
to try and make that a binary kind of distinction. I think level of cognition
link |
02:30:11.480
is very interesting, but as a continuum. But living and nonliving, you know,
link |
02:30:17.080
I really don't know what to do with that. I don't know what you do next
link |
02:30:20.680
after making that distinction.
link |
02:30:21.800
That's why I make the very binary distinction. Can I have sex with it or not? Can I eat it
link |
02:30:27.640
or not? Cause those are actionable, right?
link |
02:30:30.360
Yeah. Well, I think that's a critical point that you brought up because how you relate
link |
02:30:34.000
to something is really what this is all about, right? As an engineer, how do I control it?
link |
02:30:40.000
But maybe I shouldn't be controlling it. Maybe I should be, you know, can I have a relationship
link |
02:30:44.120
with it? Should I be listening to its advice? Like, like all the way from, you know, I need
link |
02:30:48.400
to take it apart all the way to, I better do what it says cause it seems to be pretty
link |
02:30:52.800
smart and everything in between, right? That's really what we're asking about.
link |
02:30:56.480
Yeah. We need to understand our relationship to it. We're searching for that relationship,
link |
02:31:01.400
even in the most trivial senses. You came up with a lot of interesting terms. We've mentioned
link |
02:31:08.200
some of them. Agential material. That's a really interesting
link |
02:31:14.560
one for the future of computation and artificial intelligence and computer science and all
link |
02:31:19.600
of that. Let me go through some of them and see if they spark some interesting thought
link |
02:31:25.940
for you, there's teleophobia, the unwarranted fear of erring on the side of too much agency
link |
02:31:32.640
when considering a new system.
link |
02:31:35.000
Yeah.
link |
02:31:36.000
That's the opposite. I mean, being afraid of maybe anthropomorphizing the thing.
link |
02:31:41.080
This'll get some people ticked off, I think. But I think the whole notion
link |
02:31:47.120
of anthropomorphizing is a holdover from a prescientific age where humans were magic
link |
02:31:54.440
and everything else wasn't magic and you were anthropomorphizing when you dared suggest
link |
02:32:00.080
that something else has some features of humans. And I think we need to be way beyond that.
link |
02:32:05.760
And this issue of anthropomorphizing, I think it's a cheap charge. I don't think it holds
link |
02:32:12.640
any water at all other than when somebody makes a cognitive claim. I think all cognitive
link |
02:32:18.240
claims are engineering claims, really. So when somebody says this thing knows or this
link |
02:32:22.620
thing hopes or this thing wants or this thing predicts, all you can say is fabulous. Give
link |
02:32:27.800
me the engineering protocol that you've derived using that hypothesis and we will see if this
link |
02:32:33.420
thing helps us or not. And then we can make a rational
link |
02:32:36.760
decision.
link |
02:32:37.760
I also like anatomical compiler, a future system representing the long-term end game
link |
02:32:43.400
of the science of morphogenesis that reminds us how far away from true understanding we
link |
02:32:49.280
are. Someday you will be able to sit in front of an anatomical compiler, specify the shape
link |
02:32:54.740
of the animal or a plant that you want, and it will convert that shape specification to
link |
02:32:59.480
a set of stimuli that will have to be given to cells to build exactly that shape. No matter
link |
02:33:05.160
how weird it ends up being, you have total control. Just imagine the possibility for
link |
02:33:12.560
memes in physical space. One of the glorious accomplishments of human civilization is
link |
02:33:18.780
memes in digital space. Now this could create memes in physical space. I am both excited
link |
02:33:25.840
and terrified by that possibility. Cognitive light cone, I think we also talked about the
link |
02:33:31.800
outer boundary in space and time of the largest goal a given system can work towards. Is this
link |
02:33:39.220
kind of like shaping the set of options?
link |
02:33:42.500
It's a little different from options. It's really focused on... I first came up with
link |
02:33:49.680
this back in 2018, I want to say. There was a conference, a Templeton conference where
link |
02:33:55.320
they challenged us to come up with frameworks. I think actually it's the diverse intelligence
link |
02:34:01.160
community.
link |
02:34:02.160
Summer Institute.
link |
02:34:03.160
Yeah, they had a Summer Institute.
link |
02:34:04.160
That's the logo, the bee with some circuits.
link |
02:34:06.640
Yeah, it's got different life forms. The whole program is called diverse intelligence. They
link |
02:34:13.360
challenged us to come up with a framework that was suitable for analyzing different
link |
02:34:18.240
kinds of intelligence together. Because the kinds of things you do to a human are not
link |
02:34:23.000
good with an octopus, not good with a plant and so on. I started thinking about this.
link |
02:34:29.560
I asked myself what do all cognitive agents, no matter what their provenance, no matter
link |
02:34:35.560
what their architecture is, what do cognitive agents have in common? It seems to me that
link |
02:34:41.560
what they have in common is some degree of competency to pursue a goal. What you can
link |
02:34:46.480
do then is draw. What I ended up drawing was kind of a backwards
link |
02:34:51.720
Minkowski cone diagram, where all of space is collapsed into one axis and
link |
02:34:58.520
time is the other axis. Then for any creature, you can
link |
02:35:04.160
semi-quantitatively estimate the spatial and temporal extent of the goals it's capable
link |
02:35:12.360
of pursuing.
link |
02:35:13.360
For example, if you are a tick or a bacterium, and all you're really able to pursue is
link |
02:35:20.240
maximizing the level of some chemical in your vicinity, that's all you've
link |
02:35:24.800
got; it's a tiny little cone, and you're a simple system.
link |
02:35:29.440
If you are something like a dog, well, you've got some ability to care about some spatial
link |
02:35:37.520
region, some temporal span. You can remember a little bit backwards, you can predict a little
link |
02:35:41.680
bit forwards, but you're never ever going to care about what happens in the next town
link |
02:35:46.280
over four weeks from now. As far as we know, it's just impossible for that kind of architecture.
link |
02:35:51.680
If you're a human, you might be working towards world peace long after you're dead. You might
link |
02:35:56.580
have a planetary scale goal that's enormous. Then there may be other greater intelligences
link |
02:36:04.120
somewhere that can care in the linear range about numbers of creatures, some sort of
link |
02:36:08.800
Buddha-like character that can care about everybody's welfare, really care the way that we can't.
link |
02:36:16.040
It's not a mapping of what you can sense, how far you can sense. It's not a mapping
link |
02:36:20.640
of how far you can act. It's a mapping of how big are the goals you are capable of envisioning
link |
02:36:25.720
and working towards. I think that enables you to put synthetic kinds of constructs,
link |
02:36:33.880
AIs, aliens, swarms, whatever on the same diagram because we're not talking about what
link |
02:36:40.120
you're made of or how you got here. We're talking about what are the size and complexity
link |
02:36:44.720
of the goals towards which you can work.
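The semi-quantitative diagram described here can be sketched in code. A minimal sketch, assuming we reduce a cognitive light cone to just two numbers, a spatial extent and a temporal extent; the creatures and magnitudes below are illustrative assumptions for the examples mentioned in the conversation, not measured values.

```python
from dataclasses import dataclass

@dataclass
class CognitiveLightCone:
    """Outer boundary in space and time of the largest goal a system can pursue."""
    name: str
    spatial_extent_m: float   # how far away its largest goals reach (meters) - assumed
    temporal_extent_s: float  # how far its goals extend in time (seconds) - assumed

    def encloses(self, other: "CognitiveLightCone") -> bool:
        # One cone encloses another if it is at least as large on both axes.
        return (self.spatial_extent_m >= other.spatial_extent_m
                and self.temporal_extent_s >= other.temporal_extent_s)

# Illustrative, made-up magnitudes for the creatures discussed above.
bacterium = CognitiveLightCone("bacterium", 1e-5, 60)    # local chemical gradient, ~a minute
dog = CognitiveLightCone("dog", 1e3, 3600 * 24)          # nearby territory, roughly a day
human = CognitiveLightCone("human", 1.3e7, 3.2e9)        # planetary-scale goals, ~a century

for creature in (bacterium, dog, human):
    print(f"{creature.name}: {creature.spatial_extent_m:g} m, {creature.temporal_extent_s:g} s")

print(human.encloses(dog))  # a human's goal space is larger on both axes
```

Note that this deliberately ignores what the creature is made of or how it got here, matching the point that AIs, aliens, and swarms can all be placed on the same diagram.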
link |
02:36:46.760
Are there any other terms that pop to mind that are interesting?
link |
02:36:50.760
I'm trying to remember. I have a list of them somewhere on my website.
link |
02:36:54.200
Yeah, definitely check it out. Morphoceutical, I like that one. Ionoceutical.
link |
02:37:01.840
Yeah. Those refer to different types of interventions in the regenerative medicine space. A morphoceutical
link |
02:37:08.600
is a kind of intervention that really targets the cells' decision-making
link |
02:37:16.200
process about what they're going to build. Ionoceuticals are like that, but focused more
link |
02:37:20.640
specifically on the bioelectrics. There's also, of course, biochemical, biomechanical,
link |
02:37:24.200
who knows what else, maybe optical kinds of signaling systems there as well.
link |
02:37:29.160
Target morphology is interesting. It's designed to capture the idea that it's not just feedforward
link |
02:37:37.920
emergence. Of course that happens too, but in many cases
link |
02:37:41.980
in biology, the system is specifically working towards a target in anatomical morphospace.
link |
02:37:48.440
It's a navigation task, really. These kinds of problem solving can be formalized as navigation
link |
02:37:57.200
tasks, where the system is really going towards a particular region. How do you know? Because
link |
02:38:00.920
you deviate it and then it goes back.
link |
02:38:03.720
Let me ask you, because you've really challenged a lot of ideas in biology in the work you
link |
02:38:12.160
do, probably because some of your rebelliousness comes from the fact that you came from a different
link |
02:38:18.160
field of computer engineering, but could you give advice to young people today in high
link |
02:38:23.800
school or college who are trying to pave their life story, whether it's in science
link |
02:38:31.600
or elsewhere, how they can have a career they can be proud of or a life they can be proud
link |
02:38:36.440
of?
link |
02:38:37.440
Boy, it's dangerous to give advice because things change so fast, but one central thing
link |
02:38:42.320
I can say is this: moving up through academia and whatnot, you will be surrounded by really
link |
02:38:47.880
smart people. What you need to do is be very careful at distinguishing specific critique
link |
02:38:56.280
versus kind of meta advice. What I mean by that is if somebody really smart and successful
link |
02:39:03.840
and obviously competent is giving you specific critiques on what you've done, that's gold.
link |
02:39:11.400
It's an opportunity to hone your craft, to get better at what you're doing, to learn,
link |
02:39:15.200
to find your mistakes. That's great.
link |
02:39:17.520
If they are telling you what you ought to be studying, how you ought to approach things,
link |
02:39:23.080
what is the right way to think about things, you should probably ignore most of that. The
link |
02:39:28.880
reason I make that distinction is that a lot of really successful people are very well
link |
02:39:36.200
calibrated on their own ideas and their own field and their own area. They know exactly
link |
02:39:43.080
what works and what doesn't and what's good and what's bad, but they're not calibrated
link |
02:39:46.460
on your ideas. The things they will say, oh, this is a dumb idea, don't do this and you
link |
02:39:53.040
shouldn't do that, that stuff is generally worse than useless. It can be very demoralizing
link |
02:40:01.940
and really limiting. What I say to people is read very broadly, work really hard, know
link |
02:40:09.080
what you're talking about, take all specific criticism as an opportunity to improve what
link |
02:40:14.220
you're doing, and then completely ignore everything else. I can tell you from my own experience,
link |
02:40:21.800
most of what I consider to be interesting and useful things that we've done, very smart
link |
02:40:26.280
people have said, this is a terrible idea, don't do that. I think we just don't know.
link |
02:40:32.960
We have no idea beyond our own. At best, we know what we ought to be doing. We very rarely
link |
02:40:37.720
know what anybody else should be doing.
link |
02:40:39.320
Yeah, and their perspective has also been calibrated, not just on their field
link |
02:40:45.240
and specific situation, but also on the state of that field at a particular time in the
link |
02:40:51.520
past. There's not many people in this world that are able to achieve revolutionary success
link |
02:40:57.880
multiple times in their life. Whenever you say somebody very smart, usually what that
link |
02:41:02.680
means is somebody smart who achieved success at a certain point in their life
link |
02:41:09.120
and people often get stuck in that place where they found success. To be constantly challenging
link |
02:41:14.720
your worldview is a very difficult thing. Also, at the same time, and that's the
link |
02:41:23.240
weird thing about life, if a lot of people tell you that something
link |
02:41:29.480
is stupid or is not going to work, that either means it's stupid and not going to work,
link |
02:41:36.160
or it's actually a great opportunity to do something new and you don't know which one
link |
02:41:42.680
it is, and it's probably equally likely to be either. Well, I don't know the probabilities.
link |
02:41:49.920
Depends how lucky you are, depends how brilliant you are, but you don't know and so you can't
link |
02:41:53.400
take that advice as actual data.
link |
02:41:55.680
Yeah. This is kind of hard to describe and fuzzy, but I'm a firm believer
link |
02:42:03.920
that you have to build up your own intuition. So over time, you have to take your own risks
link |
02:42:09.160
that seem like they make sense to you and then learn from that and build up so that
link |
02:42:13.580
you can trust your own gut about what's a good idea. Sometimes you'll
link |
02:42:18.120
make mistakes and they'll turn out to be a dead end and that's fine, that's science,
link |
02:42:21.560
but what I tell my students is life is hard and science is hard and you're going to sweat
link |
02:42:28.560
and bleed and everything and you should be doing that for ideas that really fire you
link |
02:42:34.880
up inside, and don't let the common denominator of standardized approaches
link |
02:42:44.940
to things slow you down.
link |
02:42:46.800
So you mentioned planaria being in some sense immortal. What's the role of death in life?
link |
02:42:53.480
What's the role of death in this whole process we have? When you look at biological
link |
02:42:58.760
systems, is death an important feature, especially as you climb up the hierarchy of competency?
link |
02:43:08.000
Boy, that's an interesting question. I think that it's certainly a factor that promotes
link |
02:43:17.320
change and turnover and an opportunity to do something different the next time for a
link |
02:43:24.520
larger-scale system. So apoptosis, it's really interesting. I mean, death is really interesting
link |
02:43:29.520
in a number of ways. One is, you could think about what was the first thing
link |
02:43:33.040
to die? That's an interesting question. What was the first creature that you could say
link |
02:43:37.420
actually died? It's a tough thing because we don't have a great definition for it. So
link |
02:43:42.880
if you bring a cabbage home and you put it in your fridge, at what point are you going
link |
02:43:48.480
to say it's died, right? So it's kind of hard to know. There's one paper in which I talk
link |
02:43:58.880
about this idea. I mean, think about it: imagine that you have a creature
link |
02:44:04.960
that's aquatic, let's say a frog or a tadpole, and the animal dies
link |
02:44:11.680
in the pond for whatever reason. Most of the cells are still alive. So you could
link |
02:44:17.600
imagine that if when it died, there was some sort of breakdown of the connectivity between
link |
02:44:23.200
the cells, a bunch of cells crawled off, they could have a life as amoebas. Some of them
link |
02:44:28.760
could join together and become a xenobot and twiddle around, right? So we know from planaria
link |
02:44:33.780
that there are cells that don't obey the Hayflick limit and just sort of live forever. So you
link |
02:44:37.800
could imagine an organism where, when the organism dies, it doesn't disappear; rather, the individual
link |
02:44:42.400
cells that are still alive, crawl off and have a completely different kind of lifestyle
link |
02:44:46.280
and maybe come back together as something else, or maybe they don't. So all of this,
link |
02:44:50.080
I'm sure, is happening somewhere on some planet. So death, in any case, we already kind
link |
02:44:57.080
of knew this: we know that when something dies, the molecules go
link |
02:45:00.640
through the ecosystem, but even the cells don't necessarily die at that point, they
link |
02:45:05.200
might have another life in a different way. You can think about something like HeLa, right?
link |
02:45:09.720
The HeLa cell line, you know, that's had this incredible life. There are
link |
02:45:14.400
way more HeLa cells now than there ever were, than there were when she
link |
02:45:18.040
was alive.
link |
02:45:19.040
It seems like as the organisms become more and more complex, like if you look at the
link |
02:45:22.240
mammals, their relationship with death becomes more and more complex. So the survival imperative
link |
02:45:29.800
starts becoming interesting and humans are arguably the first species that have invented
link |
02:45:37.400
the fear of death. The understanding that you're going to die, let's put it this way,
link |
02:45:43.120
long-term, so not instinctual, like, I need to run away from the thing that's going
link |
02:45:49.560
to eat me, but starting to contemplate the finiteness of life.
link |
02:45:53.960
Yeah. I mean, one thing about the human cognitive light cone
link |
02:45:59.400
is that, as far as we know, for the first time, you might have goals that
link |
02:46:04.200
are longer than your lifespan, that are not achievable, right? So if you are,
link |
02:46:08.160
let's say, and I don't know if this is true, but if you're a goldfish and you have a 10
link |
02:46:11.800
minute attention span, I'm not sure if that's true, but let's say there's some
link |
02:46:14.760
organism with a short cognitive light cone that way, all of your goals are
link |
02:46:20.260
potentially achievable because you're probably going to live the next 10 minutes. So whatever
link |
02:46:23.560
goals you have, they are totally achievable. If you're a human, you could have all kinds
link |
02:46:27.440
of goals that are guaranteed not achievable because they just take too long, like guaranteed
link |
02:46:31.240
you're not going to achieve them. So I wonder, you know, is that
link |
02:46:35.840
like a perennial sort of thorn in our psychology that drives some
link |
02:46:39.920
psychosis or whatever? I have no idea. Another interesting thing about that,
link |
02:46:43.920
actually, I've been thinking about this a lot in the last couple of weeks, this notion
link |
02:46:47.720
of giving up. So you would think that evolutionarily, the most adaptive way of being is that you
link |
02:46:58.480
go: you fight as long as you physically can, and then when you can't, you
link |
02:47:02.960
can't. There are videos you can find of insects
link |
02:47:06.680
crawling around where most of the body is already gone, and it's
link |
02:47:10.000
still sort of crawling, you know, Terminator style, right? As long as you
link |
02:47:15.240
physically can, you keep going. Mammals don't do that. So a lot of mammals, including rats,
link |
02:47:20.320
have this thing where, when they think it's a hopeless situation, they literally
link |
02:47:25.780
give up and die when physically, they could have kept going. I mean, humans certainly
link |
02:47:29.060
do this. And there are some really unpleasant experiments that this guy, I forget
link |
02:47:33.320
his name, did with drowning rats, where rats normally drown after a
link |
02:47:37.960
couple of minutes, but if you teach them that if you just tread water for a couple of minutes,
link |
02:47:41.480
you'll get rescued, they can tread water for like an hour. Otherwise, they literally
link |
02:47:45.360
just give up and die. And so evolutionarily, that doesn't seem like a good strategy at
link |
02:47:49.920
all, since what's the benefit ever of giving up? You
link |
02:47:53.320
just do what you can, and you know, one time out of 1000, you'll actually get rescued, right?
link |
02:47:57.400
But this issue of actually giving up suggests some very interesting metacognitive controls
link |
02:48:03.080
where you've now gotten to the point where survival actually isn't the top drive. And
link |
02:48:08.080
for whatever reason, there are other considerations that have taken over.
link |
02:48:11.560
And I think that's uniquely a mammalian thing, but I don't know.
link |
02:48:15.560
Yeah, Camus, the existentialist question of why live. Just the fact that humans commit
link |
02:48:23.080
suicide is a really fascinating question from an evolutionary perspective.
link |
02:48:27.880
And that's the other thing: what is the simplest system,
link |
02:48:33.360
whether evolved or natural or whatever, that is able to do that? Right? Like, you
link |
02:48:38.760
can think, you know, what other animals are actually able to do that? I'm not sure.
link |
02:48:42.440
Maybe you could see animals over time, for some reason, lowering the value of survive
link |
02:48:49.760
at all costs, gradually, until other objectives might become more important.
link |
02:48:55.560
Maybe. I don't know how, evolutionarily, that gets off the ground. That just
link |
02:48:59.320
seems like that would have such strong pressure against it, you know. Just imagine:
link |
02:49:06.600
if you were a mutant in a population that had less
link |
02:49:13.240
of a survival imperative, would your genes outperform the others?
link |
02:49:19.200
Is there such a thing as population selection? Because maybe suicide is a way for organisms
link |
02:49:26.440
to decide for themselves that they're not fit for the environment, somehow?
link |
02:49:31.840
Yeah, that's really contrarian. You know, population-level selection is a kind
link |
02:49:36.660
of deeply controversial area. But it's tough because, on the face of it, if that was your
link |
02:49:42.840
genome, it wouldn't get propagated because you would die and then your neighbor who didn't
link |
02:49:47.040
have that would have all the kids.
link |
02:49:49.040
It feels like there could be some deep truth there that we're not understanding. What about
link |
02:49:55.140
you yourself as one biological system? Are you afraid of death?
link |
02:49:59.300
To be honest, especially now, getting older and having helped a couple
link |
02:50:05.820
of people pass, I'm more concerned with what's a good way to go. Nowadays,
link |
02:50:14.880
I don't know what that is. You know, sitting in a facility that sort of tries
link |
02:50:19.160
to stretch you out as long as it can, that doesn't seem good. And there's
link |
02:50:24.840
not a lot of opportunities to, I don't know, sacrifice yourself for something useful,
link |
02:50:29.400
right? There's not terribly many opportunities for that in modern society. So I don't know.
link |
02:50:33.640
I'm not particularly worried about death itself.
link |
02:50:38.040
But I've seen it happen, and it's not pretty. And I don't know what
link |
02:50:46.380
a better alternative is.
link |
02:50:48.080
So the existential aspect of it does not worry you deeply? The fact that this ride ends?
link |
02:50:56.360
No, it began. I mean, the ride began, right? There were I don't know how many billions
link |
02:51:01.340
of years before that when I wasn't around. So that's okay.
link |
02:51:04.740
But isn't the experience of life such that it almost feels like you're immortal? Because of the
link |
02:51:10.520
way you make plans, the way you think about the future. I mean, if you look at
link |
02:51:15.720
your own personal rich experience, yes, you can understand: okay, eventually I will die, as
link |
02:51:22.360
people I love have died. So surely I will die, and it hurts and so on. But like,
link |
02:51:28.960
it sure doesn't feel that way. It's so easy to get lost in feeling like this is going to go on forever.
link |
02:51:34.240
Yeah, it's a little bit like the people who say they don't believe in free will, right?
link |
02:51:37.320
I mean, you can say that, but when you go to a restaurant, you still have to pick
link |
02:51:41.680
a soup and stuff. I've actually seen that happen
link |
02:51:46.080
at lunch with a well-known philosopher. He didn't believe in free
link |
02:51:49.920
will, and when the waitress came around he was like, well, let me see. I was like,
link |
02:51:53.600
what are you doing here? You're going to choose a sandwich, right? So I think it's one
link |
02:51:58.200
of those things. I think you can know that you're not going to live forever,
link |
02:52:02.100
but it's not practical to live that way. So you buy
link |
02:52:07.100
insurance and you do some stuff like that. But mostly, I think
link |
02:52:11.920
you just live as if you can make plans.
link |
02:52:17.440
We talked about all kinds of life. We talked about all kinds of embodied minds. What do
link |
02:52:22.520
you think is the meaning of it all? What's the meaning of all the biological lives we've
link |
02:52:28.000
been talking about here on Earth? Why are we here?
link |
02:52:33.280
I don't know that that's a well-posed question, other than the existential
link |
02:52:38.920
question you posed before.
link |
02:52:40.900
Is that question hanging out with the question of what is consciousness at a retreat
link |
02:52:47.000
somewhere, sipping piña coladas, because they're both ambiguously defined?
link |
02:52:55.280
Maybe. I'm not sure that any of these things really ride on the correctness of our scientific
link |
02:53:01.660
understanding. But just for an example, right? I've always
link |
02:53:06.740
found it weird that people get really worked up to find out realities about their
link |
02:53:16.760
bodies, for example. Right? You've seen Ex Machina, right? There's this great
link |
02:53:22.820
scene where he's cutting his hand to find out, you know, if he's full of cogs. Now,
link |
02:53:26.120
to me, right, if I open up and I find a bunch of cogs, my conclusion
link |
02:53:31.880
is not, oh, crap, I must not have true cognition, that sucks. My conclusion is, wow, cogs can
link |
02:53:37.360
have true cognition. Great. So it seems to me, I guess I'm with
link |
02:53:42.840
Descartes on this one: whatever the truth ends up being of what
link |
02:53:48.240
consciousness is and how it can be conscious, none of that is going to alter my primary
link |
02:53:53.080
experience, which is: this is what it is. And if a bunch of molecular networks can
link |
02:53:56.600
do it, fantastic. If it turns out that there's something noncorporeal, you know, great.
link |
02:54:03.300
We'll study that, whatever. But the fundamental existential aspect of it is,
link |
02:54:09.200
you know, if somebody told me today that, yeah, you were created yesterday
link |
02:54:13.400
and all your memories are sort of fake, kind of like Boltzmann
link |
02:54:18.280
brains, right, and human skepticism, all that. Yeah, OK. Well,
link |
02:54:23.280
but here I am now. So it's the experience. It's primal; that's the
link |
02:54:31.280
thing that matters. So the backstory doesn't matter? I think so. From a first
link |
02:54:36.300
person perspective, now. From a third-person perspective, scientifically, it's all very interesting.
link |
02:54:39.600
From a third-person perspective, I could say, wow, that's amazing that this
link |
02:54:43.760
happens, and how does it happen, and whatever. But from a first-person perspective, I couldn't
link |
02:54:48.000
care less. What I learn from any of these scientific
link |
02:54:52.020
facts is: OK, well, then I guess that's what is sufficient
link |
02:54:57.160
to give me my amazing first-person perspective. I think if you dig deeper
link |
02:55:01.820
and deeper and get surprising answers to why the hell we're here, it might give
link |
02:55:10.100
you some guidance on how to live. Maybe. I don't know. That would be nice. On the one
link |
02:55:18.680
hand, you might be right, because I don't know what else could possibly
link |
02:55:23.240
give you that guidance, right? So you would think that it would have to be that, it
link |
02:55:26.240
would have to be science, because there isn't anything else. So maybe.
link |
02:55:30.400
On the other hand, I am really not sure how you go, as they say,
link |
02:55:36.680
from an is to an ought, right, from any factual description of what's going on. This goes
link |
02:55:41.120
back to the naturalness question. Just because somebody says, oh, man, that's completely not
link |
02:55:44.920
natural, it's never happened on Earth before, I'm not impressed by that whatsoever. I think
link |
02:55:50.000
whatever has or hasn't happened, we are now in a position to do better if we can.
link |
02:55:56.280
Right. Well, this is also because you said there's science and there's nothing else. It's
link |
02:56:03.680
really tricky to know how to intellectually deal with a thing that science doesn't currently
link |
02:56:12.000
understand. The thing is, if you believe that science solves everything,
link |
02:56:22.880
you can too easily in your mind think our current understanding has solved
link |
02:56:30.280
everything. Right. It jumps really quickly from science as
link |
02:56:36.120
a process to the science of today. You could just look at human
link |
02:56:43.000
history: throughout human history, physicists and everybody would claim we've
link |
02:56:48.640
solved everything. Sure. Like, there's a few small things to figure out,
link |
02:56:53.240
and we've basically solved everything. Where in reality, I think asking what is the
link |
02:56:58.480
meaning of life is resetting the palette: we might be tiny and confused and not
link |
02:57:08.120
have anything figured out. It's almost going to be hilarious a few centuries from now when
link |
02:57:12.800
they look back at how dumb we were. Yeah, I 100% agree. So when I say science and nothing else,
link |
02:57:21.480
I certainly don't mean the science of today, because I think overall we
link |
02:57:27.640
know very little. I think most of the things that we're sure of now are going to be, as
link |
02:57:32.400
you said, are going to look hilarious down the line. So I think we're just at the beginning
link |
02:57:36.280
of a lot of really important things. When I say nothing but science, I also include
link |
02:57:42.320
the kind of first-person science that you do. So the interesting thing about
link |
02:57:48.000
I think about consciousness and studying consciousness and things like that in the first person is
link |
02:57:52.120
unlike doing science in the third person, where you as the scientist are minimally changed
link |
02:57:57.760
by it, maybe not at all. So when I do an experiment, I'm still me, there's the experiment, whatever
link |
02:58:01.360
I've done, I've learned something, so that's a small change. But overall, that's it.
link |
02:58:04.900
In order to really study consciousness, you will you are part of the experiment, you will
link |
02:58:10.640
be altered by that experiment, right? Whatever, whatever it is that you're doing, whether
link |
02:58:13.920
it's some sort of contemplative practice or some sort of psychoactive, you know, whatever.
link |
02:58:22.120
You are now your own experiment, right? And so I fold
link |
02:58:26.160
that in, I think that's part of it. I think that exploring our own mind and our
link |
02:58:29.960
own consciousness is very important. I think much of it is not captured by what currently
link |
02:58:34.680
is third-person science, for sure. But ultimately, I include all of that in science, with a capital
link |
02:58:41.520
S, in terms of, like, a rational investigation of both first- and third-person aspects of
link |
02:58:48.800
our world.
link |
02:58:50.300
We are our own experiment, beautifully put. And when two systems get to interact
link |
02:58:57.960
with each other, that's the kind of experiment. So I'm deeply honored that you would do this
link |
02:59:03.780
experiment with me today. Thanks so much. I'm a huge fan of your work. Likewise, thank
link |
02:59:07.760
you for doing everything you're doing. I can't wait to see the kind of incredible things
link |
02:59:13.800
you build. So thank you for talking. Really appreciate being here. Thank you.
link |
02:59:18.200
Thank you for listening to this conversation with Michael Levin. To support this podcast,
link |
02:59:22.200
please check out our sponsors in the description. And now let me leave you with some words from
link |
02:59:26.760
Charles Darwin in The Origin of Species. From the war of nature, from famine and death,
link |
02:59:35.760
the most exalted object which we are capable of conceiving, namely, the production of the
link |
02:59:41.000
higher animals, directly follows. There is grandeur in this view of life, with its several
link |
02:59:47.600
powers having been originally breathed into a few forms, or into one, and that whilst
link |
02:59:54.880
this planet has gone cycling on according to the fixed law of gravity, from so
link |
03:00:06.880
simple a beginning, endless forms most beautiful and most wonderful have been, and are being,
link |
03:00:06.880
evolved. Thank you for listening, and hope to see you next time.