
Bret Weinstein: Truth, Science, and Censorship in the Time of a Pandemic | Lex Fridman Podcast #194



link |
00:00:00.000
The following is a conversation with Bret Weinstein, evolutionary biologist, author,
link |
00:00:05.280
co-host of the Dark Horse podcast, and, as he says, a reluctant radical.
link |
00:00:11.120
Even though we've never met or spoken before this, we both felt like we've been friends for a long
link |
00:00:16.480
time. I don't agree on everything with Bret, but I'm sure as hell happy he exists in this weird
link |
00:00:22.720
and wonderful world of ours. Quick mention of our sponsors, Jordan Harbinger Show, ExpressVPN,
link |
00:00:29.840
Magic Spoon, and Four Sigmatic. Check them out in the description to support this podcast.
link |
00:00:35.760
As a side note, let me say a few words about COVID-19 and about science broadly. I think science is
link |
00:00:42.080
beautiful and powerful. It is the striving of the human mind to understand and to solve the
link |
00:00:48.240
problems of the world. But as an institution, it is susceptible to the flaws of human nature,
link |
00:00:54.160
to fear, to greed, power, and ego. 2020 is the story of all of these, with both scientific
link |
00:01:02.080
triumph and tragedy. We needed great leaders, and we didn't get them. What we needed were leaders
link |
00:01:08.400
who communicate in an honest, transparent, and authentic way about the uncertainty of what we
link |
00:01:13.360
know and the large-scale scientific efforts to reduce that uncertainty and to develop solutions.
link |
00:01:18.800
I believe there are several candidates for solutions that could have all saved hundreds of
link |
00:01:23.760
billions of dollars and lessened or eliminated the suffering of millions of people. Let me mention
link |
00:01:30.800
five of the categories of solutions: masks, at-home testing, anonymized contact tracing,
link |
00:01:36.880
antiviral drugs, and vaccines. Within each of these categories, institutional leaders should
link |
00:01:42.640
have constantly asked and answered publicly, honestly, the following three questions. One,
link |
00:01:49.440
what data do we have on the solution and what studies are we running to get more and better data?
link |
00:01:55.120
Two, given the current data and uncertainty, how effective and how safe is the solution?
link |
00:02:01.120
Three, what is the timeline and cost involved with mass manufacture and distribution of the
link |
00:02:06.160
solution? In the service of these questions, no voices should have been silenced. No ideas
link |
00:02:11.680
left off the table. Open data, open science, open honest scientific communication and debate
link |
00:02:17.520
was the way, not censorship. There are a lot of ideas out there that are bad, wrong, dangerous,
link |
00:02:24.720
but the moment we have the hubris to say we know which ideas those are is the moment we'll lose
link |
00:02:30.720
our ability to find the truth, to find solutions, the very things that make science beautiful
link |
00:02:36.640
and powerful in the face of all the dangers that threaten the well-being and the existence of humans
link |
00:02:42.160
on earth. This conversation with Bret is less about the ideas we talk about. We agree on some,
link |
00:02:47.600
disagree on others. It is much more about the very freedom to talk, to think, to share ideas.
link |
00:02:54.880
This freedom is our only hope. Bret should never have been censored. I asked Bret to do this
link |
00:03:01.040
podcast, to show solidarity and to show that I have hope for science and for humanity.
link |
00:03:08.080
This is the Lex Fridman Podcast, and here's my conversation with Bret Weinstein.
link |
00:03:14.320
What to you is beautiful about the study of biology, the science, the engineering, the philosophy of
link |
00:03:20.240
it? It's a very interesting question. I must say, at one level, it's not a conscious thing. I can
link |
00:03:27.920
say a lot about why as an adult I find biology compelling, but as a kid, I was completely
link |
00:03:35.120
fascinated with animals. I loved to watch them and think about why they did what they did,
link |
00:03:41.280
and that developed into a very conscious passion as an adult, but I think in the same way that one
link |
00:03:49.920
is drawn to a person, I was drawn to the never-ending series of near miracles that exist across
link |
00:04:00.560
biological nature. When you see a living organism, do you see it from an evolutionary biology
link |
00:04:07.440
perspective of this entire thing that moves around in this world, or do you see it from an engineering
link |
00:04:12.800
perspective that are like first principles almost down to the physics, the little components that
link |
00:04:19.120
build up hierarchies, where you have first proteins, then cells and organs and all
link |
00:04:26.400
that kind of stuff. Do you see low level or do you see high level? Well, the human mind is a
link |
00:04:31.840
strange thing, and I think it's probably a bit like a time-sharing machine in which I have
link |
00:04:38.960
different modules. We don't know enough about biology for them to connect, so they exist in
link |
00:04:43.920
isolation, and I'm always aware that they do connect, but I basically have to step into a
link |
00:04:50.080
module in order to see the evolutionary dynamics of the creature and the lineage that it belongs to.
link |
00:04:56.800
I have to step into a different module to think of that lineage over a very long time scale,
link |
00:05:02.080
a different module still to understand what the mechanisms inside would have to look like to account
link |
00:05:07.360
for what we can see from the outside. I think that probably sounds really complicated, but
link |
00:05:17.520
one of the things about being involved in a topic like biology and doing so for
link |
00:05:24.240
really not even just my adult life but my whole life, is that it becomes second nature.
link |
00:05:28.960
And when we see somebody do an amazing parkour routine or something like that,
link |
00:05:36.880
we think about what they must be doing in order to accomplish that. But of course,
link |
00:05:41.760
what they are doing is tapping into some kind of zone, right? They are in a zone in which they are
link |
00:05:48.880
in such command of their center of gravity, for example, that they know how to hurl it around
link |
00:05:55.920
a landscape so that they always land on their feet. And I would just say, for anyone who hasn't
link |
00:06:03.920
found a topic on which they can develop that kind of facility, it is absolutely worthwhile.
link |
00:06:11.360
It's really something that human beings are capable of doing across a wide range of topics.
link |
00:06:16.080
Many things our ancestors didn't even have access to. And that flexibility of humans,
link |
00:06:21.840
that ability to repurpose our machinery for topics that are novel, means really the world
link |
00:06:30.240
is your oyster. You can figure out what your passion is and then figure out all of the angles
link |
00:06:34.640
that one would have to pursue to really deeply understand it. And it is well worth having at
link |
00:06:40.880
least one topic like that. You mean embracing the full adaptability of both the body and the
link |
00:06:48.480
mind? I don't know what to attribute the parkour to, like biomechanics of how our bodies can move
link |
00:06:56.400
or is it the mind? How much percent wise is it the entirety of the hierarchies of biology that
link |
00:07:04.880
we've been talking about or is it just all the mind? The way to think about creatures is that
link |
00:07:10.880
every creature is two things simultaneously. A creature is a machine of sorts, right? It's not
link |
00:07:18.080
a machine in the, I call it an aqueous machine, right? And it's run by an aqueous computer,
link |
00:07:24.480
right? So it's not identical to our technological machines. But every creature is both a machine
link |
00:07:31.760
that does things in the world sufficient to accumulate enough resources to continue surviving,
link |
00:07:37.840
to reproduce. It is also a potential. So each creature is potentially, for example,
link |
00:07:45.440
the most recent common ancestor of some future clade of creatures that will look very different
link |
00:07:49.520
from it. And if a creature is very, very good at being a creature, but not very good in terms of
link |
00:07:55.440
the potential it has going forward, then that lineage will not last very long into the future
link |
00:08:01.280
because change will throw challenges at it that its descendants will not be able to meet. So
link |
00:08:07.360
the thing about humans is we are a generalist platform. And we have the ability to swap out
link |
00:08:15.040
our software to exist in many, many different niches. And I was once watching an interview
link |
00:08:22.720
with this British group of parkour experts who were discussing what it is they
link |
00:08:29.760
do and how it works. And what they essentially said is, look, you're tapping into deep monkey
link |
00:08:36.320
stuff, right? And I thought, yeah, that's about right. And anybody who is proficient at something
link |
00:08:45.360
like skiing or skateboarding has the experience of flying down the hill on skis, for example,
link |
00:08:55.200
bouncing from the top of one mogul to the next. And if you really pay attention, you will discover
link |
00:09:02.560
that your conscious mind is actually a spectator. It's there. It's involved in the experience,
link |
00:09:07.280
but it's not driving. Some part of you knows how to ski, and it's not the part of you that
link |
00:09:10.560
knows how to think. And I would just say that what accounts for this flexibility in humans
link |
00:09:19.120
is the ability to bootstrap a new software program and then drive it into the unconscious
link |
00:09:26.080
layer where it can be applied very rapidly. And I will be shocked if the exact thing doesn't exist
link |
00:09:33.600
in robotics. If you programmed a robot to deal with circumstances that were novel to it,
link |
00:09:39.360
how would you do it? It would have to look something like this.
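Bret's picture here, slow deliberate reasoning that gets driven down into a fast reflex layer, can be sketched in a few lines. This is only a toy illustration of the idea, not any real robotics API; every name in it is invented:

```python
# Toy sketch of "deliberate first, reflex later". Novel situations go
# through a slow, conscious-style solver whose result is cached, so a
# repeat encounter is served instantly from the "unconscious" layer.

def deliberate_solve(situation):
    # Stand-in for slow, flexible reasoning (planning, search, etc.).
    return f"plan-for-{situation}"

class Controller:
    def __init__(self):
        self.reflexes = {}  # the fast, "compiled" layer

    def act(self, situation):
        if situation not in self.reflexes:
            # Novel input: fall back to deliberation, then cache the result.
            self.reflexes[situation] = deliberate_solve(situation)
        return self.reflexes[situation]

c = Controller()
c.act("icy-patch")  # slow path: solved deliberately, then cached
c.act("icy-patch")  # fast path: served from the reflex layer
```

The second call never touches the slow solver, which is the flexibility-then-speed trade the conversation keeps returning to.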
link |
00:09:41.840
There's a certain kind of magic. You're right with the consciousness being an observer. When
link |
00:09:47.920
you play guitar, for example, or piano, for me, music, when you get truly lost in it,
link |
00:09:54.000
I don't know what the heck is responsible for the flow of the music, the loudness of
link |
00:09:59.680
the music going up and down, the timing, the intricacies, even the mistakes, all those
link |
00:10:05.440
things, that doesn't seem to be the conscious mind. It is just observing. And yet it's somehow
link |
00:10:12.240
intricately involved. And, like you mentioned with parkour, dance is like that too. When you start
link |
00:10:18.240
up in tango dancing, when you truly lose yourself in it, it's just like you're an
link |
00:10:25.120
observer and how the hell is the body able to do that? And not only that, the physical
link |
00:10:29.680
motion is also creating the emotion, the damn-it's-good-to-be-alive feeling.
link |
00:10:40.640
But then that's also intricately connected to the full biology stack that we're operating in.
link |
00:10:46.640
I don't know how difficult it is to replicate. We were talking offline about Boston Dynamics
link |
00:10:52.000
robots. Recently, they did parkour, they did flips, and they've also done some dancing.
link |
00:11:01.280
And it's something I think a lot about because what most people don't realize, because they
link |
00:11:07.600
don't look deep enough, is that those robots are hard-coded to do those things. The robots didn't
link |
00:11:14.320
figure it out by themselves. And yet the fundamental aspect of what it means to be human is that
link |
00:11:20.240
process of figuring out, of making mistakes. And then there's something about overcoming
link |
00:11:25.440
those challenges and the mistakes and like figuring out how to lose yourself in the magic of the
link |
00:11:30.160
dancing or just movement. That learning process is what it means to be human. So that's what I
link |
00:11:37.440
want to do, almost as a fun side thing, with the Boston Dynamics robots: have them
link |
00:11:45.760
learn and see what they figure out, even if they make mistakes. I want to let Spot make mistakes
link |
00:11:55.040
and in so doing discover what it means to be alive, discover beauty because I think that's
link |
00:12:02.480
the essential aspect of mistakes. The Boston Dynamics folks want Spot to be perfect
link |
00:12:09.120
because they don't want Spot to ever make mistakes, because they want it to operate in
link |
00:12:12.880
factories, they want it to be very safe and so on. For me, if you construct the environment,
link |
00:12:18.960
if you construct a safe space for robots and allow them to make mistakes, something beautiful
link |
00:12:25.120
might be discovered. But that requires a lot of brain power. So Spot is currently very dumb,
link |
00:12:31.200
and I'm going to give it a brain. So first, make it see. Currently it can't see, meaning computer
link |
00:12:37.440
vision: it has to understand its environment, has to see all the humans. But then it also has to be able
link |
00:12:42.720
to learn: learn about its movement, learn how to use its body to communicate with others,
link |
00:12:48.960
all those kinds of things that dogs know how to do well, humans know how to do somewhat well.
link |
00:12:54.880
I think that's a beautiful challenge. But first you have to allow the robot to make mistakes.
link |
00:12:59.840
Well, I think your objective is laudable, but you're going to realize that the Boston
link |
00:13:05.840
Dynamics folks are right the first time Spot poops on your rug.
link |
00:13:11.360
I hear the same thing about kids and so on. Yes. I still want to have kids.
link |
00:13:14.880
No, you should. It's a great experience. So let me step back into what you said in a couple
link |
00:13:20.080
of different places. One, I've always believed that the missing element in robotics and
link |
00:13:26.560
artificial intelligence is proper development. It is no accident. It is no mere coincidence
link |
00:13:33.440
that human beings are the most dominant species on planet earth and that we have the longest
link |
00:13:38.160
childhoods of any creature on earth by far. Development is the key to the flexibility.
link |
00:13:44.720
And so the capability of a human at adulthood is the mirror image. It's the flip side of our
link |
00:13:55.600
helplessness at birth. So I'll be very interested to see what happens in your robot project if you
link |
00:14:03.120
do not end up reinventing childhood for robots, which of course is foreshadowed in 2001: A Space Odyssey quite
link |
00:14:09.840
brilliantly. But I also want to point out you can see this issue of your conscious mind becoming
link |
00:14:18.000
a spectator very well if you compare tennis to table tennis. If you watch a tennis game,
link |
00:14:26.960
you could imagine that the players are highly conscious as they play. You cannot imagine
link |
00:14:33.520
that if you've ever played ping pong decently. A volley in ping pong is so fast that your conscious
link |
00:14:40.000
mind, if your reactions had to go through your conscious mind, you wouldn't be able to play.
link |
00:14:44.880
So you can detect that your conscious mind, while very much present, isn't driving. And you can also
link |
00:14:50.720
detect where consciousness does usefully intrude. If you go up against an opponent in table tennis
link |
00:14:57.600
that knows a trick that you don't know how to respond to, you will suddenly detect that something
link |
00:15:03.360
about your game is not effective. And you will start thinking about what might be, how do you
link |
00:15:08.560
position yourself so that move that puts the ball just in that corner of the table or something
link |
00:15:12.880
like that doesn't catch you off guard. And this, I believe, is where we highly conscious folks,
link |
00:15:22.000
those of us who try to think through things very deliberately and carefully, mistake consciousness
link |
00:15:27.440
for the highest kind of thinking. And I really think that this is an error. Consciousness is an
link |
00:15:34.000
intermediate level of thinking. What it does is it allows you, it's basically like uncompiled code.
link |
00:15:40.000
It doesn't run very fast, but it is capable of being adapted to new circumstances. And once the
link |
00:15:45.920
code is roughed in, right, it gets driven into the unconscious layer, and you become highly
link |
00:15:51.360
effective at whatever it is. And that from that point, your conscious mind basically remains
link |
00:15:56.080
there to detect things that aren't anticipated by the code you've already written. And so
link |
00:16:01.440
I don't exactly know how one would establish this, how one would demonstrate it. But it
link |
00:16:07.360
must be the case that the human mind contains sandboxes in which things are tested, right?
link |
00:16:15.200
Maybe you can build a piece of code and run it in parallel next to your active code so you can
link |
00:16:20.080
see how it would have done comparatively. But there's got to be some way of writing new code
link |
00:16:26.400
and then swapping it in. And frankly, I think this has a lot to do with things like sleep cycles.
link |
00:16:31.120
Very often, you know, when I get good at something, I often don't get better at it while I'm doing it.
link |
00:16:36.640
I get better at it when I'm not doing it, especially if there's time to sleep and think on it.
link |
00:16:41.600
So there's some sort of, you know, new program swapping in for old program phenomenon,
link |
00:16:46.880
which, you know, will be a lot easier to see in machines. It's going to be hard with the wetware.
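That sandbox idea, running a candidate program in parallel with the active one and swapping it in only if it does better, can be sketched as follows. The function name, the toy skills, and the scoring rule are all invented for illustration:

```python
# Hedged sketch of shadow-testing a candidate routine: score both the
# active and the candidate program on the same inputs, and keep
# whichever does better. The candidate never "drives" until it wins.

def shadow_evaluate(active, candidate, inputs, score):
    active_total = sum(score(active(x)) for x in inputs)
    candidate_total = sum(score(candidate(x)) for x in inputs)
    return candidate if candidate_total > active_total else active

active = lambda x: x         # the current, trusted skill
candidate = lambda x: x * 2  # a new skill being tried in the sandbox
best = shadow_evaluate(active, candidate, inputs=[1, 2, 3], score=lambda y: y)
# candidate scores 12 against the active program's 6, so it gets swapped in
```

The swap happens offline, between runs, which is loosely analogous to the sleep-cycle consolidation Bret describes.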
link |
00:16:53.920
I like that. I mean, it is true. As somebody that played tennis for many years,
link |
00:16:58.640
I do still think the highest form of excellence in tennis is when the conscious mind is a
link |
00:17:04.000
spectator. So the compiled code is the highest form of being human. And then consciousness is
link |
00:17:12.800
just some specific compiler. You just have, like, the Borland C++ compiler. You could have
link |
00:17:20.640
different kinds of compilers. Ultimately, the thing by which we measure the power of life,
link |
00:17:29.440
the intelligence of life, is the compiled code. And you can probably do that compilation in all kinds
link |
00:17:33.680
of ways. Yeah. I'm not saying that tennis is played consciously and table tennis isn't. I'm
link |
00:17:38.960
saying that because tennis is slowed down by just the space on the court, you could imagine
link |
00:17:45.280
that it was your conscious mind playing. But when you shrink the court down, it becomes obvious
link |
00:17:50.400
that your conscious mind is just present rather than knowing where to put the paddle. And weirdly
link |
00:17:55.760
for me, I would say this probably isn't true in a podcast situation. But if I have to give a
link |
00:18:02.400
presentation, especially if I have not overly prepared, I often find the same phenomenon
link |
00:18:08.560
when I'm giving the presentation, my conscious mind is there watching some other part of me
link |
00:18:12.480
present, which is a little jarring, I have to say. Well, that means you've gotten good at it.
link |
00:18:20.560
Not letting the conscious mind get in the way of the flow of words.
link |
00:18:24.640
Yeah, that's the sensation, to be sure. And that's the highest form of podcasting too.
link |
00:18:28.880
I mean, that's what it looks like when a podcast is really in the pocket,
link |
00:18:34.080
like Joe Rogan, just having fun and losing themselves. And that's something I aspire to
link |
00:18:41.360
as well, just losing yourself in conversation. As somebody that has a lot of anxiety with people,
link |
00:18:45.520
like I'm such an introvert. I'm scared. I was scared before you showed up. I'm scared right now.
link |
00:18:50.800
There's just anxiety. It's a giant mess. It's hard to lose yourself. It's hard to
link |
00:18:56.960
just get out of the way of your own mind. Yeah, actually, trust is a big component of that. Your
link |
00:19:05.440
conscious mind retains control if you are very uncertain. But when you do get into
link |
00:19:12.240
that zone, when you're speaking, I realize it's different for you with English as a second language,
link |
00:19:16.560
although maybe you present in Russian and, you know, it happens. But do you ever hear yourself
link |
00:19:21.840
say something and you think, oh, that's really good, right? Like you didn't come up with it?
link |
00:19:26.880
Some other part of you that you don't exactly know came up with it? I don't think I've ever
link |
00:19:33.680
heard myself in that way because I have a much louder voice that's constantly yelling in my head
link |
00:19:40.400
at, why the hell did you say that? There's a very self-critical voice that's much louder.
link |
00:19:46.640
So maybe I need to deal with that voice. But it's like, what is it called,
link |
00:19:52.960
like a megaphone just screaming so I can't hear. Oh, no, it says, good job. You said that thing
link |
00:19:57.760
really nicely. So I'm kind of focused right now on the megaphone person in the audience versus
link |
00:20:04.160
the positive. But that's definitely something to think about. It's been productive. But,
link |
00:20:10.560
you know, the place where I find gratitude and beauty and appreciation of life is in the quiet
link |
00:20:15.920
moments when I don't talk, when I listen to the world around me, when I listen to others,
link |
00:20:23.040
when I talk, I'm extremely self-critical in my mind. When I produce anything out into
link |
00:20:28.800
the world that originated with me, like any kind of creation, I'm extremely self-critical.
link |
00:20:34.880
It's good for productivity, for always striving to improve and so on. It might be bad for
link |
00:20:41.760
just appreciating the things you've created. I'm a little bit with Marvin
link |
00:20:48.640
Minsky on this, where he says the key to a productive life is to hate everything you've
link |
00:20:55.760
ever done in the past. I didn't know he said that. I must say I resonate with it a bit.
link |
00:21:02.160
And, you know, I, unfortunately, my life currently has me putting a lot of stuff into the world.
link |
00:21:07.920
And I effectively watch almost none of it. I can't stand it.
link |
00:21:14.960
Yeah. What do you make of that? I don't know. Just yesterday, I reread
link |
00:21:20.480
The Metamorphosis by Kafka, where the protagonist turns into a giant bug
link |
00:21:25.840
because of the stress that the world and his parents put on him to succeed.
link |
00:21:30.720
And, you know, I think you have to find the balance, because if you allow the
link |
00:21:37.600
self critical voice to become too heavy, the burden of the world, the pressure
link |
00:21:42.800
that the world puts on you to be the best version of yourself and so on to strive,
link |
00:21:47.280
then you become a bug, and that's a big problem. And then the world turns against you
link |
00:21:54.560
because you're a bug. You become some kind of caricature of yourself. I don't know.
link |
00:22:01.840
You become the worst version of yourself, and thereby end up destroying yourself, and then
link |
00:22:08.080
the world moves on. That's the story. That's a lovely story. I do think this is one of these
link |
00:22:14.080
places and frankly, you could map this onto all of modern human experience, but this is one of
link |
00:22:20.160
these places where our ancestral programming does not serve our modern selves. So I used to
link |
00:22:26.160
talk to students about the question of dwelling on things. Dwelling on things is famously
link |
00:22:32.960
understood to be bad, and yet it can't possibly be purely bad. The tendency toward it
link |
00:22:38.160
wouldn't exist if it was bad. So what is bad is dwelling on things past the point of utility.
link |
00:22:44.560
And that's obviously easier to say than to operationalize, but if you realize that your
link |
00:22:51.440
dwelling is, in fact, the key to upgrading your program for future well-being, and that there's
link |
00:22:58.240
a point, presumably, of diminishing returns, if not counterproductivity; there is a point at
link |
00:23:04.400
which you should stop because that is what is in your best interest, then knowing that you're
link |
00:23:09.760
looking for that point is useful. This is the point at which it is no longer useful for me
link |
00:23:14.240
to dwell on this error I have made. That's what you're looking for. And it also gives you license.
link |
00:23:20.720
If some part of you feels like it's punishing you rather than searching, then that also has a
link |
00:23:27.520
point at which it's no longer valuable and there's some liberty in realizing, yep,
link |
00:23:33.200
even the part of me that was punishing me knows it's time to stop.
link |
00:23:37.200
So if we map that onto the compiled code discussion, as a computer science person, I find that very
link |
00:23:42.160
compelling. When you compile code, you get warnings sometimes. And usually, if you're a
link |
00:23:52.960
good software engineer, you're going to make sure there are none; you treat warnings as errors.
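That discipline can be shown in miniature. This sketch uses Python's standard warnings module (the gcc analogue is compiling with -Wall -Werror); the function and messages are invented for illustration:

```python
# A hedged illustration of "treat warnings as errors": promoting
# warnings to hard exceptions forces you to resolve them instead of
# letting them scroll by.
import warnings

warnings.simplefilter("error")  # every warning now raises an exception

def count_items(items):
    if items is None:
        # Without the filter this would print and continue; with it,
        # the sloppy call below fails loudly at the call site.
        warnings.warn("count_items called with None", UserWarning)
        return 0
    return len(items)

try:
    count_items(None)
    outcome = "warning ignored"
except UserWarning:
    outcome = "treated as an error"
```

Here `outcome` ends up as "treated as an error", which is exactly the strictness being described.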
link |
00:23:58.800
So you make sure that the compilation produces no warnings. But at a certain point when you have
link |
00:24:03.200
a large enough system, you just let the warnings go. It's fine. I don't know where that warning
link |
00:24:09.200
came from. But ultimately, you need to compile the code and run with it. And I hope nothing terrible
link |
00:24:18.080
happens. Well, I think what you will find, and believe me, I think what you're talking about
link |
00:24:24.400
with respect to robots and learning is going to end up having to go to a deep developmental
link |
00:24:31.440
state and a helplessness that evolves into hypercompetence and all of that.
link |
00:24:35.440
But I noticed that I live by something that I, for lack of a better descriptor, call
link |
00:24:44.560
the theory of close calls. And the theory of close calls says that people typically miscategorize
link |
00:24:53.520
the events in their life where something almost went wrong. And, you know, for example,
link |
00:25:01.040
I was walking down the street with my college friends, and one of my
link |
00:25:06.320
friends stepped into the street thinking it was clear and was nearly hit by a car going 45 miles
link |
00:25:11.680
an hour. It would have been an absolute disaster. It might have killed her, certainly would have
link |
00:25:16.080
permanently injured her. But it didn't, you know, the car didn't touch her, right? Now, you could walk
link |
00:25:23.040
away from that and think nothing of it because, well, what is there to think? Nothing happened.
link |
00:25:28.080
Or you could think, well, what is the difference between what did happen and my death?
link |
00:25:33.440
The difference is luck. I never want that to be true, right? I never want the difference
link |
00:25:38.480
between what did happen and my death to be luck. Therefore, I should count this as very close to
link |
00:25:44.160
death. And I should prioritize coding so it doesn't happen again at a very high level.
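The theory of close calls amounts to grading events by their counterfactual severity rather than their actual outcome. A minimal sketch of that grading rule, with the fields, events, and numbers invented for illustration:

```python
# Toy sketch of the "theory of close calls": weight events by what
# almost happened, not by what did happen, when deciding what to
# guard against next.

def priority(event):
    # If only luck separated the outcome from disaster, treat the event
    # as if the disaster had occurred.
    if event["luck_was_decisive"]:
        return event["worst_plausible"]
    return event["actual_harm"]

events = [
    {"name": "near-miss crossing", "actual_harm": 0,
     "worst_plausible": 10, "luck_was_decisive": True},
    {"name": "stubbed toe", "actual_harm": 1,
     "worst_plausible": 1, "luck_was_decisive": False},
]
ranked = sorted(events, key=priority, reverse=True)
# the near miss ranks first despite causing no actual harm
```

The point of the rule is visible in the ordering: the harmless-looking near miss outranks the minor real injury.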
link |
00:25:49.680
So anyway, my basic point is the accidents and disasters and misfortune describe a distribution
link |
00:26:00.160
that tells you what's really likely to get you in the end. And so, personally, you can use them to
link |
00:26:08.400
figure out where the dangers are so that you can afford to take great risks because you have a really
link |
00:26:13.280
good sense of how they're going to go wrong. But I would also point out civilization has this
link |
00:26:17.280
problem. Civilization is now producing these events that are major disasters, but they're not
link |
00:26:24.720
existential scale yet, right? They're very serious errors that we can see. And I would argue that
link |
00:26:30.560
the pattern is you discover that we are involved in some industrial process at the point it has gone
link |
00:26:36.080
wrong, right? So I'm now always asking the question, okay, in light of the Fukushima
link |
00:26:43.360
Triple Meltdown, the financial collapse of 2008, the Deepwater Horizon blowout, COVID-19 and its
link |
00:26:51.760
probable origins in the Wuhan lab, what processes do I not know the name of yet that I will discover
link |
00:26:59.760
at the point that some gigantic accident has happened? And can we talk about the wisdom or
link |
00:27:04.960
lack thereof of engaging in that process before the accident, right? That's what a wise civilization
link |
00:27:10.080
would be doing. And yet we don't. I just want to mention something that happened a couple of days
link |
00:27:15.680
ago. I don't know if you know who JB Straubel is. He's a cofounder of Tesla and was its CTO for many,
link |
00:27:22.880
many years. His wife just died. She was riding a bicycle. And in the same thin line between death
link |
00:27:34.240
and life that many of us have been in where you walk into the intersection and there's this close
link |
00:27:40.320
call. Every once in a while, you get the short straw. I wonder how much of our own individual lives
link |
00:27:54.800
and the entirety of the human civilization rests on this little roll of the dice.
link |
00:27:59.360
Well, this is sort of my point about the close calls: there's a level at which we can't
link |
00:28:05.600
control it, right? The gigantic asteroid that comes from deep space that you don't have time
link |
00:28:11.840
to do anything about. There's not a lot we can do to hedge that out, or at least not short term.
link |
00:28:17.920
But there are lots of other things. Obviously, the financial collapse of 2008 didn't
link |
00:28:24.080
break down the entire world economy. It threatened to, but a Herculean effort managed to pull us back
link |
00:28:30.000
from the brink. The triple meltdown at Fukushima was awful. But every one of the seven fuel pools
link |
00:28:36.720
held, there wasn't a major fire that made it impossible to manage the disaster going forward.
link |
00:28:41.520
We got lucky. We could say the same thing about the blowout at the Deepwater Horizon,
link |
00:28:49.520
where a hole in the ocean floor, large enough that we couldn't have plugged it, could have opened up.
link |
00:28:54.240
All of these things could have been much, much worse. And I think we can say the same thing
link |
00:28:58.480
about COVID, as terrible as it is. We cannot say for sure that it came from the Wuhan lab,
link |
00:29:04.560
but there's a strong likelihood that it did. And it also could be much, much worse.
link |
00:29:10.240
So in each of these cases, something is telling us we have a process that is unfolding that keeps
link |
00:29:16.800
creating risks where it is luck that is the difference between us and some scale of disaster
link |
00:29:21.040
that is unimaginable. And that takes wisdom: you can be highly intelligent and cause these disasters.
link |
00:29:28.720
To be wise is to stop causing them. And that would require a process of restraint,
link |
00:29:36.400
a process that I don't see a lot of evidence of yet. So I think we have to generate it.
link |
00:29:41.760
And somehow, at the moment, we don't have a political structure that would be capable of
link |
00:29:50.640
taking a protective algorithm and actually deploying it, because it would have important
link |
00:29:56.560
economic consequences. And so it would almost certainly be shot down. But we can obviously
link |
00:30:01.760
also say, we paid a huge price for all of the disasters that I've mentioned. And we have to
link |
00:30:10.240
factor that into the equation. Something can be very productive short term and very destructive
link |
00:30:14.640
long term. Also, the question is how many disasters we avoided because of the ingenuity
link |
00:30:22.880
of humans or just the integrity and character of humans? That's sort of an open question.
link |
00:30:30.160
We may be more intelligent than lucky. That's the hope. Because the optimistic message here that
link |
00:30:39.120
you're getting at is that maybe we can overcome luck
link |
00:30:46.960
with ingenuity. Meaning, I guess you're suggesting the process is we should be listing all the ways
link |
00:30:53.600
that human civilization can destroy itself, assigning likelihood to it, and thinking through,
link |
00:31:01.200
how can we avoid that? And being very honest with the data out there about the close calls
link |
00:31:08.400
and using those close calls to then create a sort of mechanism by which we minimize the
link |
00:31:15.520
probability of those close calls. And just being honest and transparent with the data that's out
link |
00:31:22.160
there. Well, I think we need to do a couple things for it to work. So I've been an advocate for the
link |
00:31:29.600
idea that sustainability is actually, it's difficult to operationalize, but it is an objective
link |
00:31:34.480
that we have to meet if we're to be around long term. And I realized that we also need to have
link |
00:31:40.480
reversibility of all of our processes because processes very frequently when they start do
link |
00:31:46.320
not appear dangerous. And then when they scale, they become very dangerous. So for example,
link |
00:31:52.240
if you imagine the first internal combustion engine in a vehicle driving down the street,
link |
00:31:59.520
and you imagine somebody running after them saying, hey, if you do enough of that,
link |
00:32:02.800
you're going to alter the atmosphere and it's going to change the temperature of the planet,
link |
00:32:05.840
it's preposterous, right? Why would you stop the person who's invented this marvelous new
link |
00:32:10.160
contraption? But of course, eventually you do get to the place where you're doing enough of this
link |
00:32:14.400
that you do start changing the temperature of the planet. So if we built the capacity, if we
link |
00:32:20.560
basically said, look, you can't involve yourself in any process that you couldn't reverse if you
link |
00:32:26.320
had to, then progress would be slowed, but our safety would go up dramatically. And I think
link |
00:32:35.840
in some sense, if we are to be around long term, we have to begin thinking that way. We're just
link |
00:32:40.560
involved in too many very dangerous processes. So let's talk about one of the things that,
link |
00:32:46.960
if not threatened human civilization, certainly hurt it at a deep level, which is COVID 19.
link |
00:32:56.640
What percent probability would you currently place
link |
00:33:00.240
on the hypothesis that COVID 19 leaked from the Wuhan Institute of Virology?
link |
00:33:06.160
So I maintain a flow chart of all the possible explanations. And it doesn't break down exactly
link |
00:33:13.360
that way. The likelihood that it emerged from a lab is very, very high. If it emerged from a lab,
link |
00:33:21.200
the likelihood that the lab was the Wuhan Institute is very, very high.
link |
00:33:27.840
There are multiple different kinds of evidence that point to the lab. And there is literally no
link |
00:33:33.120
evidence that points to nature. Either the evidence points nowhere or it points to the lab. And the
link |
00:33:38.320
lab could mean any lab, but geographically, obviously, the labs in Wuhan are the most likely.
link |
00:33:44.720
And the lab that was most directly involved with research on viruses that look like COVID,
link |
00:33:50.320
that look like SARS-CoV-2, is obviously the place that one would start. But I would say
link |
00:33:56.720
the likelihood that this virus came from a lab is well above 95%. We can talk about the question of
link |
00:34:05.520
could a virus have been brought into the lab and escaped from there without being modified?
link |
00:34:09.760
That's also possible. But it doesn't explain any of the anomalies in the genome of SARS-CoV-2.
link |
00:34:17.840
Could it have been delivered from another lab? Could Wuhan be a distraction
link |
00:34:22.960
in order that we would connect the dots in the wrong way? That's conceivable. I currently have
link |
00:34:28.240
that below 1% on my flow chart. But I think very dark thought that somebody would do that almost as
link |
00:34:34.560
a political attack on China. Well, it depends. That's only one possibility.
link |
00:34:42.240
Sometimes when Eric and I talk about these issues, we will generate a scenario just to prove that
link |
00:34:48.320
something could live in that space. It's a placeholder for whatever may actually have happened.
link |
00:34:53.680
And so it doesn't have to have been an attack on China. That's certainly one possibility.
link |
00:34:59.200
But I would point out, if you can predict the future in some unusual way better than others,
link |
00:35:08.400
you can print money. That's what markets that allow you to bet for or against
link |
00:35:14.240
virtually any sector allow you to do. So you can imagine simply an amoral
link |
00:35:21.760
person or entity generating a pandemic attempting to cover their tracks because it would allow them
link |
00:35:29.120
to bet against things like cruise ships, air travel, whatever it is and bet in favor of,
link |
00:35:36.800
I don't know, sanitizing gel and whatever else you would do. So am I saying that I think somebody
link |
00:35:45.840
did that? No, I really don't think it happened. We've seen zero evidence that this was intentionally
link |
00:35:49.920
released. However, were it to have been intentionally released by somebody who did not
link |
00:35:56.400
want it known where it had come from, releasing it in Wuhan would be one way to cover their tracks.
link |
00:36:01.760
So we have to leave the possibility formally open, but acknowledge there's no evidence.
link |
00:36:07.280
And the probability, therefore, is low. I tend to believe, and maybe this is the optimistic nature
link |
00:36:13.600
that I have, that people who are competent enough to do the kind of thing we just described
link |
00:36:21.600
are not going to do that because it requires a certain kind of, I don't want to use the word
link |
00:36:26.880
evil, but whatever word you want to use to describe the kind of disregard for human life
link |
00:36:33.600
required to do that. It's just not going to be coupled with competence. I feel like there's a
link |
00:36:41.120
tradeoff chart with competence on one axis and evil on the other. And the more evil you become,
link |
00:36:48.640
the crappier you are at doing the great engineering and scientific work required to deliver weapons of
link |
00:36:55.680
different kinds, whether it's bioweapons or nuclear weapons, all those kinds of things.
link |
00:36:59.680
That seems to be the lessons I take from history, but that doesn't necessarily mean that's what's
link |
00:37:05.040
going to be happening in the future. But to stick on the lab leak idea, because the flowchart is
link |
00:37:12.000
probably huge here, because there's a lot of fascinating possibilities. One question I want
link |
00:37:16.640
to ask is what would evidence for natural origins look like? So one piece of evidence for natural
link |
00:37:24.480
origins is that it's happened in the past that viruses have jumped. Oh, they do jump. So that's
link |
00:37:36.880
possible to have happened. So that's a sort of like a historical evidence like, okay, well,
link |
00:37:43.520
it's possible that it happened. It's not evidence of the kind you think it is. It's a justification
link |
00:37:49.680
for a presumption. So the presumption upon discovering a new virus circulating is certainly
link |
00:37:55.600
that it came from nature. The problem is the presumption evaporates in the face of evidence,
link |
00:38:01.920
or at least it logically should. And it didn't in this case. It was maintained by people who
link |
00:38:07.600
privately in their emails acknowledged that they had grave doubts about the natural origin of this
link |
00:38:13.520
virus. Is there some other piece of evidence that we could look for and see that would say
link |
00:38:21.680
this increases the probability that it's natural origins? Yeah. In fact, there is evidence. I always
link |
00:38:28.160
worry that somebody is going to make up some evidence in order to reverse the flow. Well,
link |
00:38:35.760
let's say there is a lot of incentive for that, actually, there's a huge amount of incentive. On
link |
00:38:39.600
the other hand, why didn't the powers that be, the powers that lied to us about weapons of mass
link |
00:38:45.360
destruction in Iraq? Why didn't they ever fake weapons of mass destruction in Iraq? Whatever
link |
00:38:50.000
force it is, I hope that force is here too. And so whatever evidence we find is real.
link |
00:38:54.800
It's the competence thing I'm talking about. But okay, go ahead. Sorry.
link |
00:38:58.720
Well, we can get back to that. But I would say, yeah, the giant piece of evidence that will shift
link |
00:39:04.560
the probabilities in the other direction is the discovery of either a human population in which
link |
00:39:10.720
the virus circulated prior to showing up in Wuhan that would explain where the virus learned all
link |
00:39:16.320
of the tricks that it knew instantly upon spreading from Wuhan. So that would do it, or
link |
00:39:22.400
an animal population in which an ancestor epidemic can be found in which the virus learned this
link |
00:39:29.280
before jumping to humans. But I'd point out in that second case, you would certainly expect to see
link |
00:39:35.040
a great deal of evolution in the early epidemic, which we don't see. So there almost has to be
link |
00:39:41.760
a human population somewhere else that had the virus circulating or an ancestor of the virus
link |
00:39:46.720
that we first saw in Wuhan circulating. And it has to have gotten very sophisticated in that prior
link |
00:39:51.760
epidemic before hitting Wuhan in order to explain the total lack of evolution and extremely effective
link |
00:39:57.920
virus that emerged at the end of 2019. So you don't believe in the magic of evolution to
link |
00:40:03.680
spring up with all the tricks already there? Like everybody who doesn't have the tricks,
link |
00:40:07.280
they die quickly. And then you just have this beautiful virus that comes in with the spike
link |
00:40:12.720
protein, and through mutation and selection, the ones that succeed and succeed big are the
link |
00:40:23.760
ones that are going to just spring into life with the tricks. Well, no. That's called a hopeful
link |
00:40:29.840
monster and hopeful monsters don't work. The job of becoming a new pandemic virus is too difficult.
link |
00:40:37.520
It involves two very difficult steps and they both have to work. One is the ability to infect a
link |
00:40:42.160
person and spread in their tissues sufficient to make an infection. And the other is to jump between
link |
00:40:48.080
individuals at a sufficient rate that it doesn't go extinct for one reason or another.
link |
00:40:52.480
Those are both very difficult jobs. They require, as you describe, selection. And the point is,
link |
00:40:58.720
selection would leave a mark. We would see evidence that it had taken place.
link |
00:41:02.160
In animals or humans, we would see both, right?
link |
00:41:05.200
And we would see this evolutionary trace of the virus gathering up the tricks.
link |
00:41:10.560
Yeah. You would see the virus, you would see the clumsy virus get better and better. And yes,
link |
00:41:14.480
I am a full believer in the power of that process. In fact, I believe it. What I know
link |
00:41:20.160
from studying the process is that it is much more powerful than most people imagine, that what we
link |
00:41:25.600
teach in the Evolution 101 textbook is too clumsy a process to do what we see it doing and that
link |
00:41:32.240
actually people should increase their expectation of the rapidity with which that process can produce
link |
00:41:39.600
just jaw dropping adaptations. That said, we just don't see evidence that it happened here,
link |
00:41:44.880
which doesn't mean it doesn't exist. But it means, in spite of immense pressure to find it somewhere,
link |
00:41:50.560
there's been no hint, which probably means it took place inside of a laboratory.
link |
00:41:55.520
So inside the laboratory, there's gain-of-function research on viruses. And I believe most of
link |
00:42:03.680
that kind of research is doing this exact thing that you're referring to, which is
link |
00:42:07.600
accelerated evolution. And just watching evolution do its thing on a bunch of viruses
link |
00:42:12.480
and seeing what kind of tricks get developed. The other method is engineering viruses. So
link |
00:42:22.640
manually adding on the tricks. Which do you think we should be thinking about here?
link |
00:42:30.880
So mind you, I learned what I know in the aftermath of this pandemic emerging. I
link |
00:42:36.480
started studying the question. And I would say, based on the content of the genome and other
link |
00:42:42.800
evidence in publications from the various labs that were involved in generating this technology,
link |
00:42:50.240
a couple of things seem likely. This SARS-CoV-2 does not appear to be entirely the result of
link |
00:42:58.400
either a splicing process or serial passaging. It appears to have both things in its past.
link |
00:43:07.120
Or it's at least highly likely that it does. So for example, the furin cleavage site
link |
00:43:11.680
looks very much like it was added in to the virus. And it was known that that would increase its
link |
00:43:17.200
infectivity in humans and increase its tropism. The virus appears to be excellent at spreading
link |
00:43:28.880
in humans and minks and ferrets. Now minks and ferrets are very closely related to each other
link |
00:43:34.880
and ferrets are very likely to have been used in a serial passage experiment. The reason being
link |
00:43:40.000
that they have an ACE2 receptor that looks very much like the human ACE2 receptor. And so
link |
00:43:44.640
were you going to passage the virus or its ancestor through an animal in order to increase
link |
00:43:50.640
its infectivity in humans, which would have been necessary, ferrets would have been very
link |
00:43:55.280
likely. It is also quite likely that humanized mice were utilized. And it is possible that human
link |
00:44:02.800
airway tissue was utilized. I think it is vital that we find out what the protocols were. If this
link |
00:44:09.200
came from the Wuhan Institute, we need to know it and we need to know what the protocols were
link |
00:44:13.840
exactly because they will actually give us some tools that would be useful in fighting SARS-CoV-2
link |
00:44:20.800
and hopefully driving it to extinction, which ought to be our priority. It is a priority that
link |
00:44:25.440
is not apparent from our behavior, but it really is one. It would be our objective if we
link |
00:44:31.440
understood where our interests lie. We would be much more focused on it. But those protocols would
link |
00:44:38.000
tell us a great deal. If it wasn't the Wuhan Institute, we need to know that. If it was nature,
link |
00:44:42.960
we need to know that. And if it was some other laboratory, we need to figure out
link |
00:44:47.360
what and where so that we can determine what we can determine about what was done.
link |
00:44:53.040
You're opening up my mind about why we should investigate, why we should know the truth
link |
00:44:59.120
of the origins of this virus. So for me personally, let me just tell the story of my own kind of journey.
link |
00:45:05.680
When I first started looking into the lab leak hypothesis, what became terrifying to me and
link |
00:45:15.120
important to understand and obvious is the sort of like Sam Harris way of thinking, which is
link |
00:45:23.600
it's obvious that a lab leak of a deadly virus will eventually happen. My mind was,
link |
00:45:30.320
it doesn't even matter if it happened in this case. It's obvious it's going to happen in the future.
link |
00:45:36.560
So why the hell are we not freaking out about this? And COVID 19 is not even that deadly
link |
00:45:41.760
relative to possible future viruses. This is where I disagree with Sam:
link |
00:45:46.880
he thinks this way about AGI as well, about artificial general intelligence. It's a different
link |
00:45:52.160
discussion, I think, but with viruses, it seems like something that could happen on the scale of
link |
00:45:56.800
years, maybe a few decades. AGI is a little bit farther out for me, but it seemed the terrifying
link |
00:46:03.280
thing seemed obvious that this will happen very soon for a much deadlier virus as we get better
link |
00:46:10.080
and better at both engineering viruses and doing this kind of evolutionary driven research,
link |
00:46:15.920
gain-of-function research. Okay, but then you started speaking out about this as well, but
link |
00:46:22.640
also started to say, no, no, no, we should hurry up and figure out the origins now because they
link |
00:46:27.440
will help us figure out how to actually respond to this particular virus, how to treat this
link |
00:46:35.680
particular virus, whether it's in terms of vaccines, in terms of antiviral drugs, in terms of
link |
00:46:40.960
just all the responses we should have. Okay, I still am much more freaking out
link |
00:46:50.960
about the future. Maybe you can break that apart a little bit. Which are you most focused on now?
link |
00:47:02.560
Which are you most freaking out about now in terms of the importance of figuring out the origins of
link |
00:47:08.800
this virus? I am most freaking out about both of them because they're both really important and
link |
00:47:15.040
we can put bounds on this. Let me say first that this is a perfect test case for the theory of
link |
00:47:21.040
close calls because as much as COVID is a disaster, it is also a close call from which we can learn
link |
00:47:26.880
much. You are absolutely right. If we keep playing this game in the lab, especially if we do it
link |
00:47:34.880
under pressure and when we are told that a virus is going to leap from nature any day and that the
link |
00:47:39.920
more we know, the better we'll be able to fight it, we're going to create the disaster sooner.
link |
00:47:45.200
Yes, that should be an absolute focus. The fact that there were people saying that this was dangerous
link |
00:47:51.280
back in 2015 ought to tell us something. The fact that the system bypassed a ban and offshored the
link |
00:47:58.480
work to China ought to tell us this is not a Chinese failure. This is a failure of something
link |
00:48:03.280
larger and harder to see. But I also think that there's a clock ticking with respect to SARS
link |
00:48:13.280
CoV-2 and COVID, the disease that it creates. That has to do with whether or not we are stuck
link |
00:48:18.880
with it permanently. If you think about the cost to humanity of being stuck with influenza,
link |
00:48:24.800
it's an immense cost year after year. We just stopped thinking about it because it's there.
link |
00:48:29.600
Some years you get the flu, most years you don't. Maybe you get the vaccine to prevent it. Maybe
link |
00:48:34.320
the vaccine isn't particularly well targeted. But imagine just simply doubling that cost.
link |
00:48:40.960
Imagine we get stuck with SARS-CoV-2 and its descendants going forward and that
link |
00:48:47.120
it just settles in and becomes a fact of modern human life. That would be a disaster. The number
link |
00:48:53.040
of people we will ultimately lose is incalculable. The amount of suffering that will be caused is
link |
00:48:57.520
incalculable, the loss of well being and wealth incalculable. That ought to be a very high priority
link |
00:49:04.720
driving this virus extinct before it becomes permanent. The ability to drive it extinct
link |
00:49:10.800
goes down the longer we delay effective responses. To the extent that we let it have this very large
link |
00:49:18.320
canvas, large numbers of people who have the disease, in which mutation and selection can result
link |
00:49:24.160
in adaptations that we will not be able to counter, the greater its ability to figure out features
link |
00:49:29.440
of our immune system and use them to its advantage. I'm feeling the pressure of driving it extinct.
link |
00:49:37.200
I believe we could have driven it extinct six months ago and we didn't do it because of very
link |
00:49:42.240
mundane concerns among a small number of people. I'm not alleging that they were brazen about it,
link |
00:49:49.920
or that they were callous about deaths that would be caused. I have the sense that they
link |
00:49:56.080
were working from a kind of autopilot in which let's say you're in some kind of a
link |
00:50:01.920
pharmaceutical corporation. You have a portfolio of therapies that in the context of a pandemic
link |
00:50:10.000
might be very lucrative. Those therapies have competitors. You of course want to position
link |
00:50:14.960
your product so that it succeeds and the competitors don't. And lo and behold, at some point, through
link |
00:50:21.760
means that I think those of us on the outside can't really intuit, you end up saying things
link |
00:50:28.480
about competing therapies that work better and much more safely than the ones you're selling
link |
00:50:33.840
that aren't true and do cause people to die in large numbers, but it's some kind of autopilot,
link |
00:50:41.280
at least part of it is. There's a complicated coupling of the autopilot of institutions,
link |
00:50:51.200
companies, governments, and then there's also the geopolitical game theory thing going on
link |
00:50:58.320
where you want to keep secrets. It's the Chernobyl thing where if you messed up,
link |
00:51:03.520
there's a big incentive, I think, to hide the fact that you messed up. So how do we fix this?
link |
00:51:12.560
And what's more important to fix? The autopilot, which is the response that we often criticize
link |
00:51:19.680
about our institutions, especially the leaders in those institutions, Anthony Fauci and so on,
link |
00:51:25.040
and some of the members of the scientific community. And the second part is the game with China
link |
00:51:35.040
of hiding the information, in the fight between nations.
link |
00:51:40.160
Well, in our live streams on Dark Horse, Heather and I have been talking from the beginning about
link |
00:51:44.960
the fact that although, yes, what happened began in China, it very much looks like a failure of
link |
00:51:51.600
the international scientific community. That's frightening. But it's also hopeful in the sense
link |
00:51:58.400
that actually, if we did the right thing now, we're not navigating a puzzle about Chinese
link |
00:52:04.480
responsibility. We're navigating a question of collective responsibility for something that has
link |
00:52:11.040
been terribly costly to all of us. So that's not a very happy process. But as you point out,
link |
00:52:19.120
what's at stake is in large measure, at the very least, the strong possibility this will happen
link |
00:52:24.320
again, and that at some point, it will be far worse. So just as a person that does not learn
link |
00:52:31.200
the lessons of their own errors, doesn't get smarter and they remain in danger,
link |
00:52:37.200
we collectively, humanity has to say, well, there sure is a lot of evidence that suggests
link |
00:52:43.600
that this is a self inflicted wound. When you have done something that has caused a massive
link |
00:52:49.120
self inflicted wound, it makes sense to dwell on it exactly to the
link |
00:52:56.160
point that you have learned the lesson that makes it very, very unlikely that something similar
link |
00:53:00.720
will happen again. I think this is a good place to kind of ask you to do almost like a thought
link |
00:53:05.360
experiment, or to steel man the argument against the lab leak hypothesis. So if you were to argue,
link |
00:53:20.800
you know, you said a 95% chance that the virus leaked from a lab, there's a bunch of ways I think
link |
00:53:27.920
you can argue that even talking about it is bad for the world. So if I just put something on the
link |
00:53:39.200
table, it's that, for one, it would create racism against Chinese people: saying that it
link |
00:53:50.240
leaked from a lab invites an immediate kind of blame, and it can spiral down into this idea
link |
00:53:56.160
that somehow the people are responsible for the virus and this kind of thing. Is it possible for
link |
00:54:02.640
you to come up with other steel man arguments against talking or against the possibility
link |
00:54:10.240
of the lab leak hypothesis? Well, so I think steel manning is a tool that is extremely
link |
00:54:18.320
valuable, but it's also possible to abuse it. I think that you can only steel man a good faith
link |
00:54:25.040
argument. And the problem is, we now know that we have not been engaged with opponents who were
link |
00:54:31.280
wielding good faith arguments because privately their emails reflect their own doubts. And what
link |
00:54:36.400
they were doing publicly was actually a punishment, a public punishment for those of us who spoke up
link |
00:54:43.040
with I think the purpose of either backing us down or more likely warning others not to engage
link |
00:54:49.680
in the same kind of behavior. And obviously for people like you and me who regard science as our
link |
00:54:57.200
likely best hope for navigating difficult waters, shutting down people who are using those tools
link |
00:55:04.160
honorably is itself dishonorable. So I don't feel that there's
link |
00:55:12.240
anything to steel man. And I also think that immediately at the point that the world suddenly
link |
00:55:19.040
with no new evidence on the table switched gears with respect to the lab leak, at the point that
link |
00:55:24.480
Nicholas Wade had published his article and suddenly the world was going to admit that this
link |
00:55:28.640
was at least a possibility if not a likelihood, we got to see something of the rationalization
link |
00:55:35.920
process that had taken place inside the institutional world. And it very definitely involved the claim
link |
00:55:41.600
that what was being avoided was the targeting of Chinese scientists. And my point would be,
link |
00:55:49.920
I don't want to see the targeting of anyone. I don't want to see racism of any kind. On the other
link |
00:55:56.240
hand, once you create a license to lie in order to protect individuals when the world has a stake
link |
00:56:05.840
in knowing what happened, then it is inevitable that that process, that license to lie will be
link |
00:56:11.920
used by the thing that captures institutions for its own purposes. So my sense is it may be very
link |
00:56:18.480
unfortunate if the story of what happened here can be used against Chinese people. That would be
link |
00:56:27.200
very unfortunate. And as I think I mentioned, Heather and I have taken great pains to point out
link |
00:56:33.760
that this doesn't look like a Chinese failure. It looks like a failure of the international
link |
00:56:37.280
scientific community. So I think it is important to broadcast that message along with the analysis
link |
00:56:43.040
of the evidence. But no matter what happened, we have a right to know. And I frankly do not
link |
00:56:49.360
take the institutional layer at its word that its motivations are honorable and that it was
link |
00:56:54.160
protecting goodhearted scientists at the expense of the world. That explanation does not add up.
link |
00:57:00.240
Well, this is a very interesting question about whether it's ever okay to lie at the institutional
link |
00:57:07.200
layer to protect the populace. I think both you and I are probably on the same,
link |
00:57:18.240
have the same sense that it's a slippery slope, even if it's an effective mechanism in the short
link |
00:57:24.800
term, in the long term, it's going to be destructive. This happened with masks. This
link |
00:57:31.120
happened with other things. If you look at just history pandemics, there's an idea that panic
link |
00:57:38.960
is destructive amongst the populace. So you want to construct a narrative, whether it's a lie or
link |
00:57:45.280
not, to minimize panic. But you're suggesting that almost in all cases, and I think that was
link |
00:57:53.760
the lesson from the pandemic in the early 20th century, that lying creates distrust and distrust
link |
00:58:05.120
in the institutions is ultimately destructive. That's your sense that lying is not okay?
link |
00:58:10.800
Well, okay. There are obviously places where complete transparency is not a good idea,
link |
00:58:16.880
right, to the extent that you broadcast a technology that allows one individual to hold
link |
00:58:22.720
the world hostage. Obviously, you've got something to be navigated. But in general,
link |
00:58:29.520
I don't believe that the scientific system should be lying to us. In the case of this
link |
00:58:37.840
particular lie, the idea that the well being of Chinese scientists outweighs the well being of
link |
00:58:47.040
the world is preposterous. As you point out, one thing that rests on this question is whether we
link |
00:58:53.920
continue to do this kind of research going forward. And the scientists in question, all of them,
link |
00:58:58.800
American, Chinese, all of them were pushing the idea that the risk of a zoonotic spillover event
link |
00:59:06.080
causing a major and highly destructive pandemic was so great that we had to risk this. Now,
link |
00:59:12.240
if they themselves have caused it, and if they are wrong, as I believe they are, about the likelihood
link |
00:59:17.760
of a major world pandemic spilling out of nature in the way that they wrote into their grant
link |
00:59:23.280
applications, then the danger is, you know, the call is coming from inside the house. And
link |
00:59:29.200
we have to look at that. And yes, whatever we have to do to protect scientists from
link |
00:59:36.080
retribution, we should do. But we cannot protect them by lying to the world. And even worse,
link |
00:59:45.120
by demonizing people like me, like Josh Rogin, like Yuri Deigin, the entire DRASTIC group on
link |
00:59:58.480
Twitter, by demonizing us for simply following the evidence is to set a terrible precedent.
link |
01:00:04.880
Right. You're demonizing people for using the scientific method to evaluate evidence that is
link |
01:00:10.000
available to us in the world. What a terrible crime it is to teach that lesson, right?
link |
01:00:15.840
Thou shalt not use scientific tools. No, I'm sorry. Whatever your license to lie is,
link |
01:00:20.640
it doesn't extend to that. Yeah, I've seen the attacks on you, the pressure on you has a very
link |
01:00:27.200
important effect on thousands of world class biologists actually at MIT, colleagues of mine,
link |
01:00:38.400
people I know. There's a subtle pressure to not, one, speak publicly, and two, to actually
link |
01:00:49.200
think. Like, do you even think about these ideas? It sounds kind of ridiculous, but just,
link |
01:00:55.440
in the privacy of your own home, to read things, to think. Many people, many world
link |
01:01:03.520
class biologists that I know will just avoid looking at the data. There's not even that many
link |
01:01:11.760
people that are publicly opposing gain of function research. They're also like, it's not
link |
01:01:17.680
worth it. It's not worth the battle. And there's many people that kind of argue that those battles
link |
01:01:22.960
should be fought in private with colleagues in the privacy of the scientific community,
link |
01:01:31.200
that the public is somehow not maybe intelligent enough to be able to deal with the complexities
link |
01:01:38.160
of this kind of discussion. I don't know, but the final result is combined with the bullying of you
link |
01:01:44.080
and all the different pressures in the academic institutions is that it's just people are self
link |
01:01:50.560
censoring and silencing themselves and silencing the most important thing, which is the power of
link |
01:01:56.000
their brains. Like, these people are brilliant. And the fact that they're not utilizing their brain
link |
01:02:04.400
to come up with solutions outside of the conformist line of thinking is tragic.
link |
01:02:11.360
Well, it is. I also think that we have to look at it and understand it for what it is. For one
link |
01:02:18.080
thing, it's kind of a cryptic totalitarianism. Somehow people's sense of what they're allowed
link |
01:02:23.360
to think about, talk about, discuss is causing them to self censor. And I can tell you it's
link |
01:02:28.480
causing many of them to rationalize, which is even worse. They're blinding themselves to what
link |
01:02:32.480
they can see. But it is also the case, I believe, that what you're describing about what people
link |
01:02:39.680
said and a great many people understood that the lab leak hypothesis could not be taken off the table.
link |
01:02:46.880
But they didn't say so publicly. And I think that their discussions with each other about why they
link |
01:02:53.360
did not say what they understood, that's what capture sounds like on the inside. I don't
link |
01:02:59.360
know exactly what force captured the institutions. I don't think anybody knows for sure out here
link |
01:03:06.560
in public. I don't even know that it wasn't just simply a process, but you have these institutions.
link |
01:03:12.960
They are behaving towards a kind of somatic obligation. They have lost sight of what they
link |
01:03:20.800
were built to accomplish. And on the inside, the way they avoid going back to their original mission
link |
01:03:28.320
is to say things to themselves like, the public can't have this discussion, it can't be trusted
link |
01:03:33.360
with it. Yes, we need to be able to talk about this, but it has to be private, whatever it is
link |
01:03:37.200
they say to themselves, that is what capture sounds like on the inside. It's an institutional
link |
01:03:42.320
rationalization mechanism. And it's very, very deadly. And at the point you go from lab leak
link |
01:03:48.880
to repurposed drugs, you can see that it's very deadly in a very direct way.
link |
01:03:54.560
Yeah, I see this in my field with things like autonomous weapons systems. People in AI do
link |
01:04:02.720
not talk about the use of AI in weapons systems. They kind of avoid the idea that AI is used in
link |
01:04:08.720
the military. It's kind of funny. There's this kind of discomfort and they're like, they all
link |
01:04:13.920
hurry, like something scary happened and a bunch of sheep kind of run away. That's what it looks
link |
01:04:20.000
like. And I don't even know what to do about it. And then I feel this natural pull every time I
link |
01:04:26.880
bring up autonomous weapons systems to go along with the sheep. There's a natural kind of pull
link |
01:04:32.240
towards that direction because it's like, what can I do as one person? Now there's currently
link |
01:04:38.240
nothing destructive happening with autonomous weapons systems. So we're like in the early days
link |
01:04:43.520
of this race that in 10, 20 years might become a real problem. Now, the discussion we're
link |
01:04:49.840
having now, we're facing the result of that in the space of viruses, after many years of
link |
01:04:56.880
avoiding the conversations here. I don't know what to do in the early days. But I think we
link |
01:05:04.160
have to, I guess, create institutions where people can stand out. People can stand out and like
link |
01:05:10.480
basically be individual thinkers and break out into all kinds of spaces of ideas that allow us to
link |
01:05:17.280
think freely, freedom of thought. And maybe that requires a decentralization of institutions.
link |
01:05:22.720
Well, years ago, I came up with a concept called cultivated insecurity. And the idea is,
link |
01:05:29.760
let's just take the example of the average Joe, right? The average Joe has a job somewhere and
link |
01:05:38.080
their mortgage, their medical insurance, their retirement, their connection with the economy
link |
01:05:46.240
is to one degree or another dependent on their relationship with the employer.
link |
01:05:51.600
That means that there is a strong incentive, especially in any industry where it's not easy to
link |
01:06:00.320
move from one employer to the next, there's a strong incentive to stay in your employer's good
link |
01:06:05.840
graces, right? So it creates a very top down dynamic, not only in terms of who gets to tell
link |
01:06:12.960
other people what to do, but it really comes down to who gets to tell other people how to think.
link |
01:06:17.680
So that's extremely dangerous. The way out of it is to cultivate security to the extent that
link |
01:06:26.240
somebody is in a position to go against the grain and have it not be a catastrophe for their family
link |
01:06:33.360
and their ability to earn, you will see that behavior a lot more. So I would argue that some
link |
01:06:37.600
of what you're talking about is just a simple predictable consequence of the concentration
link |
01:06:43.040
of the sources of well being and that this is a solvable problem.
link |
01:06:51.440
You got a chance to talk with Joe Rogan yesterday?
link |
01:06:55.120
Yes, I did.
link |
01:06:56.480
And I just saw the episode was released and ivermectin is trending on Twitter.
link |
01:07:03.920
Joe told me it was an incredible conversation. I look forward to listening to it today. Many
link |
01:07:07.680
people have probably by the time this is released have already listened to it. I think it would be
link |
01:07:13.520
interesting to discuss a postmortem. How do you feel how the conversation went? And maybe broadly,
link |
01:07:23.600
how do you see the story of ivermectin as it's unfolding, from its origins before COVID-19
link |
01:07:31.840
through 2020 to today? I very much enjoyed talking to Joe and I'm
link |
01:07:39.280
indescribably grateful that he would take the risk of such a discussion, that he would,
link |
01:07:45.440
as he described it, do an emergency podcast on the subject, which I think that was not an exaggeration.
link |
01:07:51.920
This needed to happen for various reasons. He took us down the road of talking about
link |
01:07:59.680
the censorship campaign against ivermectin, which I find utterly shocking, and talking about
link |
01:08:06.080
the drug itself. And I should say we had Pierre Kory available. He came on the podcast as well.
link |
01:08:12.800
He is, of course, the face of the FLCCC, the Front Line COVID-19 Critical Care Alliance. These
link |
01:08:20.560
are doctors who have innovated ways of treating COVID patients and they happened on ivermectin
link |
01:08:26.160
and have been using it. And I hesitate to use the word advocating for it because that's not
link |
01:08:33.520
really the role of doctors or scientists, but they are advocating for it in the sense that there is
link |
01:08:38.640
this pressure not to talk about its effectiveness for reasons that we can go into. So maybe step
link |
01:08:45.440
back and say, what is ivermectin, and how many studies have been done to show its effectiveness?
link |
01:08:53.120
So ivermectin is an interesting drug. It was discovered in the 70s by a Japanese
link |
01:09:00.320
scientist named Satoshi Omura. And he found it in soil near a Japanese golf course. So I would
link |
01:09:08.720
just point out in passing that if we were to stop self silencing over the possibility that
link |
01:09:15.520
Asians will be demonized over the possible lab leak in Wuhan and to recognize that actually the
link |
01:09:21.760
natural course of the story has a likely lab leak in China. It has an unlikely hero in Japan.
link |
01:09:32.960
The story is naturally not a simple one. But in any case, Omura discovered this molecule. He
link |
01:09:42.480
sent it to a friend who was at Merck, a scientist named Campbell. They won a Nobel Prize for the
link |
01:09:50.080
discovery of the ivermectin molecule in 2015. Its initial use was in treating parasitic infections.
link |
01:09:58.640
It's very effective in treating the worm that causes river blindness, the pathogen that causes
link |
01:10:06.400
elephantiasis, scabies. It's a very effective antiparasite drug. It's extremely safe. It's on the
link |
01:10:12.480
WHO's list of essential medicines. It's safe for children. It has been administered something
link |
01:10:18.240
like four billion times in the last four decades. It has been given away in the millions of doses
link |
01:10:25.600
by Merck in Africa. People have been on it for long periods of time. And in fact,
link |
01:10:31.280
one of the reasons that Africa may have had less severe impacts from COVID-19 is that ivermectin
link |
01:10:37.840
is widely used there to prevent parasites. And the drug appears to have a long lasting impact.
link |
01:10:43.200
So it's an interesting molecule. It was discovered some time ago, apparently,
link |
01:10:49.360
that it has antiviral properties. And so it was tested early in the COVID 19 pandemic to see if
link |
01:10:55.120
it might work to treat humans with COVID. It turned out to have very promising evidence that it did
link |
01:11:02.640
treat humans. It was tested in tissues. It was tested at a very high dosage, which confuses people.
link |
01:11:08.800
They think that those of us who believe that ivermectin might be useful in confronting this
link |
01:11:14.320
disease are advocating those high doses, which is not the case. But in any case, there have been
link |
01:11:19.840
quite a number of studies. A wonderful meta-analysis was finally released. We had seen it in
link |
01:11:25.760
preprint version, but it was finally peer reviewed and published this last week. It reveals that the
link |
01:11:33.040
drug, as clinicians have been telling us, those who've been using it, it's highly effective at
link |
01:11:37.280
treating people with the disease, especially if you get to them early. And it showed an 86%
link |
01:11:42.640
effectiveness as a prophylactic to prevent people from contracting COVID. And that number, 86%,
link |
01:11:49.680
is high enough to drive SARS-CoV-2 to extinction if we wished to deploy it.
link |
01:11:56.160
First of all, the meta-analysis, is this the ivermectin for COVID-19 real-time meta-analysis
link |
01:12:03.760
of 60 studies? There's a bunch of meta-analyses there, because I was really impressed by the
link |
01:12:08.640
real-time meta-analysis that keeps getting updated. I don't know if it's the same one.
link |
01:12:13.200
The one at ivmmeta.com? Well, the one I saw is c19ivermectin.com.
link |
01:12:21.760
No, this is not that meta-analysis. So that is, as you say, a living meta-analysis where you can
link |
01:12:26.400
watch as evidence. Which is super cool, by the way. It's really cool. And they've got some
link |
01:12:30.320
really nice graphics that allow you to understand, well, what is the evidence? It's concentrated
link |
01:12:35.760
around this level of effectiveness, et cetera. So anyway, it's a great site, well worth paying
link |
01:12:39.920
attention to. No, this is a meta-analysis. I don't know any of the authors but one.
link |
01:12:46.320
The second author is Tess Lawrie of the BIRD Group, BIRD being a group of analysts and doctors in
link |
01:12:54.880
Britain that is playing a role similar to the FLCCC here in the US. So anyway, this is a
link |
01:13:01.440
meta-analysis that Tess Lawrie and others did of all of the available evidence. And it's quite compelling.
link |
01:13:10.560
People, if people can look for it on my Twitter, I will put it up and people can find it there.
link |
01:13:15.040
So what about dose here? In terms of safety, what do we understand about the kind of dose
link |
01:13:23.200
required to have that level of effectiveness? And what do we understand about the safety of that
link |
01:13:29.200
kind of dose? So let me just say I'm not a medical doctor. I'm a biologist. I'm on ivermectin in
link |
01:13:36.880
lieu of vaccination. In terms of dosage, there is one reason for concern, which is that the most
link |
01:13:43.280
effective dose for prophylaxis involves something like weekly administration. And that is because
link |
01:13:51.040
that is not a historical pattern of use for the drug. It is possible that there is some
link |
01:13:56.800
long term implication of being on it weekly for a long period of time. There's not a strong
link |
01:14:03.040
indication of that in the safety signal that we have from people using the drug over many years
link |
01:14:08.560
and using it in high doses. In fact, Dr. Kory told me yesterday that there are cases in which
link |
01:14:14.880
people have made calculation errors and taken a massive overdose of the drug and had no ill
link |
01:14:20.480
effect. So anyway, there's lots of reasons to think the drug is comparatively safe, but no drug is
link |
01:14:25.360
perfectly safe. And I do worry about the long term implications of taking it. I also think it's very
link |
01:14:31.840
likely because the drug is administered in a dose of something like, let's say, 15 milligrams for somebody
link |
01:14:42.960
my size once a week after you've gone through the initial double dose that you take 48 hours
link |
01:14:49.680
apart. It is apparent that if the amount of drug in your system is sufficient to be protective at
link |
01:14:57.120
the end of the week, then it was probably far too high at the beginning of the week. So there's a
link |
01:15:01.600
question about whether or not you could flatten out the intake so that the amount of ivermectin
link |
01:15:08.240
goes down, but the protection remains. I have little doubt that that would be discovered if we looked
link |
01:15:13.440
for it. But that said, it does seem to be quite safe, highly effective at preventing COVID. The
link |
01:15:21.200
86% number is plenty high enough for us to drive SARS-CoV-2 to extinction in light of its R0
link |
01:15:28.800
of slightly more than two. And so why we are not using it is a bit of a mystery.
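As a sanity check of the arithmetic behind that claim (not of the claim itself), the standard homogeneous-mixing threshold model can be sketched in a few lines. The function names and simplifying assumptions here are mine, purely for illustration:

```python
# Back-of-the-envelope check of the "86% is enough" arithmetic using the
# standard homogeneous-mixing threshold model (a large simplification).
# Spread dies out when the effective reproduction number drops below 1.

def r_eff(r0: float, coverage: float, efficacy: float) -> float:
    """Effective reproduction number when a prophylactic of the given
    efficacy is taken by the given fraction of the population."""
    return r0 * (1.0 - coverage * efficacy)

def critical_coverage(r0: float, efficacy: float) -> float:
    """Fraction of the population that must take the prophylactic for
    r_eff to fall below 1: (1 - 1/R0) / efficacy."""
    return (1.0 - 1.0 / r0) / efficacy

r0 = 2.0         # "slightly more than two," per the conversation
efficacy = 0.86  # the 86% prophylaxis figure cited above

print(critical_coverage(r0, efficacy))  # ~0.58: about 58% uptake needed
print(r_eff(r0, 0.60, efficacy))        # 0.968, below 1, so spread declines
```

Under these toy assumptions, roughly 58% uptake of an 86% effective prophylactic would push the effective reproduction number below one; real-world heterogeneity would change the numbers.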
link |
01:15:36.560
So even if everything you said now turns out to be not correct, it is nevertheless obvious that it's
link |
01:15:45.360
sufficiently promising. It always has been in order to merit rigorous scientific exploration,
link |
01:15:52.400
investigation, doing a lot of studies, and certainly not censoring the science or the
link |
01:15:58.640
discussion of it. So before we talk about the various vaccines for COVID-19, I'd like to talk
link |
01:16:06.560
to you about censorship. Given everything you're saying, why did YouTube and other places censor
link |
01:16:16.240
discussion of ivermectin? Well, there's a question about why they say they did it,
link |
01:16:23.920
and there's a question about why they actually did it. Now, it is worth mentioning that YouTube
link |
01:16:30.400
is part of a consortium. It is partnered with Twitter, Facebook, Reuters, AP, Financial Times,
link |
01:16:40.240
Washington Post, some other notable organizations, and that this group has appointed itself
link |
01:16:48.560
the arbiter of truth. In effect, they have decided to control discussion ostensibly to
link |
01:16:57.440
prevent the distribution of misinformation. Now, how have they chosen to do that? In this case,
link |
01:17:02.960
they have chosen to simply utilize the recommendations of the WHO and the CDC and apply
link |
01:17:10.400
them as if they are synonymous with scientific truth. Problem: even at their best, the WHO and
link |
01:17:17.920
CDC are not scientific entities. They are entities that are about public health. Public health has
link |
01:17:25.440
this, whether it's right or not, and I disagree with it, but it has this
link |
01:17:33.040
self-assigned right to lie that comes from the fact that there is game theory that works against,
link |
01:17:41.040
for example, a successful vaccination campaign, that if everybody else takes a vaccine and,
link |
01:17:48.160
therefore, the herd becomes immune through vaccination and you decide not to take a vaccine,
link |
01:17:53.760
then you benefit from the immunity of the herd without having taken the risk. So,
link |
01:17:59.040
people who do best are the people who opt out. That's a hazard, and the WHO and CDC as public
link |
01:18:04.800
health entities effectively oversimplify stories in order that that game theory does not cause
link |
01:18:12.640
a predictable tragedy of the commons. With that said, once that right to lie exists,
link |
01:18:19.120
then it turns out to serve the interests of, for example, pharmaceutical
link |
01:18:24.160
companies which have emergency use authorizations that require that there not be a safe and
link |
01:18:28.800
effective treatment and have immunity from liability for harms caused by their product.
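The free-rider structure Bret describes can be sketched as a simple payoff comparison. All numbers here are invented for illustration; this is not a model of any real vaccine or disease:

```python
# Toy illustration of the vaccination free-rider incentive described
# above: once the herd is nearly immune, opting out minimizes personal
# expected cost. All values are invented for illustration only.

def expected_cost(vaccinate: bool, herd_coverage: float) -> float:
    """Expected personal cost, given the fraction of others vaccinated.

    A vaccinated person pays a small fixed cost (side-effect risk,
    inconvenience); an unvaccinated person pays the infection cost
    scaled by a crude herd-protection factor.
    """
    vaccine_cost = 0.01
    infection_cost = 1.0
    infection_risk = max(0.0, 1.0 - herd_coverage)  # crude herd effect
    return vaccine_cost if vaccinate else infection_cost * infection_risk

# With low coverage, vaccinating is the cheaper individual choice:
print(expected_cost(True, 0.50), expected_cost(False, 0.50))
# Once coverage is nearly complete, opting out becomes cheaper — the
# tragedy-of-the-commons incentive public health messaging works against:
print(expected_cost(True, 0.995), expected_cost(False, 0.995))
```

The crossover point depends entirely on the made-up cost ratio, but the qualitative shape, individual incentive flipping as herd coverage rises, is the game-theoretic point being made.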
link |
01:18:34.640
So, that's a recipe for disaster, right? You don't need to be a sophisticated
link |
01:18:40.400
thinker about complex systems to see the hazard of immunizing a company from the harm of its own
link |
01:18:47.600
product at the same time that that product can only exist in the market if some other product
link |
01:18:53.360
that works better somehow fails to be noticed. So, somehow YouTube is doing the bidding of Merck
link |
01:19:00.560
and others. Whether it knows that that's what it's doing, I have no idea. I think this may
link |
01:19:05.760
be another case of an autopilot that thinks it's doing the right thing because it's parroting the
link |
01:19:12.000
corrupt wisdom of the WHO and the CDC, but the WHO and the CDC have been wrong again and again
link |
01:19:16.880
in this pandemic. And the irony here is that with YouTube coming after me, well, my channel has been
link |
01:19:23.920
right where the WHO and CDC have been wrong consistently over the whole pandemic. So, how is
link |
01:19:30.560
it that YouTube is censoring us because the WHO and CDC disagree with us when in fact in past
link |
01:19:36.240
disagreements we've been right and they've been wrong? There's so much to talk about here. So,
link |
01:19:44.560
I've heard this many times, actually, from people inside YouTube and from colleagues that
link |
01:19:49.840
I've talked with: they kind of, in a very casual way, say their job is simply to slow or prevent
link |
01:20:00.080
the spread of misinformation. And they say that's an easy thing to do. Like, to know what is
link |
01:20:08.080
true or not is an easy thing to do. And so, from the YouTube perspective, I think they basically
link |
01:20:17.120
outsource the task of knowing what is true or not to public institutions that, on a basic
link |
01:20:27.200
Google search, claim to be the arbiters of truth. So, if you were YouTube, which is exceptionally
link |
01:20:37.440
profitable and exceptionally powerful in terms of controlling what people get to see or not,
link |
01:20:44.720
what would you do? Would you take a stand, a public stand against the WHO or the CDC?
link |
01:20:53.920
Or would you instead say, you know what, let's open the dam and let any video on anything fly?
link |
01:21:02.880
What do you do here? If, say, Bret Weinstein was put in charge of YouTube for
link |
01:21:09.360
a month in this most critical of times. YouTube actually has incredible amounts of power to educate
link |
01:21:16.640
the populace, to give power of knowledge to the populace such that they can reform institutions,
link |
01:21:24.160
what would you do? How would you run YouTube? Well, unfortunately, or fortunately, this is
link |
01:21:29.760
actually quite simple. The founders, the American founders settled on a counterintuitive formulation
link |
01:21:37.040
that people should be free to say anything. They should be free from the government blocking them
link |
01:21:43.840
from doing so. They did not imagine that in formulating that right that most of what was said
link |
01:21:49.920
would be of high quality, nor did they imagine it would be free of harmful things. What they
link |
01:21:54.800
correctly reasoned was that the benefit of leaving everything so it can be said exceeds the cost,
link |
01:22:02.400
which everyone understands to be substantial. What I would say is they could not have anticipated
link |
01:22:09.520
the impact, the centrality of platforms like YouTube, Facebook, Twitter, etc. If they had,
link |
01:22:18.160
they would not have limited the First Amendment as they did. They clearly understood that the
link |
01:22:23.600
power of the federal government was so great that it needed to be limited by granting explicitly the
link |
01:22:32.080
right of citizens to say anything. In fact, YouTube, Twitter, Facebook may be more powerful
link |
01:22:39.840
in this moment than the federal government of their worst nightmares could have been.
link |
01:22:44.000
The power that these entities have to control thought and to shift civilization is so great
link |
01:22:50.080
that we need to have those same protections. It doesn't mean that harmful things won't be said,
link |
01:22:54.320
but it means that nothing has changed about the cost benefit analysis of building the right to
link |
01:23:00.160
censor. If I were running YouTube, the limit of what should be allowed is the limit of the law.
link |
01:23:08.240
If what you are doing is legal, then it should not be YouTube's place to limit
link |
01:23:13.120
what gets said or who gets to hear it. That is between speakers and audience. Will harm come
link |
01:23:18.640
from that? Of course it will. Will net harm come from it? No, I don't believe it will. I believe
link |
01:23:24.640
that allowing everything to be said does allow a process in which better ideas do come to the
link |
01:23:29.680
fore and win out. You believe that in the end, when there's complete freedom to share ideas,
link |
01:23:37.520
that truth will win out? What I've noticed, just as a brief side comment,
link |
01:23:44.000
that certain things become viral, regardless of their truth. I've noticed that things that are
link |
01:23:52.800
dramatic or funny, things that become memes don't have to be grounded in truth. What worries me
link |
01:24:01.920
there is that we basically maximize for drama versus maximize for truth in a system where
link |
01:24:10.480
everything is free. That's worrying in the time of emergency. Well, yes, it's all worrying in
link |
01:24:17.840
time of emergency, to be sure. But I want you to notice that what you've happened on is actually an
link |
01:24:22.800
analog for a much deeper and older problem. We are not a blank slate, but we are the blankest
link |
01:24:32.160
slate that nature has ever devised, and there's a reason for that. It's where our flexibility comes
link |
01:24:37.280
from. We have, effectively, we are robots in which a large fraction of the cognitive capacity has
link |
01:24:47.200
been, or of the behavioral capacity has been, offloaded to the software layer, which gets
link |
01:24:52.960
written and rewritten over evolutionary time. That means, effectively, that much of what we are,
link |
01:25:00.800
in fact, the important part of what we are, is housed in the cultural layer and the conscious
link |
01:25:05.840
layer and not in the hardware, hard-coded layer. That layer is prone to make errors, right? And
link |
01:25:14.320
anybody who's watched a child grow up knows that children make absurd errors all the time, right?
link |
01:25:20.480
That's part of the process, as we were discussing earlier. It is also true that as you look across
link |
01:25:26.480
a field of people discussing things, a lot of what is said is pure nonsense. It's garbage.
link |
01:25:32.960
But the tendency of garbage to emerge and even to spread on the short term does not say that over
link |
01:25:40.640
the long term, what sticks is not the valuable ideas. There is a high tendency for novelty to
link |
01:25:49.760
be created in the cultural space, but there's also a high tendency for it to go extinct.
link |
01:25:54.160
You have to keep that in mind. It's not like the genome, right? Everything is happening at a much
link |
01:25:58.560
higher rate. Things are being created. They're being destroyed. I can't say that, obviously,
link |
01:26:04.480
we've seen totalitarianism arise many times and it's very destructive each time it does. So,
link |
01:26:10.640
it's not like, hey, freedom to come up with any idea you want hasn't produced a whole lot of
link |
01:26:15.920
carnage, but the question is over time, does it produce more open, fairer, more decent societies?
link |
01:26:22.960
And I believe that it does. I can't prove it, but that does seem to be the pattern.
link |
01:26:27.600
I believe so as well. The thing is, in the short term, freedom of speech, absolute freedom of
link |
01:26:35.920
speech can be quite destructive, but you nevertheless have to hold on to that, because in the long
link |
01:26:43.360
term, I think you and I, I guess, are optimistic in the sense that good ideas will win out.
link |
01:26:50.160
I don't know how strongly I believe that it will work, but I will say I haven't heard a better idea.
link |
01:26:58.160
Yeah. I would also point out that there's something very significant in this question
link |
01:27:04.160
of the hubris involved in imagining that you're going to improve the discussion by censoring,
link |
01:27:09.520
which is the majority of concepts at the fringe are nonsense. That's automatic. But the heterodoxy
link |
01:27:22.160
at the fringe, which is indistinguishable at the beginning from the nonsense ideas,
link |
01:27:27.920
is the key to progress. So, if you decide, hey, the fringe is 99% garbage, let's just get rid
link |
01:27:34.720
of it, right? Hey, that's a strong win. We're getting rid of 99% garbage for 1% something or
link |
01:27:39.920
other. And the point is, yeah, but that 1% something or other is the key. You're throwing
link |
01:27:44.160
out the key. And so that's what YouTube is doing. Frankly, I think at the point that it started
link |
01:27:50.320
censoring my channel, you know, in the immediate aftermath of this major reversal on
link |
01:27:56.000
lab leak, it should have looked at itself and said, well, what the hell are we doing? Who are we
link |
01:27:59.760
censoring? We're censoring somebody who was just right, right, in a conflict with the very same
link |
01:28:04.720
people on whose behalf we are now censoring, right? That should have caused them to wake up.
link |
01:28:09.200
So, you said one approach, if you're on YouTube, is this basically let all videos go that do not
link |
01:28:15.840
violate the law? Well, I should fix that. Okay. I believe that that is the basic principle.
link |
01:28:20.560
Eric makes an excellent point about the distinction between ideas and personal attacks,
link |
01:28:26.640
doxing, these other things. So, I agree there's no value in allowing people to destroy each other's
link |
01:28:32.480
lives, even if there's a technical legal defense for it. Now, how you draw that line, I don't know.
link |
01:28:39.040
But, you know, what I'm talking about is, yes, people should be free to traffic in bad ideas,
link |
01:28:43.920
and they should be free to expose that the ideas are bad. And hopefully that process results in
link |
01:28:49.280
better ideas winning out. Yeah, there's an interesting line between, you know,
link |
01:28:53.440
like, ideas like the Earth is flat, which I believe you should not censor. And then,
link |
01:29:00.240
like, you start to encroach on, like, personal attacks. So, not, you know, doxing, yes, but,
link |
01:29:06.560
like, not even getting to that. Like, there's a certain point where it's like, that's no longer
link |
01:29:11.440
ideas, that's somehow not productive. Whereas it feels like believing the Earth
link |
01:29:20.240
is flat is somehow productive, because maybe there's a tiny percent chance it is. You know,
link |
01:29:26.640
like, it just feels like personal attacks don't. Well, you know, I'm torn on this
link |
01:29:33.360
because there's assholes in this world, there's fraudulent people in this world. So, sometimes
link |
01:29:38.240
personal attacks are useful to reveal that. But there's a line you can cross. Like, there's a
link |
01:29:45.360
comedy where people make fun of others. I think that's amazing, that's very powerful,
link |
01:29:50.640
and that's very useful, even if it's painful. But then there's like, once it gets to be,
link |
01:29:57.200
yeah, there's a certain line, it's a gray area where you cross where it's no longer
link |
01:30:01.760
in any possible world productive. And that's a really weird gray area for YouTube to operate in.
link |
01:30:09.120
And it feels like it should be a crowdsourced thing where people vote on it. But then again,
link |
01:30:14.400
do you trust the majority to vote on what is crossing the line or not? I mean, this is where,
link |
01:30:20.880
this is really interesting on this particular, like the scientific aspect of this. Do you think
link |
01:30:27.600
YouTube should take more of a stance? Not censoring, but to actually have scientists within YouTube,
link |
01:30:36.880
having these kinds of discussions, and then be able to almost speak out in a transparent way.
link |
01:30:42.000
As in, we're going to let this video stand, but here are all these other opinions,
link |
01:30:47.440
almost like take a more active role in its recommendation system in trying to present
link |
01:30:53.280
a full picture to you. Right now, they're not doing that. The recommender systems are not human
link |
01:30:59.600
fine-tuned. They're all based on how you click. And there are these clustering algorithms. They're
link |
01:31:05.600
not taking an active role on giving you the full spectrum of ideas in the space of science.
link |
01:31:10.880
They just censor or not. Well, at the moment, it's going to be pretty hard to compel me that
link |
01:31:17.280
these people should be trusted with any sort of curation or comment on matters of evidence because
link |
01:31:24.880
they have demonstrated that they are incapable of doing it well. You could make such an argument.
link |
01:31:30.720
And I guess I'm open to the idea of institutions that would look something like YouTube that would
link |
01:31:36.480
be capable of offering something valuable. And even just the fact of them literally curating
link |
01:31:42.160
things and putting some videos next to others implies something. So yeah, there's a question
link |
01:31:48.800
to be answered. But at the moment, no. At the moment, what it is doing is quite literally putting
link |
01:31:55.040
not only individual humans in tremendous jeopardy by censoring discussion of useful tools and making
link |
01:32:02.240
tools that are more hazardous than has been acknowledged seem safe. But it is also placing
link |
01:32:08.960
humanity in danger of a permanent relationship with this pathogen. I cannot emphasize enough
link |
01:32:15.360
how expensive that is. It's effectively incalculable. If the relationship becomes permanent,
link |
01:32:20.320
the number of people who will ultimately suffer and die from it is indefinitely large.
link |
01:32:25.280
Yeah, currently the algorithm is very rabbit hole driven, meaning if you click on
link |
01:32:33.600
flat earth videos, that's all you're going to be presented with. And you're not going to be
link |
01:32:40.000
nicely presented with arguments against the flat earth. And the flip side of that,
link |
01:32:46.560
if you watch like quantum mechanics videos or no, general relativity videos,
link |
01:32:50.640
it's very rare you're going to get in a recommendation, have you considered the earth
link |
01:32:54.000
as flat? And I think you should have both. Same with vaccines: videos that present the power and
link |
01:33:00.880
incredible biology and genetics behind the vaccine, you're rarely going to get videos
link |
01:33:09.840
from well respected scientific minds presenting possible dangers of the vaccine. And the vice
link |
01:33:17.200
versa is true as well, which is if you're looking at the dangers of the vaccine on YouTube,
link |
01:33:22.560
you're not going to get the highest quality of videos recommended to you. And I'm not talking
link |
01:33:27.920
about like manually inserted CDC videos that are like the most untrustworthy things you can possibly
link |
01:33:34.320
watch about how everybody should take the vaccine. It's the safest thing ever. No, it's about
link |
01:33:39.200
incredible. Again, MIT colleagues of mine, incredible biologists, virologists that will
link |
01:33:44.400
talk about the details of how the mRNA vaccines work and all those kinds of things. I think
link |
01:33:50.880
maybe this is me with the AI head on is I think the algorithm can fix a lot of this. And YouTube
link |
01:33:58.640
should build better algorithms and couple that with complete freedom of speech to expand
link |
01:34:08.080
what people are able to think about, always presenting varied views, not balance in some artificial
link |
01:34:14.240
way, hard coded way, but balance in a way that's crowdsourced. I think that's an algorithm problem
link |
01:34:20.080
that can be solved because then you can delegate it to the algorithm as opposed to this hard code
link |
01:34:28.480
censorship of basically creating artificial boundaries on what can and can't be discussed,
link |
01:34:36.080
instead creating a full spectrum of exploration that can be done and trusting the intelligence of
link |
01:34:42.400
people to do the exploration. Well, there's a lot there, I would say we have to keep in mind
link |
01:34:49.120
that we're talking about a publicly held company with shareholders and obligations to them and
link |
01:34:55.920
that that may make it impossible. And I remember many years ago back in the early days of Google,
link |
01:35:03.680
I remember a sense of terror at the loss of general search. It used to be that Google,
link |
01:35:12.880
if you searched, came up with the same thing for everyone and then it got personalized and for a
link |
01:35:18.800
while it was possible to turn off the personalization, which was still not great because if everybody
link |
01:35:23.440
else is looking at a personalized search and you can tune into one that isn't personalized,
link |
01:35:29.600
that doesn't tell you why the world is sounding the way it is. But nonetheless,
link |
01:35:33.520
it was at least an option and then that vanished. And the problem is, I think this is literally
link |
01:35:38.400
deranging us that in effect, what you're describing is unthinkable. It is unthinkable
link |
01:35:45.600
that in the face of a campaign to vaccinate people in order to reach herd immunity that
link |
01:35:53.680
YouTube would give you videos on hazards of vaccines, when how hazardous the vaccines are
link |
01:36:02.000
is an unsettled question. Why is it unthinkable? That doesn't make any sense from a company
link |
01:36:08.080
perspective. If intelligent people in large amounts are open minded and are thinking through the
link |
01:36:18.320
hazards and the benefits of a vaccine, a company should find the best videos to present what
link |
01:36:27.360
people are thinking about. Well, let's come up with a hypothetical. Let's come up with a
link |
01:36:31.680
very deadly disease for which there's a vaccine that is very safe, though not perfectly safe.
link |
01:36:40.000
And we are then faced with YouTube trying to figure out what to do for somebody searching
link |
01:36:45.200
on vaccine safety. Suppose it is necessary in order to drive the pathogen to extinction,
link |
01:36:51.360
something like smallpox, that people get on board with the vaccine.
link |
01:36:54.960
But there's a tiny fringe of people who thinks that the vaccine is a mind control agent.
link |
01:37:05.360
So should YouTube direct people to the only claims against this vaccine,
link |
01:37:12.400
which is that it's a mind control agent, when in fact the vaccine is very safe,
link |
01:37:20.880
whatever that means. If that were the actual configuration of the puzzle, then YouTube would
link |
01:37:26.640
be doing active harm pointing you to this other video potentially. Now, yes, I would love to
link |
01:37:34.320
live in a world where people are up to the challenge of sorting that out. But my basic
link |
01:37:40.080
point would be, if it's an evidentiary question, and there is essentially no evidence that the
link |
01:37:46.320
vaccine is a mind control agent and there's plenty of evidence that the vaccine is safe,
link |
01:37:50.640
then "well, you looked for this video, we're going to give you this one" puts it on a par, right?
link |
01:37:55.280
So for the mind that's tracking how much thought is there behind "it's safe" versus how much thought
link |
01:38:01.680
is there behind "it's a mind control agent," this will result in artificially elevating the latter. Now,
link |
01:38:07.840
in the current case, what we've seen is not this at all. We have seen evidence obscured in order
link |
01:38:14.640
to create a false story about safety. And we saw the inverse with ivermectin. We saw a campaign
link |
01:38:23.520
to portray the drug as more dangerous and less effective than the evidence clearly suggested
link |
01:38:30.000
it was. So we're not talking about a comparable thing. But I guess my point is the algorithmic
link |
01:38:35.520
solution that you point to creates a problem of its own, which is that it means that the way to get
link |
01:38:42.000
exposure is to generate something fringy. If you're the only thing on some fringe,
link |
01:38:46.720
then suddenly YouTube would be recommending those things. And that's obviously a gameable system at
link |
01:38:52.640
best. Yeah, but the solution to that, I know you're creating a thought experiment, maybe playing a
link |
01:38:58.160
little bit of a devil's advocate. I think the solution to that is not to limit the algorithm
link |
01:39:03.840
in the case of the super deadly virus. It's for the scientists to step up and become better
link |
01:39:08.880
communicators, more charismatic, to fight the battle of ideas, sort of create better videos.
link |
01:39:16.480
Like if the virus is truly deadly, you have a lot more ammunition, a lot more data,
link |
01:39:22.000
a lot more material to work with in terms of communicating with the public.
link |
01:39:26.560
So be better at communicating and stop being, you have to start trusting the intelligence of
link |
01:39:33.200
people and also being transparent and playing the game of the internet, which is like,
link |
01:39:37.600
what is the internet hungry for? I believe authenticity. Stop looking like you're full of
link |
01:39:44.080
shit. It's the scientific community. If there's any flaw that I currently see,
link |
01:39:50.400
especially the people that are in public office that like Anthony Fauci, they look like they're
link |
01:39:55.120
full of shit. And I know they're brilliant. Why don't they look more authentic? So they're losing
link |
01:40:00.800
that game. And I think a lot of people observing this entire system now, younger scientists are
link |
01:40:06.560
seeing this and saying, okay, if I want to continue being a scientist in the public eye,
link |
01:40:13.840
and I want to be effective on my job, I'm going to have to be a lot more authentic.
link |
01:40:18.000
So they're learning the lesson. This evolutionary system is working.
link |
01:40:22.400
So there's just a younger generation of minds coming up that I think will do a much better job
link |
01:40:27.040
in this battle of ideas that when the much more dangerous virus comes along,
link |
01:40:32.720
they'll be able to be better communicators. At least that's the hope. The using the algorithm
link |
01:40:37.920
to control that is, I feel like is a big problem. So you're going to have the same problem with a
link |
01:40:43.760
deadly virus as with the current virus. If you let YouTube draw hard lines by the PR and the
link |
01:40:51.600
marketing people versus the broad community of scientists. Well, in some sense, you're
link |
01:40:57.360
suggesting something that's close kin to what I was saying about freedom of expression ultimately
link |
01:41:05.600
provides an advantage to better ideas. So I'm in agreement, broadly speaking. But I would also
link |
01:41:11.440
say there's probably some sort of, let's imagine the world that you propose where YouTube shows
link |
01:41:16.720
you the alternative point of view. That has the problem that I suggest. But one thing you could
link |
01:41:22.560
do is you could give us the tools to understand what we're looking at. You could give us, so first
link |
01:41:28.640
of all, there's something, I think, myopic, solipsistic, narcissistic about an algorithm
link |
01:41:37.120
that serves shareholders by showing you what you want to see rather than what you need to know.
link |
01:41:42.640
That's the distinction. Flattering you, playing to your blind spot, is something that the
link |
01:41:48.240
algorithm will figure out, but it's not healthy for us all to have Google playing to our blind spot.
link |
01:41:53.360
It's very, very dangerous. So what I really want is analytics that allow me or maybe options and
link |
01:42:01.520
analytics, the options should allow me to see what alternative perspectives are being explored.
link |
01:42:09.040
So here's the thing I'm searching and it leads me down this road. Let's say it's ivermectin.
link |
01:42:13.520
I find all of this evidence that ivermectin works. I find all of these discussions and people
link |
01:42:17.760
talk about various protocols and this and that. And then I could say, all right, what is the other
link |
01:42:23.600
side? And I could see who is searching not as individuals, but what demographics are searching
link |
01:42:30.960
alternatives. And maybe you could even combine it with something Reddit like where effectively,
link |
01:42:37.120
let's say that there was a position that, I don't know, that a vaccine is a mind control device
link |
01:42:44.080
and you could have a steelman this argument competition effectively. And the better answers
link |
01:42:50.400
that steelman and as well as possible would rise to the top. And so you could read the top three
link |
01:42:54.960
or four explanations about why this really credibly is a mind control product. And you can say, well,
link |
01:43:02.240
that doesn't really add up. I can check these three things myself and they can't possibly be
link |
01:43:06.000
right. Right. And you could dismiss it. And then as an argument that was credible, let's say
link |
01:43:10.320
plate tectonics before that was an accepted concept, you'd say, wait a minute, there is evidence
link |
01:43:18.320
for plate tectonics. Crazy as it sounds that the continents are floating around on liquid.
link |
01:43:23.360
Actually, that's not so implausible. You know, we've got these subduction zones. We've got a geology
link |
01:43:29.280
that is compatible. We've got puzzle piece continents that seem to fit together. Wow,
link |
01:43:33.360
that's a surprising amount of evidence for that position. So I'm going to file some Bayesian
link |
01:43:38.080
probability with it that's updated for the fact that actually the steelman argument is better
link |
01:43:41.760
than I was expecting. Right. So I could imagine something like that where A, I would love the
link |
01:43:46.720
search to be indifferent to who's searching. Right. The solipsistic thing is too dangerous.
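The "file some Bayesian probability" move in the plate-tectonics example is just Bayes' rule in odds form: start with a low prior, and each independent line of evidence multiplies the odds by its likelihood ratio. A toy sketch, with the prior and all likelihood ratios invented for illustration:

```python
def bayes_update(prior, likelihood_ratio):
    """One Bayes update in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Prior: "the continents drift around" sounds implausible at first glance.
p = 0.05

# Invented likelihood ratios P(evidence | drift) / P(evidence | no drift)
# for the three observations mentioned in the conversation.
for lr in (4.0,   # subduction zones exist
           3.0,   # geology is compatible
           5.0):  # puzzle-piece coastlines fit together
    p = bayes_update(p, lr)

# A good steelman moves the needle: the posterior ends up around 0.76.
```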
link |
01:43:51.840
So the search could be general. So we would all get a sense for what everybody else was seeing too.
link |
01:43:56.880
And then some layer that didn't have anything to do with what YouTube points you to or not,
link |
01:44:01.760
but allowed you to see the general pattern of adherence to searching for information and,
link |
01:44:12.080
again, a layer in which those things could be defended so you could hear what a good argument
link |
01:44:15.920
sounded like rather than just hear a caricatured argument. Yeah. And also reward people, creators
link |
01:44:22.480
that have demonstrated like a track record of open mindedness and correctness as much as it
link |
01:44:28.800
could be measured over a long term and sort of, I mean, a lot of this maps to incentivizing
link |
01:44:40.400
good long term behavior, not immediate kind of dopamine rush kind of signals.
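One way to make "reward long term behavior, not dopamine signals" concrete in ranking terms: blend the immediate-engagement prediction with a longer-horizon satisfaction estimate, rather than ranking on click probability alone. The field names, model outputs, and weights here are all hypothetical:

```python
def rank_score(item, long_term_weight=0.8):
    """Blend immediate engagement with a long-horizon satisfaction estimate.
    Both inputs are assumed to be model predictions in [0, 1]."""
    w = long_term_weight
    return (1 - w) * item["predicted_ctr"] + w * item["predicted_30d_satisfaction"]

# Hypothetical candidates: clickbait wins on clicks, loses on satisfaction.
clickbait = {"predicted_ctr": 0.9, "predicted_30d_satisfaction": 0.1}
lecture = {"predicted_ctr": 0.2, "predicted_30d_satisfaction": 0.8}

# With weight 0 (pure CTR) the clickbait ranks first;
# with the long-term weight the lecture does.
```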
link |
01:44:48.480
I think ultimately the algorithm on the individual level should optimize for personal growth,
link |
01:45:00.160
long term happiness, just growth intellectually growth in terms of lifestyle personally and so
link |
01:45:07.280
on as opposed to immediate. I think that's going to build a better site, not even just like truth,
link |
01:45:13.360
because I think truth is a complicated thing. It's more just you growing as a person, exploring
link |
01:45:19.760
in the space of ideas, changing your mind often, increasing the level to which you're open minded,
link |
01:45:25.200
the knowledge base you're operating from, the willingness to empathize with others,
link |
01:45:31.280
all those kinds of things. The algorithm should optimize for creating a better human at
link |
01:45:35.440
the individual level. I think that's a great business model, because the person
link |
01:45:41.200
that's using this tool will then be happier with themselves for having used it, and it'll be a lifelong
link |
01:45:49.040
quote unquote customer. I think it's a great business model to make a happy, open minded,
link |
01:45:55.840
knowledgeable, better human being. It's a terrible business model under the current system.
link |
01:46:02.240
What you want is to build the system in which it is a great business model.
link |
01:46:05.520
Why is a terrible model? Because it will be decimated by those who play to the short term.
link |
01:46:12.800
I don't think so. I mean, I think we're living it. We're living it.
link |
01:46:17.200
Well, no, because if you have the alternative that presents itself,
link |
01:46:21.520
it points out the emperor has no clothes. It points out that YouTube is operating in this way,
link |
01:46:27.200
Twitter is operating in this way, Facebook is operating in this way.
link |
01:46:30.640
How long term would you like the wisdom to prove that? Even though a week is better
link |
01:46:38.560
when it's currently happening. Right. But the problem is, if a week loses out to an hour, right?
link |
01:46:46.640
I don't think it loses out. It loses out in the short term.
link |
01:46:49.520
That's my point. At least you're a great communicator and you basically say,
link |
01:46:54.000
look, here's the metrics. A lot of it is how people actually feel.
link |
01:46:58.160
This is what people experience with social media. They look back at the previous month and say,
link |
01:47:06.160
I felt shitty in a lot of days because of social media. If you look back at the previous few weeks
link |
01:47:13.440
and say, wow, I'm a better person because of that month happened, they immediately choose
link |
01:47:20.160
the product that's going to lead to that. That's what love for products looks like.
link |
01:47:24.400
If you love, a lot of people love their Tesla car or iPhone or beautiful design,
link |
01:47:31.760
that's what love looks like. You look back, I'm a better person for having used this thing.
link |
01:47:36.400
Well, you got to ask yourself the question though, if this is such a great business model,
link |
01:47:40.240
why isn't it evolving? Why don't we see it? Honestly, it's competence.
link |
01:47:45.600
It's like people are just, it's not easy to build new, it's not easy to build products, tools,
link |
01:47:54.560
systems on new ideas. It's kind of a new idea. We've gone through this,
link |
01:48:00.640
everything we're seeing now comes from the ideas of the initial birth of the internet.
link |
01:48:06.160
There just needs to be new sets of tools that are incentivizing long term personal growth
link |
01:48:12.400
and happiness. That's it. Right. But what we have is a market that doesn't favor this.
link |
01:48:18.480
For one thing, we had an alternative to Facebook where you owned your own data,
link |
01:48:25.760
it wasn't exploitative and Facebook bought a huge interest in it and it died. Who do you know
link |
01:48:33.200
who's on Diaspora? The execution there was not good. Right. But it could have gotten better.
link |
01:48:38.560
Right. I don't think that the argument of why hasn't somebody done it is
link |
01:48:44.640
a good argument that it's not going to completely destroy all of Twitter and Facebook when somebody
link |
01:48:49.840
does it or Twitter will catch up and pivot to the algorithm. This is not what I'm saying.
link |
01:48:56.160
There's obviously great ideas that remain unexplored because nobody has gotten to the
link |
01:49:01.120
foothill that would allow you to explore them. That's true. But an internet that was non predatory
link |
01:49:06.800
is an obvious idea and many of us know that we want it and many of us have seen prototypes
link |
01:49:12.880
of it and we don't move because there's no audience there. So the network effects
link |
01:49:16.640
cause you to stay with the predatory internet. But let me just, I wasn't kidding about build
link |
01:49:23.840
the system in which your idea is a great business plan. So in our upcoming book,
link |
01:49:30.960
Heather and I in our last chapter explore something called the fourth frontier and fourth
link |
01:49:35.040
frontier has to do with sort of a 2.0 version of civilization, which we freely admit we can't
link |
01:49:41.120
tell you very much about. It's something that would have to be, we would have to prototype our
link |
01:49:45.040
way there. We would have to effectively navigate our way there. But the result would be very much
link |
01:49:49.600
like what you're describing. It would be something that effectively liberates humans
link |
01:49:54.720
meaningfully and most importantly, it has to feel like growth without depending on growth.
link |
01:50:01.440
In other words, human beings are creatures that like every other creature is effectively looking
link |
01:50:07.600
for growth, right? We are looking for underexploited or unexploited opportunities. And when we find
link |
01:50:13.200
them, our ancestors, for example, if they happened upon a new valley that was unexplored by people,
link |
01:50:19.920
their population would grow until it hit carrying capacity. So there would be this great feeling
link |
01:50:23.760
of there's abundance until you hit carrying capacity, which is inevitable. And then zero
link |
01:50:27.840
sum dynamics would set in. So in order for human beings to flourish long term, the way to get there
link |
01:50:34.880
is to satisfy the desire for growth without hooking it to actual growth, which only moves and
link |
01:50:40.880
fits and starts. And this is actually, I believe the key to avoiding these spasms of human tragedy
link |
01:50:48.800
when in the absence of growth, people do something that causes their population to experience
link |
01:50:54.640
growth, which is they go and make war on or commit genocide against some other population,
link |
01:50:59.760
which is something we obviously have to stop. By the way, this is A Hunter-Gatherer's Guide to
link |
01:51:06.000
the 21st Century, coauthored with your wife, Heather, being released this September. I believe you
link |
01:51:11.840
said you're going to do a little preview video on each chapter leading up to the release.
link |
01:51:17.120
So I'm looking forward to the last chapter as well as all the previous ones. I have a few
link |
01:51:23.520
questions on that. So, to clarify, you generally have faith that technology could be the thing
link |
01:51:31.760
that empowers this kind of future? Well, if you just let technology evolve,
link |
01:51:40.400
it's going to be our undoing, right? One of the things that I fault my libertarian friends for
link |
01:51:48.160
is this faith that the market is going to find solutions without destroying us. And my sense is
link |
01:51:53.600
I'm a very strong believer in markets, right? I believe in their power even above some market
link |
01:51:59.440
fundamentalists. But what I don't believe is that they should be allowed to plot our course,
link |
01:52:05.920
right? Markets are very good at figuring out how to do things. They are not good at all about
link |
01:52:11.440
figuring out what we should do, right? What we should want. We have to tell markets what we want,
link |
01:52:16.480
and then they can tell us how to do it best. And if we adopted that kind of pro market but
link |
01:52:23.280
in a context where it's not steering, where human well being is actually the driver,
link |
01:52:28.800
we can do remarkable things and the technology that emerges would naturally be enhancing of human
link |
01:52:34.640
well being. Perfectly so, no, but overwhelmingly so. But at the moment, markets are finding our
link |
01:52:41.040
every defective character and exploiting them and making huge profits and making us worse to each
link |
01:52:46.720
other in the process. Before we leave COVID 19, let me ask you about a very difficult topic,
link |
01:52:57.680
which is the vaccines. So I took the Pfizer vaccine, the two shots. You did not. You have
link |
01:53:07.760
been taking ivermectin. So one of the arguments against the discussion of ivermectin is that
link |
01:53:19.360
it prevents people from being fully willing to get the vaccine. How would you compare ivermectin
link |
01:53:27.760
and the vaccine for COVID 19? All right. That's a good question. I would say, first of all,
link |
01:53:34.480
there are some hazards with the vaccine that people need to be aware of. There are some things that
link |
01:53:39.280
we cannot rule out and for which there is some evidence. The two that I think people should
link |
01:53:45.360
be tracking is the possibility, some would say a likelihood that a vaccine of this nature,
link |
01:53:53.520
that is to say very narrowly focused on a single antigen, is an evolutionary pressure
link |
01:54:02.240
that will drive the emergence of variants that will escape the protection that comes from the
link |
01:54:07.200
vaccine. So this is a hazard. It is a particular hazard in light of the fact that these vaccines
link |
01:54:15.120
have a substantial number of breakthrough cases. So one danger is that a person who has been
link |
01:54:21.360
vaccinated will shed viruses that are specifically less visible or invisible to the immunity created
link |
01:54:29.600
by the vaccines. So we may be creating the next pandemic by applying the pressure of vaccines
link |
01:54:37.280
at a point that it doesn't make sense to. The other danger has to do with something
link |
01:54:42.160
called antibody dependent enhancement, which is something that we see in certain diseases
link |
01:54:47.120
like dengue fever. You may know that with dengue, one gets a case and then their second case is much
link |
01:54:53.200
more devastating. So break bone fever is when you get your second case of dengue and dengue
link |
01:54:58.960
effectively utilizes the immune response that is produced by prior exposure to attack the body
link |
01:55:05.600
in ways that it is incapable of doing before exposure. This pattern
link |
01:55:10.720
has apparently blocked past efforts to make vaccines against coronaviruses; whether it
link |
01:55:17.680
will happen here or not, it is still too early to say. But before we even get to the question
link |
01:55:22.160
of harm done to individuals by these vaccines, we have to ask about what the overall impact is
link |
01:55:29.120
going to be. And it's not clear in the way people think it is that if we vaccinate enough people,
link |
01:55:33.680
the pandemic will end. It could be that we vaccinate people and make the pandemic worse.
link |
01:55:38.560
And while nobody can say for sure that that's where we're headed, it is at least something to
link |
01:55:43.120
be aware of. So don't vaccines usually create that kind of evolutionary pressure to create
link |
01:55:49.280
more deadly, different strains of the virus? So is there something particular with these mRNA
link |
01:55:58.000
vaccines that's uniquely dangerous in this regard? Well, it's not even just the mRNA vaccines. The
link |
01:56:03.440
mRNA vaccines and the adeno vector DNA vaccine all share the same vulnerability, which is they are
link |
01:56:09.840
very narrowly focused on one subunit of the spike protein. So that is a very concentrated
link |
01:56:16.080
evolutionary signal. We are also deploying it in mid pandemic, and it takes time for immunity
link |
01:56:22.400
to develop. So part of the problem here, if you inoculated a population before encounter with
link |
01:56:30.080
a pathogen, then there might be substantially enough immunity to prevent this phenomenon from
link |
01:56:36.400
happening. But in this case, we are inoculating people as they are encountering those who are
link |
01:56:42.480
sick with the disease. And what that means is the disease is now faced with a lot of opportunities
link |
01:56:48.400
to effectively, evolutionarily practice escape strategies. So one thing is the timing. The
link |
01:56:54.640
other thing is the narrow focus. Now, in a traditional vaccine, you would typically not have
link |
01:56:59.520
one antigen, right? You would have basically a virus full of antigens, and the immune system
link |
01:57:05.360
would therefore produce a broader response. So that is the case for people who have had COVID,
link |
01:57:11.440
right? They have an immunity that is broader because it wasn't so focused on one part of the
link |
01:57:15.360
spike protein. So anyway, there is something unique here. So these platforms create that
link |
01:57:20.960
special hazard. They also have components that we haven't used before in people. So for example,
link |
01:57:26.480
the lipid nanoparticles that coat the RNAs are distributing themselves around the body in a way
link |
01:57:33.840
that will have unknown consequences. So anyway, there's reason for concern. Is it possible for
link |
01:57:41.520
you to steal man the argument that everybody should get vaccinated? Of course. The argument that
link |
01:57:49.440
everybody should get vaccinated is that nothing is perfectly safe. Phase three trials showed
link |
01:57:56.480
good safety for the vaccines. Now, that may or may not be actually true, but what we saw suggested
link |
01:58:03.680
high degree of efficacy and a high degree of safety for the vaccines that inoculating people
link |
01:58:10.320
quickly and therefore dropping the landscape of available victims for the pathogen to a very low
link |
01:58:18.080
number so that herd immunity drives it to extinction requires us all to take our share of the risk
link |
01:58:23.680
and that because driving it to extinction should be our highest priority that really
link |
01:58:30.960
people shouldn't think too much about the various nuances because overwhelmingly fewer people will
link |
01:58:37.760
die if the population is vaccinated from the vaccine than will die from COVID if they're not
link |
01:58:43.360
vaccinated. And with the vaccine increasingly being deployed, that is quite a likely scenario
link |
01:58:48.960
that everything, you know, the virus will fade away in the following sense that the probability
link |
01:58:55.760
that a more dangerous strain will be created is nonzero, but it's not 50%. It's something smaller.
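For reference on the "herd immunity drives it to extinction" logic in the steelman above: the textbook threshold is that transmission declines once the immune fraction exceeds 1 - 1/R0. This simple model ignores exactly the complications raised here (escape variants, breakthrough infections), and the R0 values below are illustrative, not measurements:

```python
def herd_immunity_threshold(r0):
    """Immune fraction at which each case infects fewer than one other:
    effective R = r0 * (1 - immune_fraction) < 1  =>  immune > 1 - 1/r0."""
    return 1 - 1 / r0

# Illustrative values: a pathogen with r0 = 3 needs roughly 67% immune,
# a more transmissible one with r0 = 6 needs roughly 83%.
```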
link |
01:59:04.320
And so, well, I don't know, maybe you disagree with that, but the scenario we're
link |
01:59:08.960
most likely to see now that the vaccine is here is that the virus will fade away.
link |
01:59:21.520
First of all, I don't believe that the probability of creating a worse pandemic is
link |
01:59:25.600
low enough to discount. I think the probability is fairly high. And frankly, we are seeing a wave
link |
01:59:31.600
of variants that we will have to do a careful analysis to figure out what exactly that has
link |
01:59:38.640
to do with campaigns of vaccination where they have been, where they haven't been, where the
link |
01:59:42.640
variants emerged from. But I believe that what we are seeing is a disturbing pattern
link |
01:59:46.880
that reflects that those who were advising caution may well have been right.
link |
01:59:51.680
The data here, by the way, and this is a small tangent, is terrible.
link |
01:59:55.120
Terrible. Right. And why is it terrible is another question, right?
link |
01:59:59.520
This is where I started getting angry. Yes.
link |
02:00:01.440
It's like, there's an obvious opportunity for exceptionally good data, for exceptionally
link |
02:00:07.040
rigorous, even the website for self reporting side effects for, not side effects, but negative
link |
02:00:13.120
effects. Adverse events.
link |
02:00:14.320
Adverse events, sorry, for the vaccine. There's many things I could say from both the study
link |
02:00:21.440
perspective. But mostly, let me just put on my hat of HTML and web design. It's like the
link |
02:00:31.440
worst website. It makes it so unpleasant to report. It makes it so unclear what you're
link |
02:00:36.080
reporting. If somebody actually has serious effects, like if you have very mild effects,
link |
02:00:40.480
what are the incentives for you to even use that crappy website with many pages and forms
link |
02:00:46.240
that don't make any sense? If you have adverse effects, what are the incentives for you to
link |
02:00:50.640
use that website? What is the trust that you have that this information will be used?
link |
02:00:56.240
Well, all those kinds of things. And the data about who's getting vaccinated,
link |
02:01:01.040
anonymized data about who's getting vaccinated, where, when, with what vaccine, coupled with
link |
02:01:07.600
the adverse effects, all of that we should be collecting. Instead, we're completely not.
link |
02:01:13.200
We're doing it in a crappy way. And using that crappy data to make conclusions that you then
link |
02:01:18.560
twist, you're basically collecting in a way that can arrive at whatever conclusions you want.
link |
02:01:25.520
And the data is being collected by the institutions, by governments. And so therefore, it's obviously
link |
02:01:32.080
they're going to try to construct any kind of narratives they want based on this crappy data.
link |
02:01:36.480
It reminds me of much of psychology, the field that I love, but which is flawed in many fundamental
link |
02:01:41.920
ways. So rant over, but coupled with the dangers that you're speaking to, we don't have even the
link |
02:01:48.160
data to understand the dangers. Yeah, I'm going to pick up on your rant and say, estimates of
link |
02:01:57.600
the degree of underreporting in VAERS are that it captures from 10% of the real number to 100%, and that's the
link |
02:02:06.560
system for reporting. Yeah, the VAERS system is the system for reporting adverse events. So
link |
02:02:11.200
in the US, we have above 5,000 unexpected deaths that seem in time to be associated with vaccination.
link |
02:02:21.920
That is an undercount almost certainly, and by a large factor; we don't know how large. I've seen
link |
02:02:29.360
estimates 25,000 dead in the US alone. Now, you can make the argument that, okay, that's a large
link |
02:02:38.480
number, but the necessity of immunizing the population to drive SARS CoV2 to extinction
link |
02:02:45.360
is such that it's an acceptable number. But I would point out that that actually does not
link |
02:02:50.000
make any sense. And the reason it doesn't make any sense is actually there are several reasons.
link |
02:02:54.240
One, if that was really your point that, yes, many, many people are going to die,
link |
02:02:59.840
but many more will die if we don't do this. Were that your approach, you would not be
link |
02:03:05.920
inoculating people who had had COVID 19, which is a large population. There's no reason to expose
link |
02:03:12.080
those people to danger. Their risk of adverse events in the case that they have them is greater.
link |
02:03:18.160
So there's no reason that we would be allowing those people to face a risk of death if this was
link |
02:03:22.720
really about an acceptable number of deaths arising out of this set of vaccines. I would also
link |
02:03:29.600
point out there's something incredibly bizarre. And I would, I struggle to find language that is
link |
02:03:36.160
strong enough for the horror of vaccinating children in this case, because children suffer a
link |
02:03:45.360
greater risk of long term effects because they are going to live longer. And because this is
link |
02:03:50.560
earlier in their development, therefore it impacts systems that are still forming,
link |
02:03:54.880
and they tolerate COVID well. And so the benefit to them is very small. And so the only argument
link |
02:04:03.280
for doing this is that they may cryptically be carrying more COVID than we think,
link |
02:04:07.440
and therefore they may be integral to the way the virus spreads to the population.
link |
02:04:11.760
But if that's the reason that we are inoculating children, and there has been some revision in
link |
02:04:15.520
the last day or two about the recommendation on this because of the adverse events that have
link |
02:04:20.000
shown up in children, but to the extent that we were vaccinating children, we were doing it
link |
02:04:25.760
to protect old, infirm people who are the most likely to succumb to COVID 19.
link |
02:04:32.720
What society puts children in danger, robs children of life, to save old,
link |
02:04:40.000
infirm people? That's upside down. So there's something about the way we are going about
link |
02:04:45.840
vaccinating, who we are vaccinating, what dangers we are pretending don't exist, that suggests that
link |
02:04:53.360
to some set of people, vaccinating people is a good in and of itself, that that is the objective
link |
02:04:59.600
of the exercise, not herd immunity. And the last thing, sorry, I don't want to prevent you from
link |
02:05:04.560
jumping in here, but the second reason, in addition to the fact that we're exposing people to danger
link |
02:05:09.040
that we should not be exposing them to.
link |
02:05:11.680
By the way, as a tiny tangent, another huge part of this soup that should have been part of it,
link |
02:05:17.360
that's an incredible solution, is large scale testing. But that might be another couple of
link |
02:05:23.920
hour conversation, but there's these solutions that are obvious, that were available from the
link |
02:05:29.360
very beginning. So you could argue that Ivermectin is not that obvious, but maybe the whole point
link |
02:05:36.720
is you have aggressive, very fast research that leads to meta analysis and then large scale
link |
02:05:43.840
production and deployment. Okay, at least that possibility should be seriously considered,
link |
02:05:51.040
coupled with a serious consideration of large scale deployment of testing, at home testing,
link |
02:05:56.800
that could have accelerated the speed at which we reached that herd immunity. But I don't even want to...
link |
02:06:08.480
Well, let me just say, I am also completely shocked that we did not get on high quality
link |
02:06:13.920
testing early and that we are still suffering from this even now, because just the simple
link |
02:06:19.760
ability to track where the virus moves between people would tell us a lot about its mode of
link |
02:06:25.040
transmission, which would allow us to protect ourselves better. Instead, that information was
link |
02:06:31.040
hard won and for no good reason. So I also find this mysterious.
link |
02:06:35.760
You've spoken with Eric Weinstein, your brother, on his podcast, The Portal, about the ideas that
link |
02:06:43.840
eventually led to the paper you published titled, The Reserved Capacity Hypothesis.
link |
02:06:48.240
I think first, can you explain this paper and the ideas that led up to it?
link |
02:06:59.040
Sure. Easier to explain the conclusion of the paper. There's a question about why a creature
link |
02:07:08.480
that can replace its cells with new cells grows feeble and inefficient with age. We call that
link |
02:07:15.680
process, which is otherwise called aging, senescence. And senescence, in this paper,
link |
02:07:24.400
it is hypothesized, is the unavoidable downside of a cancer prevention
link |
02:07:32.880
feature of our bodies, that each cell has a limit on the number of times it can divide.
link |
02:07:40.640
There are a few cells in the body that are exceptional, but most of our cells can only
link |
02:07:44.560
divide a limited number of times. That's called the Hayflick limit. And the Hayflick limit
link |
02:07:50.160
reduces the ability of the organism to replace tissues. It therefore
link |
02:07:56.720
results in a failure over time of maintenance and repair. And that explains why we become
link |
02:08:03.680
decrepit as we grow old. The question was, why would that be, especially in light of the fact
link |
02:08:10.720
that the mechanism that seems to limit the ability of cells to reproduce is something
link |
02:08:17.600
called a telomere. Telomere is not a gene, but it's a DNA sequence at the ends of our chromosomes
link |
02:08:24.720
that is just simply repetitive. And the number of repeats functions like a counter.
link |
02:08:30.080
So there's a number of repeats that you have after development is finished. And then each
link |
02:08:34.720
time the cell divides, a little bit of telomere is lost. And at the point that the telomere becomes
link |
02:08:39.040
critically short, the cell stops dividing even though it still has the capacity to do so.
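The counter mechanism just described, a fixed number of repeats, a little lost at each division, and a hard stop once the telomere is critically short, can be sketched as a toy simulation. All numbers below are illustrative placeholders, not measured biology:

```python
# Toy model of the telomere counter: each division consumes some telomeric
# repeats, and division halts (the Hayflick limit) once the telomere would
# drop below a critical length. Every number here is an illustrative guess.

def divisions_until_senescence(initial_repeats: int,
                               loss_per_division: int,
                               critical_length: int) -> int:
    """Return how many divisions occur before the telomere is critically short."""
    repeats = initial_repeats
    divisions = 0
    while repeats - loss_per_division >= critical_length:
        repeats -= loss_per_division  # a bit of telomere is lost each division
        divisions += 1
    return divisions

# A longer starting telomere buys proportionally more divisions, which is
# the crux of the lab-mouse discussion later in the conversation:
print(divisions_until_senescence(10_000, 100, 4_000))  # 60
print(divisions_until_senescence(50_000, 100, 4_000))  # 460
```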
link |
02:08:44.480
It stops dividing and it starts transcribing different genes than it did when it had more
link |
02:08:48.880
telomere. So what my work did was it looked at the fact that the telomeric shortening was being
link |
02:08:56.160
studied by two different groups. It was being studied by people who were interested in counteracting
link |
02:09:02.080
the aging process. And it was being studied in exactly the opposite fashion by people who were
link |
02:09:07.120
interested in tumorigenesis and cancer. The thought being, because it was true that when
link |
02:09:12.720
one looked into tumors, they always had telomerase active. That's the enzyme that lengthens our
link |
02:09:18.080
telomeres. So those folks were interested in bringing about a halt to the lengthening of
link |
02:09:25.040
telomeres in order to counteract cancer. And the folks who were studying the senescence process
link |
02:09:30.400
were interested in lengthening telomeres in order to generate greater repair capacity.
link |
02:09:36.000
And my point was evolutionarily speaking, this looks like a pleiotropic effect that the
link |
02:09:45.120
genes which create the tendency of the cells to be limited in their capacity to replace themselves
link |
02:09:53.120
are providing a benefit in youth, which is that we are largely free of tumors and cancer
link |
02:09:59.120
at the inevitable late life cost that we grow feeble and inefficient and eventually die.
link |
02:10:04.080
And that matches a very old hypothesis in evolutionary theory by somebody I was fortunate
link |
02:10:11.760
enough to know, George Williams, one of the great 20th century evolutionists who argued
link |
02:10:17.440
that senescence would have to be caused by pleiotropic genes that cause early life benefits
link |
02:10:23.600
at unavoidable late life costs. And although this isn't the exact nature of the system he
link |
02:10:29.280
predicted, it matches what he was expecting in many regards to a shocking degree.
link |
02:10:35.760
That said, the focus of the paper, well, let me just read the end of the abstract.
link |
02:10:48.960
We observed that captive rodent breeding protocols designed to increase reproductive
link |
02:10:52.960
output simultaneously exert strong selection against reproductive senescence and virtually
link |
02:10:58.640
eliminate selection that would otherwise favor tumor suppression. This appears to have greatly
link |
02:11:04.960
elongated the telomeres of laboratory mice, leaving their telomeric failsafe effectively disabled.
link |
02:11:10.560
These animals are unreliable models of normal senescence and tumor formation.
link |
02:11:15.120
So basically using these mice is not going to lead to the right kinds of conclusions.
link |
02:11:21.440
Safety tests employing these animals likely overestimate cancer risks and underestimate
link |
02:11:27.600
tissue damage and consequent accelerated senescence. So I think, especially with your
link |
02:11:35.760
discussion with Eric, the conclusion of this paper has to do with the fact that we shouldn't be
link |
02:11:44.080
using these mice to test the safety or to make conclusions about cancer or senescence. Is that
link |
02:11:53.360
the basic takeaway? Basically saying that the length of these telomeres is an important variable
link |
02:11:58.880
to consider. Well, let's put it this way. I think there was a reason that the world of scientists
link |
02:12:05.600
who was working on telomeres did not spot the pleiotropic relationship that was the key argument
link |
02:12:14.320
in my paper. The reason they didn't spot it was that there was a result that everybody knew which
link |
02:12:20.720
seemed inconsistent. The result was that mice have very long telomeres, but they do not have
link |
02:12:28.080
very long lives. Now, we can talk about what the actual meaning of don't have very long lives is,
link |
02:12:34.400
but in the end, I was confronted with a hypothesis that would explain a great many features of the
link |
02:12:41.680
way mammals and indeed vertebrates age, but it was inconsistent with one result. And at first,
link |
02:12:47.120
I thought maybe there's something wrong with the result. Maybe this is one of these cases where
link |
02:12:51.600
the result was achieved once through some bad protocol and everybody else was repeating it.
link |
02:12:57.520
Didn't turn out to be the case. Many laboratories had established that mice had ultra long telomeres.
link |
02:13:02.720
And so I began to wonder whether or not there was something about the breeding protocols
link |
02:13:09.440
that generated these mice. And what that would predict is that the mice that have long telomeres
link |
02:13:14.880
would be laboratory mice and that wild mice would not. And Carol Greider, who agreed to collaborate
link |
02:13:22.000
with me, tested that hypothesis and showed that it was indeed true that wild derived mice or at
link |
02:13:29.280
least mice that had been in captivity for a much shorter period of time did not have ultra long
link |
02:13:33.600
telomeres. Now, what this implied though, as you read, is that our breeding protocols generate
link |
02:13:41.760
lengthening of telomeres. And the implication of that is that the animals that have these very
link |
02:13:46.320
long telomeres will be hyper prone to create tumors. They will be extremely resistant to
link |
02:13:53.600
toxins because they have effectively an infinite capacity to replace any damaged tissue. And so
link |
02:13:59.760
ironically, if you give one of these ultra long telomere lab mice a toxin, if the toxin doesn't
link |
02:14:07.360
outright kill it, it may actually increase its lifespan because it functions as a kind of chemotherapy.
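The paradox Brett is describing follows from simple arithmetic: if vulnerability to a toxin scales with how often a cell divides, a fast-dividing tumor compartment is hit much harder than mostly quiescent normal tissue. A minimal sketch, with made-up hypothetical rates:

```python
# Toy sketch of the "toxin as accidental chemotherapy" logic: assume each
# division during exposure kills some fraction of the dividing cells, so
# populations that divide more often are hit harder. Rates are invented.

def surviving_fraction(kill_per_division: float, divisions: int) -> float:
    """Fraction of a cell population left if each division risks death."""
    return (1.0 - kill_per_division) ** divisions

# Over one treatment window: tumor cells divide 20 times, normal cells twice.
tumor_left = surviving_fraction(0.10, 20)   # about 0.12
normal_left = surviving_fraction(0.10, 2)   # 0.81

# The tumor compartment shrinks far faster than healthy tissue is damaged,
# so a toxin can paradoxically lengthen the lifespan of a tumor-prone mouse.
print(f"tumor: {tumor_left:.2f}, normal: {normal_left:.2f}")
```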
link |
02:14:14.240
So the reason that chemotherapy works is that dividing cells are more vulnerable than cells
link |
02:14:19.520
that are not dividing. And so if this mouse has effectively had its cancer protection turned off
link |
02:14:26.080
and it has cells dividing too rapidly and you give it a toxin, you will slow down its tumors
link |
02:14:31.360
faster than you harm its other tissues. And so you'll get a paradoxical result that actually
link |
02:14:36.560
some drug that's toxic seems to benefit the mouse. Now, I don't think that that was understood
link |
02:14:43.040
before I published my paper. Now I'm pretty sure it has to be. And the problem is that this actually
link |
02:14:48.800
is a system that serves pharmaceutical companies that have the difficult job of bringing compounds
link |
02:14:56.080
to market, many of which will be toxic, maybe all of them will be toxic. And these mice predispose
link |
02:15:04.160
our system to declare these toxic compounds safe. And in fact, I believe we've seen the errors that
link |
02:15:10.480
result from using these mice a number of times, most famously with Vioxx, which turned out to do
link |
02:15:15.840
conspicuous heart damage. Why do you think this paper on this idea has not gotten significant
link |
02:15:22.240
traction? Well, my collaborator, Carol Greider, said something to me that rings in my ears to this
link |
02:15:31.280
day. Initially, after she showed that laboratory mice have anomalously long telomeres and that
link |
02:15:37.520
wild mice don't have long telomeres, I asked her where she was going to publish that result so that
link |
02:15:42.400
I could cite it in my paper. And she said that she was going to keep the result in house rather than
link |
02:15:48.640
publish it. And at the time, I was a young graduate student, I didn't really understand what she was
link |
02:15:55.760
saying. But in some sense, the knowledge that a model organism is broken in a way that creates
link |
02:16:04.000
the likelihood that certain results will be reliably generatable, you can publish a paper and make a
link |
02:16:10.000
big splash with such a thing, or you can exploit the fact that you know how those models will
link |
02:16:14.960
misbehave and other people don't. So there's a question, if somebody is motivated cynically,
link |
02:16:21.520
and what they want to do is appear to have deeper insight into biology because they predict things
link |
02:16:26.720
better than others do, knowing where the flaw is so that your predictions come out true is
link |
02:16:32.960
advantageous. At the same time, I can't help but imagine that the pharmaceutical industry,
link |
02:16:39.440
when it figured out that the mice were predisposed to suggest that drugs were safe, didn't leap to
link |
02:16:46.000
fix the problem, because in some sense, it was the perfect cover for the difficult job of
link |
02:16:52.800
bringing drugs to market and then discovering their actual toxicity profile, right? This made
link |
02:16:58.000
things look safer than they were. And I believe a lot of profits have likely been generated downstream.
link |
02:17:03.920
So to kind of play devil's advocate, it's also possible that this particular variable, the length of
link |
02:17:10.640
the telomeres, is not a strong variable for the drug development and for
link |
02:17:15.280
the conclusions that Carol and others have been studying. Is it possible for that to be the case?
link |
02:17:22.720
So one reason she and others could be ignoring this is because it's not a strong variable.
link |
02:17:29.360
Well, I don't believe so. And in fact, at the point that I went to publish my paper,
link |
02:17:34.080
Carol published her result. She did so in a way that did not make a huge splash.
link |
02:17:39.120
I apologize, I don't know how. What was the emphasis of her publication of that paper? Was
link |
02:17:49.920
it purely just kind of showing data? Or was there more? Because in your paper, there's a kind of
link |
02:17:54.640
more of a philosophical statement as well. Well, my paper was motivated by interest in the evolutionary
link |
02:18:02.000
dynamics around senescence. I wasn't pursuing grants or anything like that. I was just working
link |
02:18:08.160
on a puzzle I thought was interesting. Carol has, of course, gone on to win a Nobel Prize for her
link |
02:18:15.440
co-discovery, with Elizabeth Blackburn, of telomerase, the enzyme that lengthens telomeres.
link |
02:18:21.600
But anyway, she's a heavy hitter in the academic world. I don't know exactly what her purpose was.
link |
02:18:27.680
I do know that she told me she wasn't planning to publish. And I do know that I discovered that
link |
02:18:31.680
she was in the process of publishing very late. And when I asked her to send me the paper to see
link |
02:18:37.200
whether or not she had put evidence in it that the hypothesis had come from me, she grudgingly
link |
02:18:44.320
sent it to me. And my name was nowhere mentioned. And she broke contact at that point. What it is
link |
02:18:51.600
that motivated her, I don't know. But I don't think it can possibly be that this result is
link |
02:18:55.760
unimportant. The fact is, the reason I called her in the first place and established contact that
link |
02:19:02.320
generated our collaboration was that she was a leading light in the field of
link |
02:19:07.440
telomeric studies. And because of that, this question about whether the model organisms
link |
02:19:14.000
are distorting the understanding of the functioning of telomeres is central.
link |
02:19:20.880
Do you feel like you've been, as a young graduate student, do you think Carol or do you think the
link |
02:19:27.600
scientific community broadly screwed you over in some way?
link |
02:19:31.200
You know, I don't think of it in those terms, probably partly because it's not productive.
link |
02:19:37.360
But I have a complex relationship with this story. On the one hand, I'm livid with Carol
link |
02:19:44.480
Greider for what she did. She absolutely pretended that I didn't exist in this story.
link |
02:19:49.920
And I don't think I was a threat to her. My interest was as an evolutionary biologist,
link |
02:19:54.560
I had made an evolutionary contribution. She had tested a hypothesis. And frankly,
link |
02:19:59.440
I think it would have been better for her if she had acknowledged what I had done.
link |
02:20:03.680
I think it would have enhanced her work. And you know, I was, let's put it this way,
link |
02:20:10.400
when I watched her Nobel lecture, and I should say there's been a lot of confusion about this
link |
02:20:14.240
Nobel stuff, I've never said that I should have gotten a Nobel Prize. People have misportrayed that.
link |
02:20:20.080
In listening to her lecture, I had one of the most bizarre emotional experiences of my life,
link |
02:20:30.080
because she presented the work that resulted from my hypothesis. She presented it as she had in her
link |
02:20:38.000
paper with no acknowledgement of where it had come from. And she had, in fact, portrayed the
link |
02:20:46.240
distortion of the telomeres as if it were a lucky fact, because it allowed testing hypotheses
link |
02:20:53.040
that would otherwise not be testable. You have to understand, as a young scientist,
link |
02:21:00.560
to watch work that you have done presented in what's surely the most important lecture of her
link |
02:21:08.000
career. It's thrilling. It was thrilling to see her figures projected on the screen there,
link |
02:21:18.480
to have been part of work that was important enough for that. It felt great. And of course,
link |
02:21:23.040
to be erased from the story felt absolutely terrible. So anyway, that's sort of where I am
link |
02:21:29.680
with it. My sense is, what I'm really troubled by in the story is the fact that as far as I know,
link |
02:21:41.760
the flaw with the mice has not been addressed. And actually, Eric did some looking into this. He
link |
02:21:48.000
tried to establish by calling the Jax lab and trying to ascertain what had happened with the
link |
02:21:53.680
colonies, whether any change in protocol had occurred. And he couldn't get anywhere. There
link |
02:21:58.640
was seemingly no awareness that it was even an issue. So I'm very troubled by the fact that as
link |
02:22:05.360
a father, for example, I'm in no position to protect my family from the hazard that I believe
link |
02:22:11.840
lurks in our medicine cabinets. Even though I'm aware of where the hazard comes from,
link |
02:22:17.200
it doesn't tell me anything useful about which of these drugs will turn out to do damage if that
link |
02:22:21.360
is ultimately tested. And that's a very frustrating position to be in. On the other hand, there's
link |
02:22:27.520
a part of me that's even still grateful to Carol for taking my call. She didn't have to take my
link |
02:22:32.560
call and talk to some young graduate student who had some evolutionary idea that wasn't
link |
02:22:37.840
in her wheelhouse specifically. And yet she did. And for a while, she was a good collaborator.
link |
02:22:43.600
So I have to proceed carefully here. It's a complicated topic. She took the call, and you're
link |
02:22:58.560
saying that she basically erased credit pretending you didn't exist in a certain sense.
link |
02:23:08.080
Let me phrase it this way. As a research scientist at MIT, I've been
link |
02:23:20.560
part of a large set of collaborations. I've had a lot of students come to me
link |
02:23:28.320
and talk to me about ideas, perhaps less interesting than what we're discussing here
link |
02:23:33.520
in the space of AI that I've been thinking about anyway. In general, everything I'm doing with
link |
02:23:40.640
robotics, people have told me a bunch of ideas that I'm already thinking about. The point is
link |
02:23:50.240
taking that idea, see, this is different because the idea has more power in the space that we're
link |
02:23:55.440
talking about here. In robotics, it's like your idea means shit until you build it. So the
link |
02:24:00.640
engineering world is a little different. But there's a kind of sense that I probably forgot
link |
02:24:07.840
a lot of brilliant ideas that have been told to me. Do you think she pretended you don't exist?
link |
02:24:14.640
Do you think she was so busy that she kind of forgot? She has like the stream of brilliant
link |
02:24:21.920
people around her that there's a bunch of ideas that are swimming in the air. And you just kind
link |
02:24:27.120
of forget people that are a little bit on the periphery on the idea generation. Or is it some
link |
02:24:32.560
mix of both? It's not a mix of both. I know that because we corresponded. She put a graduate
link |
02:24:40.560
student on this work. He emailed me excitedly when the results came in. So there was no ambiguity
link |
02:24:48.400
about what had happened. What's more, when I went to publish my work, I actually sent it to Carol
link |
02:24:54.160
in order to get her feedback because I wanted to be a good collaborator to her. And she absolutely
link |
02:25:02.080
panned it, made many critiques that were not valid. But it was clear at that point that she became
link |
02:25:08.240
an antagonist. And given all of this, she couldn't possibly have forgotten the conversation.
link |
02:25:16.800
I believe I even sent her tissues at some point, not related to this project,
link |
02:25:22.880
but as a favor, she was doing another project that involved telomeres. And she needed samples that
link |
02:25:27.120
I could get a hold of because of the Museum of Zoology that I was in. So this was not a one off
link |
02:25:33.360
conversation. I certainly know that those sorts of things can happen, but that's not what happened
link |
02:25:37.040
here. This was a relationship that existed and then was suddenly cut short at the point that she
link |
02:25:44.720
published her paper by surprise without saying where the hypothesis had come from
link |
02:25:48.240
and began to be an opposing force to my work.
link |
02:25:53.920
There's a bunch of trajectories you could have taken through life.
link |
02:26:00.240
Do you think about the trajectory of being a researcher of then going to war in the space
link |
02:26:09.120
of ideas of publishing further papers along this line? I mean, that's often the dynamic of
link |
02:26:16.320
that fascinating space is you have a junior researcher with brilliant ideas and a senior
link |
02:26:22.400
researcher that starts out as a mentor and becomes a competitor. I mean,
link |
02:26:26.560
that happens. But then it's almost an opportunity to shine: to publish a bunch
link |
02:26:34.640
more papers in this space, to tear it apart, to dig in, to really make it a war of ideas.
link |
02:26:42.480
Did you consider that possible trajectory? I did. I have a couple of things to say about it.
link |
02:26:48.320
One, this work was not central for me. I took a year on the telomere project because something
link |
02:26:55.440
fascinating occurred to me and I pursued it. And the more I pursued it, the clearer it was.
link |
02:27:00.080
There was something there, but it wasn't the focus of my graduate work. And I didn't want to become
link |
02:27:06.720
a telomere researcher. What I want to do is to be an evolutionary biologist who upgrades the toolkit
link |
02:27:13.680
of evolutionary concepts so that we can see more clearly how organisms function and why.
link |
02:27:20.080
And telomeres was a proof of concept, right? That paper was a proof of concept that the
link |
02:27:26.400
toolkit in question works. As for the need to pursue it further, I think it's kind of
link |
02:27:36.560
absurd, and you're not the first person to say maybe that was the way to go about it. But the
link |
02:27:40.160
basic point is, look, the work was good. It turned out to be highly predictive. Frankly,
link |
02:27:47.360
the model of senescence that I presented is now widely accepted. And I don't feel any misgivings
link |
02:27:54.880
at all about having spent a year on it, said my piece and moved on to other things, which frankly,
link |
02:28:00.320
I think are bigger. I think there's a lot of good to be done and it would be a waste to get
link |
02:28:05.840
overly narrowly focused. There are so many ways through the space of science, and the most common
link |
02:28:14.960
way is to just publish a lot: publish a lot of papers, do incremental work, explore the
link |
02:28:21.040
space, kind of like ants looking for food. You're tossing out a bunch of different ideas. Some of
link |
02:28:26.880
them could be brilliant breakthrough ideas, Nature papers. Some of them are more conference kind of publications,
link |
02:28:31.840
all those kinds of things. Did you consider that kind of path in science?
link |
02:28:38.480
Of course I considered it, but I must say the experience of having my first encounter with
link |
02:28:44.640
the process of peer review be this story, which was frankly a debacle from one end to the other
link |
02:28:52.320
with respect to the process of publishing. It was not a very good sales pitch for
link |
02:28:59.040
trying to make a difference through publication. I would point out part of what I ran into and
link |
02:29:03.360
I think frankly part of what explains Carol's behavior is that in some parts of science,
link |
02:29:10.560
there is this dynamic where PIs parasitize their underlings and if you're very, very good,
link |
02:29:19.120
you rise to the level where one day instead of being parasitized, you get to parasitize others.
link |
02:29:25.120
Now I find that scientifically despicable and it wasn't the culture of the lab I grew up in
link |
02:29:30.320
at all. In fact, the PI of my lab, Dick Alexander, who's now gone, but who was an incredible
link |
02:29:37.600
mind and a great human being, he didn't want his graduate students working on the same topics he
link |
02:29:43.680
was on. Not because it wouldn't have been useful and exciting, but because in effect he did not
link |
02:29:50.000
want any confusion about who had done what, because he was a great mentor, and the idea is that
link |
02:29:57.680
a great mentor is not stealing ideas, and you don't want people thinking that they are. So
link |
02:30:04.400
anyway, my point would be I wasn't up for being parasitized. I don't like the idea that if you
link |
02:30:13.440
are very good, you get parasitized until it's your turn to parasitize others. That doesn't
link |
02:30:18.000
make sense to me. A crossing over from evolution into cellular biology may have exposed me to that.
link |
02:30:25.280
That may have been par for the course, but it doesn't make it acceptable. And I would also
link |
02:30:30.400
point out that my work falls in the realm of synthesis. My work generally takes evidence
link |
02:30:39.600
accumulated by others and places it together in order to generate hypotheses that explain sets
link |
02:30:47.920
of phenomena that are otherwise intractable. And I am not sure that that is best done with
link |
02:30:55.600
narrow publications that are read by few. And in fact, I would point to the very conspicuous
link |
02:31:02.320
example of Richard Dawkins, who I must say I've learned a tremendous amount from and I greatly
link |
02:31:07.120
admire. Dawkins has almost no publication record in the sense of peer reviewed papers in journals.
link |
02:31:15.600
What he's done instead is done synthetic work and he's published it in books which are not
link |
02:31:20.480
peer reviewed in the same sense. And frankly, I think there's no doubting his contribution
link |
02:31:25.920
to the field. So my sense is if Richard Dawkins can illustrate that one can make contributions
link |
02:31:33.600
to the field without using journals as the primary mechanism for distributing what you've come to
link |
02:31:39.760
understand, then it's obviously a valid mechanism and it's a far better one from the point of view
link |
02:31:44.560
of accomplishing what I want to accomplish. Yeah, it's really interesting. There is, of course,
link |
02:31:48.400
several levels. You can do the kind of synthesis and that does require a lot of both broad and
link |
02:31:53.920
deep thinking, which is exceptionally valuable. You could also, I'm working on something with Andrew
link |
02:31:59.120
Huberman now. You can also publish synthesis that's like review papers. They're exceptionally
link |
02:32:04.160
valuable for the communities. It brings the community together, tells a history, tells a
link |
02:32:09.600
story where the community has been. It paints a picture of where the path lies for the future.
link |
02:32:14.240
I think it's really valuable. And Richard Dawkins is a good example of somebody that does that in
link |
02:32:18.240
book form, and he kind of walks the line really interestingly. You have somebody like
link |
02:32:24.880
Neil deGrasse Tyson, who's more like a science communicator. Richard Dawkins sometimes is a
link |
02:32:29.920
science communicator, but he gets like close to the technical to where it's a little bit,
link |
02:32:35.520
it's not shying away from being really a contribution to science. No, he's made real
link |
02:32:42.480
contributions in book form. Yes, he really has. It's fascinating. Roger Penrose,
link |
02:32:49.040
I mean, similar kind of idea. That's interesting. That's interesting. Synthesis does not, especially
link |
02:32:54.480
synthesis work. Work that synthesizes ideas does not necessarily need to be peer reviewed.
link |
02:33:02.720
It's peer reviewed by peers reading it. Well, and reviewing it. That's it. It is reviewed by peers,
link |
02:33:11.520
which is not synonymous with peer review. And that's the thing is people don't understand
link |
02:33:15.360
that the two things aren't the same, right? Peer review is an anonymous process that happens
link |
02:33:20.960
before publication in a place where there is a power dynamic. I mean, the joke, of course,
link |
02:33:27.360
is that peer review is actually peer preview, right? Your biggest competitors get to see
link |
02:33:32.240
your work before it sees the light of day and decide whether or not it gets published. And
link |
02:33:38.480
you know, again, when your formative experience with the publication apparatus is the one I had
link |
02:33:43.280
with the telomere paper, there's no way that that seems like the right way to advance important
link |
02:33:49.200
ideas. And you know, what's the harm in publishing them so that your peers have to review them in
link |
02:33:55.360
public where they actually, if they're going to disagree with you, they actually have to take
link |
02:33:59.440
the risk of saying, I don't think this is right. And here's why, right? With their name on it.
link |
02:34:04.320
I'd much rather that. It's not that I don't want my work reviewed by peers, but I want it done
link |
02:34:08.320
in the open, you know, for the same reason you don't meet with dangerous people in private,
link |
02:34:13.360
you meet at the cafe, I want the work reviewed out in public.
link |
02:34:18.640
Can I ask you a difficult question? Sure.
link |
02:34:23.600
There is popularity in martyrdom. There's popularity in pointing out that the emperor has no clothes.
link |
02:34:31.760
That, that can become a drug in itself.
link |
02:34:40.880
I've confronted this in scientific work I've done at MIT,
link |
02:34:46.800
where there are certain things they're not done well. People are not being the best version of
link |
02:34:51.360
themselves. And particular aspects of a particular field are in need of a revolution.
link |
02:35:03.200
And part of me wanted to point that out versus doing the hard work of publishing papers and
link |
02:35:12.480
doing the revolution, basically just pointing out, look, you guys are doing it wrong and then
link |
02:35:18.320
just walking away. Are you aware of the drug of martyrdom, of the ego involved in it,
link |
02:35:29.680
that it can cloud your thinking? Probably one of the best questions I've ever been asked.
link |
02:35:35.680
So let me, let me try to sort it out. First of all, we are all mysteries to ourselves at some
link |
02:35:43.040
level. So there's possible there's stuff going on in me that I'm not aware of that's driving.
link |
02:35:48.160
But in general, I would say one of my better strengths is that I'm not especially ego driven.
link |
02:35:55.120
I have an ego. I clearly think highly of myself, but it is not driving me. I do not crave that
link |
02:36:01.680
kind of validation. I do crave certain things. I do love a good Eureka moment. There is something
link |
02:36:08.640
great about it. And there's something even better about the phone calls you make next when you share
link |
02:36:12.960
it. It's pretty fun. I really like it. I also really like my subject. There's something about
link |
02:36:22.240
a walk in the forest when you have a toolkit in which you can actually look at creatures and see
link |
02:36:27.920
something deep. I like it. That drives me. And I could entertain myself for the rest of my life.
link |
02:36:37.200
If I was somehow isolated from the rest of the world, but I was in a place that was biologically
link |
02:36:41.520
interesting, you know, hopefully I would be with people that I love and pets that I love,
link |
02:36:47.600
believe it or not. But, you know, if I were in that situation and I could just go out every day
link |
02:36:52.160
and look at cool stuff and figure out what it means, I could be all right with that.
link |
02:36:56.560
So I'm not heavily driven by the ego thing as you put it. So I am completely the same except
link |
02:37:05.760
that instead of the pets, I would put robots. But so it's not, it's the Eureka. It's the exploration
link |
02:37:11.920
of the subject that brings you joy and fulfillment. It's not the ego.
link |
02:37:17.680
Well, there's more to say. No, I really don't think it's the ego thing. I will say I also have
link |
02:37:22.640
kind of a secondary passion for robot stuff. I've never made anything useful, but I do believe,
link |
02:37:28.880
I believe I found my calling. But if this wasn't my calling, my calling would have been
link |
02:37:33.680
inventing stuff. I really enjoy that too. So I get what you're saying about the analogy quite
link |
02:37:39.280
well. As far as the martyrdom thing, I understand the drug you're talking about, and I've seen it
link |
02:37:49.120
more than I felt it. I do, if I'm just to be completely candid and that this question is
link |
02:37:54.800
so good, it deserves a candid answer, I do like the fight. I like fighting against people I don't
link |
02:38:04.080
respect and I like winning. But I have no interest in martyrdom. One of the reasons I have no
link |
02:38:11.120
interest in martyrdom is that I'm having too good a time. I very much enjoy my life and
link |
02:38:17.520
such a good answer. I have a wonderful wife. I have amazing children. I live in a lovely place.
link |
02:38:25.920
I don't want to exit any quicker than I have to. That said, I also believe in things and a
link |
02:38:32.560
willingness to exit if that's the only way is not exactly inviting martyrdom, but it is an
link |
02:38:38.480
acceptance that fighting is dangerous and going up against powerful forces means who knows what
link |
02:38:44.640
will come of it, right? I don't have the sense that the thing is out there that used to kill
link |
02:38:50.400
inconvenient people. I don't think that's how it's done anymore. It's primarily done through
link |
02:38:54.800
destroying them reputationally, which is not something I relish the possibility of. But
link |
02:39:01.360
there is a difference between a willingness to face the hazard rather than a desire to face it
link |
02:39:11.280
because of the thrill, right? For me, the thrill is in fighting when I'm in the right. I think I
link |
02:39:20.320
feel that that is a worthwhile way to take what I see as the kind of brutality that is built into
link |
02:39:28.800
men and to channel it to something useful, right? If it is not channeled into something useful,
link |
02:39:35.120
it will be channeled into something else. So it damn well better be channeled into something
link |
02:39:38.400
useful. It's not motivated by fame and popularity, those kinds of things. You're just making me
link |
02:39:45.360
realize that enjoying the fight, fighting the powerful and idea that you believe is right,
link |
02:39:55.280
is a kind of optimism for the human spirit. It's like we can win this.
link |
02:40:03.760
It's almost like you're turning into action, into personal action, this hope for humanity
link |
02:40:13.360
by saying we can win this. That makes you feel good about the rest of humanity. If there's people
link |
02:40:23.760
like me, then we're going to be okay. Even if your ideas might be wrong or not, but if you believe
link |
02:40:32.080
they're right and you're fighting the powerful against all odds, then we're going to be okay.
link |
02:40:42.320
If I were to project, because I enjoy the fight as well, I think that's the way I,
link |
02:40:48.240
that's what brings me joy, is it's almost like it's optimism in action.
link |
02:40:56.000
Well, it's a little different for me. And again, I recognize you. You're familiar,
link |
02:41:01.600
your construction is familiar, even if it isn't mine, right? For me, I actually expect us not to
link |
02:41:08.880
be okay. And I'm not okay with that. But what's really important, if I feel like what I've said is
link |
02:41:16.240
I don't know of any reason that it's too late. As far as I know, we could still save humanity and
link |
02:41:22.000
we could get to the fourth frontier or something akin to it. But I expect us not to, I expect us
link |
02:41:27.280
to fuck it up, right? I don't like that thought, but I've looked into the abyss and I've done my
link |
02:41:33.040
calculations and the number of ways we could not succeed are many and the number of ways that we
link |
02:41:39.680
could manage to get out of this very dangerous phase of history is small. But the thing I don't
link |
02:41:45.360
have to worry about is that I didn't do enough, right? That I was a coward, that I, you know,
link |
02:41:54.080
prioritized other things. At the end of the day, I think I will be able to say to myself, and in
link |
02:42:00.000
fact, the thing that allows me to sleep is that when I saw clearly what needed to be done, I tried
link |
02:42:05.920
to do it to the extent that it was in my power. And, you know, if we fail, as I expect us to,
link |
02:42:12.640
I can't say, well, geez, that's on me, you know, and, you know, frankly, I regard what I just said
link |
02:42:18.000
to you as something like a personality defect, right? I'm trying to free myself from the sense
link |
02:42:24.080
that this is my fault. On the other hand, my guess is that personality defect is probably
link |
02:42:29.520
good for humanity, right? It's a good one for me to have it, you know, the externalities of it
link |
02:42:34.960
are positive, so I don't feel too bad about it. Yeah, that's funny. So, yeah, our perspectives on the
link |
02:42:41.760
world are different, but they rhyme, like you said, because I've also looked into the abyss, and it
link |
02:42:48.960
kind of smiled nervously back. So, I have a more optimistic sense that we're going to win more than
link |
02:42:56.400
likely we're going to be okay. Right there with you, brother. I'm hoping you're right. I'm expecting
link |
02:43:02.160
me to be right. But back to Eric, you had a wonderful conversation. In that conversation,
link |
02:43:08.720
he played the big brother role, and he was very happy about it. He was self congratulatory about
link |
02:43:15.120
it. I mean, can you talk to the ways in which Eric made you a better man throughout your life?
link |
02:43:23.920
Yeah, hell yeah. I mean, for one thing, you know, Eric and I are interestingly similar in some ways
link |
02:43:30.880
and radically different in some other ways. And, you know, it's often a matter of fascination to
link |
02:43:35.760
people who know us both because almost always people meet one of us first, and they sort of get
link |
02:43:40.160
used to that thing, and then they meet the other, and it throws the model into chaos. But, you know,
link |
02:43:44.880
I had a great advantage, which is I came second, right? So, although it was kind of a pain in the
link |
02:43:51.280
ass to be born into a world that had Eric in it because he's a force of nature, right? It was
link |
02:43:56.080
also terrifically useful because, A, he was a very awesome older brother who, you know, made
link |
02:44:04.160
interesting mistakes, learned from them and conveyed the wisdom of what he had discovered,
link |
02:44:08.720
and that was, you know, I don't know who else ends up so lucky as to have that kind of person
link |
02:44:16.000
blazing the trail. It also probably, you know, my hypothesis for what birth order effects are
link |
02:44:24.720
is that they're actually adaptive, right? That the reason that a second born is different than a
link |
02:44:30.800
first born is that they're not born into a world with the same niches in it, right? And so, the
link |
02:44:35.840
thing about Eric is he's been completely dominant in the realm of fundamental thinking, right? Like,
link |
02:44:44.400
what he's fascinated by is the fundamental of fundamentals, and he's excellent at it, which
link |
02:44:49.840
meant that I was born into a world where somebody was becoming excellent in that, and for me to be
link |
02:44:54.640
anywhere near the fundamental of fundamentals was going to be pointless, right? I was going to be
link |
02:44:59.040
playing second fiddle forever. And I think that that actually drove me to the other end of the
link |
02:45:03.920
continuum between fundamental and emergent. And so, I became fascinated with biology and have
link |
02:45:09.520
been since I was three years old, right? I think Eric drove that, and I have to thank him for it
link |
02:45:16.880
because, you know, I mean... Oh, I never thought of... So, Eric drives towards the fundamental
link |
02:45:23.920
and you drive towards the emergent, the physics and the biology, right? Opposite ends of the
link |
02:45:29.360
continuum. And, as Eric would be quick to point out if he was sitting here, I treat the emergent
link |
02:45:35.440
layer, I seek the fundamentals in it, which is sort of an echo of Eric's style of thinking,
link |
02:45:40.160
but applied to the very far complexity. He overpoweringly argues for the importance of physics,
link |
02:45:50.240
the fundamental of the fundamental. He's not here to defend himself. Is there an argument to be made
link |
02:45:59.120
against that? Or biology? The emergent, the study of the thing that emerged when the fundamental
link |
02:46:07.760
acts at the universal, at the cosmic scale and builds the beautiful thing that is us,
link |
02:46:13.120
is much more important. Like, psychology, biology, the systems that we're actually
link |
02:46:20.560
interacting with in this human world are much more important to understand than low level
link |
02:46:28.400
theories of quantum mechanics and general relativity. Yeah, I can't say that one is more
link |
02:46:34.960
important. I think there's probably a different time scale. I think understanding the emergent
link |
02:46:40.240
layer is more often useful, but the bang for the buck at the far fundamental layer may be much
link |
02:46:47.760
greater. So, for example, the fourth frontier, I'm pretty sure it's going to have to be fusion
link |
02:46:54.480
powered. I don't think anything else will do it. But once you had fusion power, assuming we didn't
link |
02:46:59.520
just dump fusion power on the market the way we would be likely to if it was invented usefully
link |
02:47:04.320
tomorrow, but if we had fusion power and we had a little bit more wisdom than we have,
link |
02:47:10.800
you could do an awful lot. And that's not going to come from people like me who, you know, look
link |
02:47:16.560
at dynamics. Can I argue against that, please? I think the way to unlock fusion power is through
link |
02:47:26.160
artificial intelligence. So, I think most of the breakthrough ideas in the future of science
link |
02:47:33.600
will be developed by AI systems. And I think in order to build intelligent AI systems,
link |
02:47:38.960
you have to be a scholar of the fundamental of the emergent of biology, of the neuroscience,
link |
02:47:47.360
of the way the brain works, of intelligence, of consciousness, and those things at least directly
link |
02:47:53.600
don't have anything to do with physics. Well, you're making me a little bit sad because my
link |
02:47:59.040
addiction to the aha moment thing is incompatible with outsourcing that job.
link |
02:48:07.360
I don't want to outsource that thing to the AI. Actually, I've seen this happen before,
link |
02:48:13.120
because some of the people who trained Heather and me were phylogenetic systematists,
link |
02:48:19.200
Arnold Kluge in particular. And the problem with systematics is that to do it right
link |
02:48:26.320
when your technology is primitive, you have to be deeply embedded in the philosophical
link |
02:48:31.920
and the logical, right? Your method has to be based in the highest level of rigor.
link |
02:48:40.080
Once you can sequence genes, genes can spit so much data at you that you can overwhelm
link |
02:48:45.600
high quality work with just lots and lots and lots of automated work. And so in some
link |
02:48:51.360
sense, there's a generation of phylogenetic systematists who are the last of the greats
link |
02:48:56.560
because what's replacing them is sequencers. So anyway, maybe you're right about the AI,
link |
02:49:02.960
and I guess I'm... Makes you sad. I like figuring stuff out.
link |
02:49:07.840
Is there something that you disagree with Eric on? You've been trying to convince him,
link |
02:49:12.960
you failed so far, but you will eventually succeed? You know, that is a very long list.
link |
02:49:20.480
Eric and I have tensions over certain things that recur all the time, and I'm trying to think
link |
02:49:27.680
what would be the ideal... Is it in the space of science, in the space of philosophy, politics,
link |
02:49:33.440
family, love, robots? Well, all right, let me... I'm just gonna use your podcast to
link |
02:49:43.200
wage a bit of a cryptic war and just say there are many places in which I believe
link |
02:49:48.240
that I have butted heads with Eric over the course of decades, and I have seen him move in
link |
02:49:53.760
my direction substantially over time. You've been winning. He might win a battle here or there,
link |
02:49:59.280
but you've been winning the war. I would not say that. It's quite possible he could say the same
link |
02:50:03.920
thing about me, and in fact, I know that it's true. There are places where he's absolutely
link |
02:50:07.600
convinced me, but in any case, I do believe it's at least... It may not be a totally even fight,
link |
02:50:13.040
but it's more even than some will imagine. There are things I say that drive him nuts.
link |
02:50:23.200
When something... You heard me talk about the... What was it? It was the autopilot that seems to
link |
02:50:32.800
be putting a great many humans in needless medical jeopardy over the COVID 19 pandemic,
link |
02:50:39.200
and my feeling is we can say this almost for sure. Anytime you have the appearance of some
link |
02:50:48.080
captured gigantic entity that is censoring you on YouTube and handing down dictates from the
link |
02:50:55.200
WHO and all of that, it is sure that there will be a certain amount of collusion. There's gonna
link |
02:51:01.520
be some embarrassing emails in some places that are gonna reveal some shocking connections,
link |
02:51:05.920
and then there's gonna be an awful lot of emergence that didn't involve collusion,
link |
02:51:11.040
right, in which people were doing their little part of a job and something was emergent,
link |
02:51:14.320
and you never know what the admixture is. How much are we looking at actual collusion,
link |
02:51:19.440
and how much are we looking at an emergent process, but you should always walk in with
link |
02:51:23.120
the sense that it's gonna be a ratio, and the question is, what is the ratio in this case?
link |
02:51:27.520
I think this drives Eric nuts because he is very focused on the people. I think he's focused
link |
02:51:33.280
on the people who have a choice and make the wrong one, and anyway, he may...
link |
02:51:38.480
The question of the ratio is a distraction to that.
link |
02:51:41.680
I think he takes it almost as an offense because it grants cover to people who are harming others,
link |
02:51:51.280
and I think it offends him morally, and if I had to say, I would say it alters his judgment on the
link |
02:52:00.400
matter, but anyway, certainly useful just to leave open the two possibilities and say it's a ratio,
link |
02:52:07.120
but we don't know which one. Brother to brother, do you love the guy?
link |
02:52:14.160
Hell yeah, hell yeah, and I'd love him if he was just my brother, but he's also awesome,
link |
02:52:19.120
so I love him and I love him for who he is. So let me ask you about, back to your book,
link |
02:52:25.760
Hunter Gatherer's Guide to the 21st Century. I can't wait, both for the book and the videos
link |
02:52:32.320
you do on the book, that's really exciting that there's a structured organized way to present this.
link |
02:52:39.600
A kind of, from an evolutionary biology perspective, a guide for the future,
link |
02:52:46.560
using our past as the fundamental of the emergent way to present a picture of the future. Let me
link |
02:52:56.240
ask you about something that I think about a little bit in this modern world, which is monogamy.
link |
02:53:07.520
So I personally value monogamy. One girl, ride or die.
link |
02:53:12.240
There you go. Ride or die, that's exactly it. But that said, I don't know the right way to approach
link |
02:53:22.160
this, but from an evolutionary biology perspective or from just looking at modern society,
link |
02:53:30.000
that seems to be an idea that's not, what's the right way to put it, flourishing.
link |
02:53:35.120
It's waning. It's waning. So I suppose,
link |
02:53:42.880
based on your reaction, you're also a supporter of monogamy or the value of monogamy. Are you
link |
02:53:48.240
and I just delusional? What can you say about monogamy from the context of your book,
link |
02:53:58.080
from the context of evolutionary biology, from the context of being human?
link |
02:54:02.480
Yeah. I can say that I fully believe that we are not actually enlightened and that although monogamy
link |
02:54:08.320
is waning, that it is not waning because there is a superior system. It is waning for predictable
link |
02:54:13.840
other reasons. So let us just say, there is a lot of pre/trans fallacy here where
link |
02:54:23.440
people go through a phase where they recognize that actually we know a lot about the evolution
link |
02:54:29.200
of monogamy. And we can tell from the fact that humans are somewhat sexually dimorphic,
link |
02:54:36.560
that there has been a lot of polygyny in human history. And in fact, most of human history
link |
02:54:42.800
was largely polygynous. But it is also the case that most of the people on earth today belong
link |
02:54:50.160
to civilizations that are at least nominally monogamous and have practiced monogamy. And
link |
02:54:54.640
that is not anti evolutionary. What that is, is part of what I mentioned before where human
link |
02:55:01.680
beings can swap out their software program. And different mating patterns are favored
link |
02:55:09.760
in different periods of history. So I would argue that the benefit of monogamy, the primary one
link |
02:55:15.840
that drives the evolution of monogamous patterns in humans, is that it brings all adults into
link |
02:55:22.560
child rearing. Now the reason that that matters is because human babies are very labor intensive.
link |
02:55:29.680
In order to raise them properly having two parents is a huge asset and having more than two parents
link |
02:55:35.280
having an extended family also very important. But what that means is that for a population
link |
02:55:43.280
that is expanding, a monogamous mating system makes sense. It makes sense because it means
link |
02:55:49.360
that the number of offspring that can be raised is elevated. It's elevated because all potential
link |
02:55:55.680
parents are involved in parenting. Whereas if you sideline a bunch of males by having a polygynous
link |
02:56:00.960
system in which one male has many females, which is typically the way that works, what you do is
link |
02:56:05.680
you sideline all those males, which means the total amount of parental effort is lower. And
link |
02:56:10.080
the population can't grow. So what I'm arguing is that you should expect to see populations that
link |
02:56:17.920
face the possibility of expansion endorse monogamy, and at the point that they have reached
link |
02:56:22.880
carrying capacity, you should expect to see polygyny break back out. And what we are seeing
link |
02:56:27.920
is a kind of false sophistication around polyamory, which will end up breaking down into
link |
02:56:33.840
polygyny, which will not be in the interest of most people. Really the only people whose
link |
02:56:38.400
interest it could be argued to be in would be the very small number of males at the top who have
link |
02:56:44.560
many partners and everybody else suffers. Is it possible to make the argument, if we focus in on
link |
02:56:52.000
those males at the quote unquote top with many female partners, is it possible to say that
link |
02:56:59.360
that's a suboptimal life, that a single partner is the optimal life? Well, it depends what you
link |
02:57:05.840
mean. I have a feeling that you and I wouldn't have to go very far to figure out that what might
link |
02:57:12.720
be evolutionarily optimal doesn't match my values as a person and I'm sure it doesn't match yours
link |
02:57:18.480
either. Can we try to dig into that gap between those two? Sure. I mean, we can do it very simply.
link |
02:57:29.040
Selection might favor your engaging in war against a defenseless enemy or genocide.
link |
02:57:36.480
Right? It's not hard to figure out how that might put your genes at advantage. I don't know about
link |
02:57:43.600
you, Lex. I'm not getting involved in no genocide. It's not going to happen. I won't do it. I will
link |
02:57:48.560
do anything to avoid it. So some part of me has decided that my conscious self and the values
link |
02:57:55.120
that I hold trump my evolutionary self. And once you figure out that in some extreme case,
link |
02:58:02.960
that's true. And then you realize that that means it must be possible in many other cases and you
link |
02:58:07.600
start going through all of the things that selection would favor and you realize that a fair
link |
02:58:11.040
fraction of the time, actually, you're not up for this. You don't want to be some robot on a mission
link |
02:58:17.040
that involves genocide when necessary. You want to be your own person and accomplish things that
link |
02:58:22.000
you think are valuable. And so, among those things I'm not advocating: let's suppose you were in a position
link |
02:58:32.000
to be one of those males at the top of a polygynous system. We both know why that would be rewarding,
link |
02:58:37.200
right? But we also both recognize it. Do we? Yeah, sure. Lots of sex? Yeah. Okay. What else?
link |
02:58:43.600
Lots of sex and lots of variety, right? So look, every red blooded American slash Russian male
link |
02:58:51.360
could understand why that's appealing, right? On the other hand, it is up against an alternative,
link |
02:58:57.680
which is having a partner with whom one is bonded, especially closely, right?
link |
02:59:05.280
Right. And so A. Love. Right. Well, I don't want to straw man the polygyny position. Obviously,
link |
02:59:15.200
polygyny is complex and there's nothing that stops a man presumably from loving multiple partners and
link |
02:59:22.400
from them loving him back. But in terms of, if love is your thing, there's a question about,
link |
02:59:27.040
okay, what is the quality of love if it is divided over multiple partners, right? And what is the net
link |
02:59:34.400
consequence for love in a society when multiple people will be frozen out for every individual
link |
02:59:40.640
male in this case who has it? And what I would argue is, and you know, this is weird to even talk
link |
02:59:49.360
about, but this is partially me just talking for personal experience. I think there actually is
link |
02:59:54.000
a monogamy program in us and it's not automatic. But if you take it seriously, you can find it and
link |
03:00:03.520
frankly, marriage and it doesn't have to be marriage, but whatever it is that results in the
link |
03:00:07.760
lifelong bond with a partner has gotten a very bad rap, you know, it's the butt of too many jokes.
link |
03:00:14.240
But the truth is it's hugely rewarding. It's not easy. But if you know that you're looking for
link |
03:00:21.600
something, right? If you know that the objective actually exists and it's not some utopian fantasy
link |
03:00:25.680
that can't be found, if you know that there's some real world, you know, warts and all version of it,
link |
03:00:32.320
then you might actually think, hey, that is something I want and you might pursue it. And my
link |
03:00:36.080
guess is you'd be very happy when you find it. Yeah, I think there is getting to the fundamentals
link |
03:00:40.800
of the emergent. I feel like there is some kind of physics of love. So one, there's a conservation
link |
03:00:46.400
thing going on. So if you have like many partners, yeah, in theory, you should be able to love all
link |
03:00:53.120
of them deeply. But it seems like in reality, that love gets split. Yep. Now, there's another
link |
03:00:59.920
law that's interesting in terms of monogamy. I don't know if it's at the physics level, but
link |
03:01:05.280
if you are in a monogamous relationship by choice, and almost in slight rebellion
link |
03:01:13.280
to social norms, that's much more powerful. Like if you choose that one partnership,
link |
03:01:20.800
that's also more powerful. If everybody's in a monogamy, there's this pressure to be
link |
03:01:25.680
married in this pressured society, that's different because that's almost like a constraint on your
link |
03:01:30.960
freedom that is enforced by something other than your own ideals. It's by somebody else.
link |
03:01:37.600
When you yourself choose to, I guess, create these constraints, that enriches that love.
link |
03:01:44.800
So there's some kind of love function, like E equals MC squared, but for love, that I feel like
link |
03:01:51.120
if you have less partners and it's done by choice, they can maximize that. And that love can transcend
link |
03:01:58.480
the biology, transcend the evolutionary biology forces that have to do much more with survival
link |
03:02:06.080
and all those kinds of things. It can transcend to take us to a richer experience, which we have
link |
03:02:12.160
the luxury of having, exploring, of happiness, of joy, of fulfillment, all those kinds of things.
link |
03:02:19.280
Totally agree with this. And there's no question that by choice, when there are other choices,
link |
03:02:26.640
imbues it with meaning that it might not otherwise have. I would also say, I'm really
link |
03:02:34.400
struck by, and I have a hard time not feeling terrible sadness over what younger people are
link |
03:02:43.920
coming to think about this topic. I think they're missing something so important and so hard to
link |
03:02:50.240
phrase that, and they don't even know that they're missing it. They might know that they're unhappy,
link |
03:02:55.760
but they don't understand what it is they're even looking for because nobody's really been
link |
03:02:59.680
honest with them about what their choices are. And I have to say, if I was a young person or if I
link |
03:03:05.440
was advising a young person, which I used to do, again, a million years ago when I was a college
link |
03:03:09.760
professor four years ago, but I used to talk to students, I knew my students really well,
link |
03:03:14.880
and they would ask questions about this. And they were always curious because Heather and I
link |
03:03:18.320
seemed to have a good relationship and many of them knew both of us. So they would talk to us
link |
03:03:22.800
about this. If I was advising somebody, I would say, do not bypass the possibility that what you
link |
03:03:31.280
are supposed to do is find somebody worthy, somebody who can handle it, somebody who you are
link |
03:03:38.720
compatible with, and that you don't have to be perfectly compatible. It's not about dating until
link |
03:03:43.120
you find the one. It's about finding somebody whose underlying values and viewpoint are complementary
link |
03:03:50.480
to yours, sufficient that you fall in love. If you find that person, opt out together. Get out
link |
03:03:59.040
of this damn system that's telling you what's sophisticated to think about love and romance
link |
03:04:03.600
and sex, ignore it together, right? That's the key. And I believe you'll end up laughing in the end
link |
03:04:11.440
if you do it, you'll discover, wow, that's a hellscape that I opted out of. And this thing I
link |
03:04:18.240
opted into, complicated, difficult, worth it. Nothing that's worth it is ever not difficult.
link |
03:04:25.680
So we should even just skip the whole statement about difficult. Yeah. All right. I want to be
link |
03:04:31.760
honest. It's not like, oh, it's nonstop joy. No, it's fricking complex, but worth it? No question
link |
03:04:39.600
in my mind. Is there advice outside of love that you can give to young people? You were a million
link |
03:04:46.160
years ago, a professor. Is there advice you can give to young people, high schoolers, college
link |
03:04:52.560
students about career, about life? Yeah, but they're not going to like it because it's not easy to
link |
03:04:59.520
operationalize. And this was a problem when I was a college professor too. People would ask me
link |
03:05:04.000
what they should do. Should they go to graduate school? I had almost nothing useful to say
link |
03:05:08.240
because the job market and the market of pre job training and all of that, these things are all
link |
03:05:16.000
so distorted and corrupt that I didn't want to point anybody to anything, right? Because it's
link |
03:05:23.760
all broken. And I would tell them that. But I would say that results in a kind of meta level
link |
03:05:30.720
advice that I do think is useful. You don't know what's coming. You don't know where the opportunities
link |
03:05:37.040
will be. You should invest in tools rather than knowledge, right? To the extent that you can
link |
03:05:43.600
do things, you can repurpose that no matter what the future brings to the extent that, you know,
link |
03:05:49.040
if you as a robot guy, right, you've got the skills of a robot guy. Now, if civilization
link |
03:05:56.160
failed and the stuff of robot building disappeared with it, you'd still have the mind of a robot guy
link |
03:06:02.480
and the mind of a robot guy can retool around all kinds of things, whether you're, you know,
link |
03:06:07.280
forced to work with, you know, fibers that are made into ropes, right? Your mechanical
link |
03:06:13.920
mind would be useful in all kinds of places. So invest in tools like that that can be easily
link |
03:06:18.480
repurposed and invest in combinations of tools, right? If civilization keeps limping along,
link |
03:06:28.800
you're going to be up against all sorts of people who have studied the things that you
link |
03:06:32.400
studied, right? If you think, hey, computer programming is really, really cool. And you
link |
03:06:36.240
pick up computer programming, guess what? You just entered a large group of people who have that skill
link |
03:06:41.280
and many of them will be better than you, almost certainly. On the other hand, if you combine
link |
03:06:46.880
that with something else that's very rarely combined with it, if you have, I don't know,
link |
03:06:52.720
if it's carpentry and computer programming, if you take combinations of things that are,
link |
03:06:58.880
even if they're both common, but they're not commonly found together,
link |
03:07:02.080
then those combinations create a rarefied space where you inhabit it. And even if the things
link |
03:07:08.240
don't even really touch, but nonetheless, they create a mind in which the two things are live
link |
03:07:13.600
and you can move back and forth between them and, you know, step out of your own perspective by
link |
03:07:18.400
moving from one to the other, that will increase what you can see and the quality of your tools.
link |
03:07:24.320
And so anyway, that isn't useful advice. It doesn't tell you whether you should go to graduate
link |
03:07:27.920
school or not, but it does tell you the one thing we can say for certain about the future is that
link |
03:07:33.680
it's uncertain and so prepare for it. And like you said, there's cool things to be discovered
link |
03:07:38.800
in the intersection of fields and ideas. And I would look at grad school that way, actually,
link |
03:07:45.200
if you do go. Or I see, I mean, this is such a, like every course in grad school, undergrad too,
link |
03:07:54.480
was like this little journey that you're on that explores a particular field. And it's not
link |
03:08:01.040
immediately obvious how useful it is, but it allows you to discover intersections between that thing
link |
03:08:09.360
and some other thing. So you're bringing to the table this, these pieces of knowledge,
link |
03:08:16.480
some of which when intersected might create a niche that's completely novel, unique and will
link |
03:08:22.400
bring you joy out of that. I mean, I took a huge number of courses in theoretical computer science.
link |
03:08:28.000
Most of them seem useless, but they totally changed the way I see the world in ways that are,
link |
03:08:33.840
I'm not prepared to, or it's a little bit difficult to, kind of make explicit, but
link |
03:08:40.880
taken together, they've allowed me to see, for example, the world of robotics totally different
link |
03:08:48.240
and different from many of my colleagues and friends and so on. And I think that's a good
link |
03:08:53.680
way to see if you go to grad school as an opportunity to explore intersections of fields,
link |
03:09:01.840
even if the individual fields seem useless. Yeah. And useless doesn't mean useless, right?
link |
03:09:07.040
Useless means not directly applicable, but a good useless course can be the best one you ever took.
link |
03:09:12.560
Right. Yeah. I took a course on James Joyce, and that was truly useless.
link |
03:09:21.120
Well, I took, I took immunobiology in the medical school when I was at Penn as,
link |
03:09:28.720
I guess I would have been a freshman or a sophomore. I wasn't supposed to be in this class.
link |
03:09:32.960
It blew my goddamn mind and it still does, right? I mean, we had this, I don't even know who it was,
link |
03:09:39.520
but we had this great professor who was like a highly placed in the world of immunobiology.
link |
03:09:44.000
You know, the course is called immunobiology, not immunology. Immunobiology, it had the right
link |
03:09:49.680
focus. And as I recall it, the professor stood sideways to the chalkboard staring off into space,
link |
03:09:57.200
literally stroking his beard with this bemused look on his face through the entire lecture.
link |
03:10:04.160
And, you know, you had all these medical students who were so furiously writing notes
link |
03:10:07.040
that I don't even think they were noticing the person delivering this thing. But,
link |
03:10:10.560
you know, I got what this guy was smiling about. It was like so, what he was describing,
link |
03:10:15.600
you know, adaptive immunity is so marvelous, right? That it was like almost a privilege
link |
03:10:20.080
to even be saying it to a room full of people who were listening, you know? But anyway,
link |
03:10:24.320
yeah, I took that course and, you know, lo and behold, COVID. That's not going to be useful.
link |
03:10:28.800
Well, yeah, suddenly it's front and center. And wow, am I glad I took it. But anyway, yeah,
link |
03:10:35.920
useless courses are great. And actually, Eric gave me one of the greater pieces of advice,
link |
03:10:40.880
at least for college, that anyone's ever given, which was don't worry about the prereqs. Take it
link |
03:10:46.320
anyway, right? But now I don't even know if kids can do this now because the prereqs are now enforced
link |
03:10:51.760
by a computer. But back in the day, if you didn't mention that you didn't have the prereqs, nobody
link |
03:10:58.240
stopped you from taking the course. And what he told me, which I didn't know was that often the
link |
03:11:02.880
advanced courses are easier in some way. The material's complex. But, you know, it's not like
link |
03:11:10.880
intro bio where you're learning a thousand things at once, right? It's like focused on
link |
03:11:15.520
something. So if you dedicate yourself, you can, you can pull it off.
link |
03:11:18.640
Yeah, stay with an idea for many weeks at a time. And it's ultimately rewarding,
link |
03:11:22.720
and not as difficult as it looks. Can I ask you a ridiculous question?
link |
03:11:27.120
Please. What do you think is the meaning of life? Well,
link |
03:11:36.560
I feel terrible having to give you the answer. I realize you asked the question,
link |
03:11:40.160
but if I tell you, you're going to again feel bad. I don't want to do that. But
link |
03:11:44.720
look, it's not just going to be a disappointment. No, it's going to be a horror, right?
link |
03:11:50.240
Because we actually know the answer to the question. Oh, no.
link |
03:11:52.960
So it's completely meaningless. There is nothing that we can do that escapes the heat death of
link |
03:12:00.160
the universe or whatever it is that happens at the end. And we're not going to make it there
link |
03:12:04.320
anyway. But even if you were optimistic about our ability to escape every existential hazard
link |
03:12:12.240
indefinitely, ultimately, it's all for nothing. We know it, right?
link |
03:12:15.600
Right. That said, once you stare into that abyss, and then it stares back and laughs or
link |
03:12:22.560
whatever happens, right? Then the question is, okay, given that, can I relax a little bit,
link |
03:12:28.880
right? And figure out, well, what would make sense if that were true, right? And I think
link |
03:12:34.800
there's something very clear to me. I think if you do all of the, you know, if I just take the
link |
03:12:39.680
values that I'm sure we share and extrapolate from them, I think the following thing is actually
link |
03:12:45.440
a moral imperative. Being a human and having opportunity is absolutely fucking awesome,
link |
03:12:53.760
right? A lot of people don't make use of the opportunity and a lot of people don't have
link |
03:12:56.960
opportunity, right? They get to be human, but they're too constrained by keeping a roof over
link |
03:13:01.520
their heads to really be free. But being a free human is fantastic. And being a free human on
link |
03:13:08.800
this beautiful planet, crippled as it may be, is unparalleled. I mean, what could be better? How
link |
03:13:15.440
lucky are we that we get that, right? So if that's true, that it is awesome to be human and to be
link |
03:13:21.680
free, then surely it is our obligation to deliver that opportunity to as many people as we can.
link |
03:13:28.960
And how do you do that? Well, I think I know what job one is. Job one is we have to get sustainable.
link |
03:13:35.920
The way to get the maximum number of humans to have that opportunity to be both here and free
link |
03:13:41.920
is to make sure that there isn't a limit on how long we can keep doing this. That effectively
link |
03:13:46.800
requires us to reach sustainability. And then at sustainability, you could have a horror show of
link |
03:13:53.760
sustainability, right? You could have a totalitarian sustainability. That's not the objective. The
link |
03:13:59.680
objective is to liberate people. And so the question, the whole fourth frontier question,
link |
03:14:04.320
frankly, is how do you get to a sustainable and indefinitely sustainable state
link |
03:14:10.560
in which people feel liberated, in which they are liberated to pursue the things that actually
link |
03:14:15.680
matter, to pursue beauty, truth, compassion, connection, all of those things that we could
link |
03:14:24.160
list as unalloyed goods. Those are the things that people should be most liberated to do in
link |
03:14:29.680
a system that really functions. And anyway, my point is, I don't know how precise that
link |
03:14:35.920
calculation is, but I'm pretty sure it's not wrong. It's accurate enough. And if it is accurate
link |
03:14:41.360
enough, then the point is, okay, well, there's no ultimate meaning, but the proximate meaning
link |
03:14:46.400
is that one. How many people can we get to have this wonderful experience that we've gotten to
link |
03:14:50.880
have, right? And there's no way that's so wrong that if I invest my life in it, that I'm making
link |
03:14:57.120
some big error for that. Life is awesome. And we want to spread the awesome as much as possible.
link |
03:15:03.360
Yeah, you sum it up that way, spread the awesome, spread the awesome. So that's the fourth frontier.
link |
03:15:07.760
And if that fails, if the fourth frontier fails, the fifth frontier will be defined by robots.
link |
03:15:12.800
And hopefully they'll learn the lessons of the mistakes that the humans made and build a better
link |
03:15:19.040
world. I hope they're very happy here and that they do a better job with the place than we did.
link |
03:15:23.120
But I can't believe it took us this long to talk. As I mentioned to you before, that we haven't
link |
03:15:32.560
actually spoken, I think at all. And I've always felt that we're already friends. I don't know how
link |
03:15:39.680
that works, because I've listened to your podcast a lot. I've also sort of loved your brother. And
link |
03:15:46.080
so it was like we've known each other for the longest time. And I hope we can be friends and
link |
03:15:52.240
talk often again. And I hope that you get a chance to meet some of my robot friends as well
link |
03:15:57.760
and fall in love. And I'm so glad that you love robots as well. So we get to share in that love.
link |
03:16:04.000
So I can't wait for us to interact together. So we went from talking about some of the worst
link |
03:16:10.640
failures of humanity to some of the most beautiful aspects of humanity. What else can you ask for
link |
03:16:17.840
from a conversation? Thank you so much for talking to me. You know, Lex, I feel the same way towards
link |
03:16:22.800
you. And I really appreciate it. This has been a lot of fun. And I'm looking forward to our next one.
link |
03:16:27.680
Thanks for listening to this conversation with Bret Weinstein. And thank you to
link |
03:16:31.200
the Jordan Harbinger Show, ExpressVPN, Magic Spoon, and Four Sigmatic. Check them out in the
link |
03:16:37.200
description to support this podcast. And now, let me leave you with some words from Charles Darwin.
link |
03:16:42.720
Ignorance more frequently begets confidence than does knowledge. It is those who know little,
link |
03:16:49.600
not those who know much, who so positively assert that this or that problem will never be solved
link |
03:16:55.840
by science. Thank you for listening and hope to see you next time.