Ray Kurzweil: Singularity, Superintelligence, and Immortality | Lex Fridman Podcast #321



link |
00:00:00.000
By the time we get to 2045,
link |
00:00:02.800
we'll be able to multiply our intelligence
link |
00:00:05.320
many millions-fold.
link |
00:00:07.680
And it's just very hard to imagine what that will be like.
link |
00:00:13.560
The following is a conversation with Ray Kurzweil,
link |
00:00:16.840
author, inventor, and futurist,
link |
00:00:19.480
who has an optimistic view of our future
link |
00:00:22.280
as a human civilization,
link |
00:00:24.320
predicting that exponentially improving technologies
link |
00:00:27.280
will take us to a point of a singularity,
link |
00:00:29.880
beyond which superintelligent, artificial intelligence
link |
00:00:33.480
will transform our world in nearly unimaginable ways.
link |
00:00:38.400
18 years ago, in the book The Singularity Is Near,
link |
00:00:41.320
he predicted that the onset of the singularity
link |
00:00:44.000
will happen in the year 2045.
link |
00:00:47.360
He still holds to this prediction and estimate.
link |
00:00:50.800
In fact, he's working on a new book on this topic
link |
00:00:53.440
that will hopefully be out next year.
link |
00:00:56.520
This is the Lex Fridman Podcast.
link |
00:00:58.280
To support it, please check out our sponsors
link |
00:01:00.360
in the description.
link |
00:01:01.640
And now, dear friends, here's Ray Kurzweil.
link |
00:01:06.360
In your 2005 book titled The Singularity is Near,
link |
00:01:10.960
you predicted that the singularity will happen in 2045.
link |
00:01:15.400
So now 18 years later,
link |
00:01:17.640
do you still estimate that the singularity will happen
link |
00:01:20.880
in 2045?
link |
00:01:22.480
And maybe first, what is the singularity,
link |
00:01:24.960
the technological singularity, and when will it happen?
link |
00:01:27.760
Singularity is where computers really change our view
link |
00:01:31.640
of what's important and change who we are.
link |
00:01:35.840
But we're getting close to some salient things
link |
00:01:39.560
that will change who we are.
link |
00:01:42.840
The key thing is 2029,
link |
00:01:45.680
when computers will pass the Turing test.
link |
00:01:50.120
And there's also some controversy
link |
00:01:51.560
whether the Turing test is valid, I believe it is.
link |
00:01:55.080
Most people do believe that,
link |
00:01:57.920
but there's some controversy about that.
link |
00:01:59.680
But Stanford got very alarmed at my prediction about 2029.
link |
00:02:06.520
I made this in 1999, in my book.
link |
00:02:10.520
The Age of Spiritual Machines.
link |
00:02:12.440
And then you repeated the prediction in 2005.
link |
00:02:15.480
In 2005.
link |
00:02:16.600
Yeah.
link |
00:02:17.520
So they held an international conference,
link |
00:02:19.480
you might've been aware of it,
link |
00:02:21.160
of AI experts in 1999
link |
00:02:23.840
to assess this view.
link |
00:02:26.600
So people gave different predictions and they took a poll.
link |
00:02:30.840
It was really the first time
link |
00:02:32.080
that AI experts worldwide were polled on this prediction.
link |
00:02:37.720
And the average poll was 100 years.
link |
00:02:41.400
20% believed it would never happen.
link |
00:02:44.320
And that was the view in 1999.
link |
00:02:48.120
80% believed it would happen,
link |
00:02:50.640
but not within their lifetimes.
link |
00:02:53.200
There's been so many advances in AI
link |
00:02:56.920
that the poll of AI experts has come down over the years.
link |
00:03:01.840
So a year ago, something called Metaculus,
link |
00:03:05.400
which you may be aware of,
link |
00:03:07.120
assessed different types of experts on the future.
link |
00:03:11.560
They again assessed what AI experts then felt.
link |
00:03:16.440
And they were saying 2042.
link |
00:03:18.960
For the Turing test.
link |
00:03:20.440
For the Turing test.
link |
00:03:22.440
Yes, that's coming down.
link |
00:03:23.640
And I was still saying 2029.
link |
00:03:26.360
A few weeks ago, they again did another poll
link |
00:03:30.240
and it was 2030.
link |
00:03:33.000
So AI experts now basically agree with me.
link |
00:03:37.960
I haven't changed at all.
link |
00:03:39.280
I've stayed with 2029.
link |
00:03:42.880
And AI experts now agree with me,
link |
00:03:44.560
but they didn't agree at first.
link |
00:03:46.880
So Alan Turing formulated the Turing test and...
link |
00:03:51.000
Right, now what he said was very little about it.
link |
00:03:54.480
I mean, the 1950 paper where he had articulated
link |
00:03:57.120
the Turing test,
link |
00:03:58.600
he just had a few lines that talked about the Turing test.
link |
00:04:06.880
And it really wasn't very clear how to administer it.
link |
00:04:12.040
And he said if they did it in like 15 minutes,
link |
00:04:16.520
that would be sufficient,
link |
00:04:17.680
which I don't really think is the case.
link |
00:04:20.680
These large language models now,
link |
00:04:22.920
some people are convinced by it already.
link |
00:04:25.520
I mean, you can talk to it and it can have a conversation with you.
link |
00:04:28.440
You can actually talk to it for hours.
link |
00:04:31.720
So it requires a little more depth.
link |
00:04:35.320
There's some problems with large language models,
link |
00:04:38.080
which we can talk about.
link |
00:04:41.800
But some people are convinced by the Turing test.
link |
00:04:46.400
Now, if somebody passes the Turing test,
link |
00:04:50.120
what are the implications of that?
link |
00:04:52.120
Does that mean that they're sentient,
link |
00:04:53.720
that they're conscious or not?
link |
00:04:55.960
It's not necessarily clear what the implications are.
link |
00:05:00.840
Anyway, I believe 2029, that's six, seven years from now,
link |
00:05:07.640
we'll have something that passes the Turing test
link |
00:05:10.320
and a valid Turing test,
link |
00:05:12.440
meaning it goes for hours, not just a few minutes.
link |
00:05:15.280
Can you speak to that a little bit?
link |
00:05:16.560
What is your formulation of the Turing test?
link |
00:05:21.120
You've proposed a very difficult version
link |
00:05:23.160
of the Turing test, so what does that look like?
link |
00:05:25.400
Basically, it's just to assess it over several hours
link |
00:05:30.760
and also have a human judge that's fairly sophisticated
link |
00:05:36.440
on what computers can do and can't do.
link |
00:05:40.760
If you take somebody who's not that sophisticated
link |
00:05:43.800
or even an average engineer,
link |
00:05:48.360
they may not really assess various aspects of it.
link |
00:05:52.080
So you really want the human to challenge the system?
link |
00:05:55.680
Exactly, exactly.
link |
00:05:57.040
On its ability to do things like
link |
00:05:58.640
common sense reasoning, perhaps?
link |
00:06:00.800
That's actually a key problem with large language models.
link |
00:06:04.680
They don't do these kinds of tests
link |
00:06:10.160
that would involve assessing chains of reasoning.
link |
00:06:17.400
But you can lose track of that.
link |
00:06:18.960
If you talk to them, they actually can talk to you
link |
00:06:21.480
pretty well and you can be convinced by it.
link |
00:06:24.840
But it's something that would really convince you
link |
00:06:27.400
that it's a human, whatever that takes.
link |
00:06:31.960
Maybe it would take days or weeks,
link |
00:06:34.800
but it would really convince you that it's human.
link |
00:06:40.880
Large language models can appear that way.
link |
00:06:45.320
You can read conversations and they appear pretty good.
link |
00:06:49.760
There are some problems with it.
link |
00:06:52.240
It doesn't do math very well.
link |
00:06:55.000
You can ask how many legs did 10 elephants have
link |
00:06:58.160
and they'll tell you, well, okay,
link |
00:07:00.040
each elephant has four legs and it's 10 elephants,
link |
00:07:02.360
so it's 40 legs.
link |
00:07:03.720
And you go, okay, that's pretty good.
link |
00:07:05.840
How many legs do 11 elephants have?
link |
00:07:07.920
And they don't seem to understand the question.
link |
00:07:11.480
Do all humans understand that question?
link |
00:07:14.120
No, that's the key thing.
link |
00:07:15.840
I mean, how advanced a human do you want it to be?
link |
00:07:19.400
But we do expect a human to be able
link |
00:07:21.800
to do multi-chain reasoning,
link |
00:07:24.800
to be able to take a few facts and put them together.
link |
00:07:28.280
Not perfectly.
link |
00:07:29.800
And we see that in a lot of polls
link |
00:07:32.760
that people don't do that perfectly at all, but.
link |
00:07:39.200
So it's not very well defined,
link |
00:07:42.000
but it's something where it really would convince you
link |
00:07:44.280
that it's a human.
link |
00:07:45.600
Is your intuition that large language models
link |
00:07:48.840
will not be solely the kind of system
link |
00:07:52.320
that passes the Turing test in 2029?
link |
00:07:55.560
Do we need something else?
link |
00:07:56.760
No, I think it will be a large language model,
link |
00:07:58.680
but they have to go beyond what they're doing now.
link |
00:08:02.920
I think we're getting there.
link |
00:08:05.720
And another key issue is if somebody
link |
00:08:09.200
actually passes the Turing test validly,
link |
00:08:12.160
I would believe they're conscious.
link |
00:08:13.600
And then not everybody would say that.
link |
00:08:14.960
They'd say, okay, it can pass the Turing test,
link |
00:08:17.400
but we don't really believe that it's conscious.
link |
00:08:20.040
That's a whole nother issue.
link |
00:08:23.080
But if it really passes the Turing test,
link |
00:08:24.880
I would believe that it's conscious.
link |
00:08:26.680
But I don't believe that of large language models today.
link |
00:08:32.760
If it appears to be conscious,
link |
00:08:35.520
that's as good as being conscious, at least for you,
link |
00:08:38.240
in some sense.
link |
00:08:40.720
I mean, consciousness is not something that's scientific.
link |
00:08:46.640
I mean, I believe you're conscious,
link |
00:08:49.760
but it's really just a belief.
link |
00:08:51.120
And we believe that about other humans
link |
00:08:52.800
that at least appear to be conscious.
link |
00:08:57.400
When you go outside of shared human assumption,
link |
00:09:01.720
like are animals conscious,
link |
00:09:04.520
some people believe they're not conscious,
link |
00:09:06.200
some people believe they are conscious,
link |
00:09:08.680
and would a machine that acts just like a human be conscious?
link |
00:09:14.520
I mean, I believe it would be,
link |
00:09:17.040
but that's really a philosophical belief.
link |
00:09:20.720
It's not, you can't prove it.
link |
00:09:22.680
I can't take an entity and prove that it's conscious.
link |
00:09:25.440
There's nothing that you can do that would indicate that.
link |
00:09:30.320
It's like saying a piece of art is beautiful.
link |
00:09:32.760
You can say it, multiple people can experience
link |
00:09:36.600
a piece of art is beautiful, but you can't prove it.
link |
00:09:41.280
But it's also an extremely important issue.
link |
00:09:44.800
I mean, imagine if you had something
link |
00:09:47.000
where nobody's conscious, the world may as well not exist.
link |
00:09:55.640
And so some people, like, say, Marvin Minsky,
link |
00:10:02.600
said, well, consciousness is not logical,
link |
00:10:05.920
it's not scientific, and therefore we should dismiss it.
link |
00:10:08.360
And any talk about consciousness is just not to be believed.
link |
00:10:13.360
But when he actually engaged with somebody who was conscious,
link |
00:10:17.360
he actually acted as if they were conscious.
link |
00:10:20.360
He didn't ignore that.
link |
00:10:22.200
He acted as if consciousness does matter.
link |
00:10:24.760
Exactly. Whereas he said it didn't matter.
link |
00:10:27.960
Well, that's Marvin Minsky.
link |
00:10:29.800
Yeah.
link |
00:10:30.640
He's full of contradictions.
link |
00:10:32.000
But that's true of a lot of people as well.
link |
00:10:35.200
But to you, consciousness matters.
link |
00:10:37.440
But to me, it's very important,
link |
00:10:39.720
but I would say it's not a scientific issue.
link |
00:10:45.480
It's a philosophical issue.
link |
00:10:47.320
And people have different views.
link |
00:10:48.720
And some people believe that anything
link |
00:10:50.520
that makes a decision is conscious.
link |
00:10:52.520
So your light switch is conscious.
link |
00:10:54.520
It's the level of consciousness that's low.
link |
00:10:57.120
It's not very interesting, but that's a consciousness.
link |
00:11:02.120
And anything, so a computer that makes
link |
00:11:04.920
a more interesting decision, still not a human being.
link |
00:11:08.520
Still not at human levels, but it's also conscious
link |
00:11:11.320
and at a higher level than your light switch.
link |
00:11:13.720
So that's one view.
link |
00:11:15.720
There's many different views of what consciousness is.
link |
00:11:19.720
So if a system passes the Turing test,
link |
00:11:22.720
it's not scientific.
link |
00:11:25.720
But in the issues of philosophy,
link |
00:11:29.720
things like ethics start to enter the picture.
link |
00:11:31.720
Do you think there would be...
link |
00:11:34.720
We would start contending as a human species
link |
00:11:38.920
about the ethics of turning off such a machine.
link |
00:11:42.920
Yeah, I mean, that's definitely come up.
link |
00:11:46.920
It hasn't come up in reality yet.
link |
00:11:48.920
But I'm talking about 2029.
link |
00:11:51.920
It's not that many years from now.
link |
00:11:54.920
And so what are our obligations to it?
link |
00:11:58.920
It has a different...
link |
00:12:00.920
I mean, a computer that's conscious
link |
00:12:02.920
has a little bit different connotations than a human.
link |
00:12:11.920
We have a continuous consciousness.
link |
00:12:14.920
We're in an entity that does not last forever.
link |
00:12:20.920
Now, actually, a significant portion of humans still exist
link |
00:12:26.920
and are therefore still conscious.
link |
00:12:30.920
But anybody who is over a certain age doesn't exist anymore.
link |
00:12:36.920
That wouldn't be true of a computer program.
link |
00:12:39.920
You could completely turn it off
link |
00:12:41.920
and a copy of it could be stored and you could recreate it.
link |
00:12:45.920
And so it has a different type of validity.
link |
00:12:50.920
You could actually take it back in time.
link |
00:12:52.920
You could eliminate its memory and have it go over again.
link |
00:12:55.920
I mean, it has a different kind of connotation
link |
00:12:59.920
than humans do.
link |
00:13:01.920
Well, perhaps you can do the same thing with humans.
link |
00:13:03.920
It's just that we don't know how to do that yet.
link |
00:13:06.920
It's possible that we figure out all of these things on the machine first.
link |
00:13:11.920
But that doesn't mean the machine isn't conscious.
link |
00:13:14.920
I mean, if you look at the way people react,
link |
00:13:17.920
say C-3PO or other machines that are conscious in movies,
link |
00:13:24.920
they don't actually present how it's conscious.
link |
00:13:26.920
We see that they are a machine
link |
00:13:29.920
and people will believe that they are conscious
link |
00:13:32.920
and they'll actually worry about it if they get in trouble and so on.
link |
00:13:36.920
So, 2029 is going to be the first year when a major thing happens.
link |
00:13:42.920
Right.
link |
00:13:43.920
And that will shake our civilization to start to consider the role of AI.
link |
00:13:49.920
Yes and no.
link |
00:13:50.920
I mean, this one guy at Google claimed that the machine was conscious.
link |
00:13:57.920
That's just one person.
link |
00:13:59.920
Right.
link |
00:14:00.920
When it starts to happen at scale.
link |
00:14:02.920
Well, that's exactly right because most people have not taken that position.
link |
00:14:07.920
I don't take that position.
link |
00:14:08.920
I mean, I've used different things like this
link |
00:14:16.920
and they don't appear to me to be conscious.
link |
00:14:19.920
As we eliminate various problems of these large language models,
link |
00:14:25.920
more and more people will accept that they're conscious.
link |
00:14:29.920
So, when we get to 2029,
link |
00:14:31.920
I think a large fraction of people will believe that they're conscious.
link |
00:14:37.920
So, it's not going to happen all at once.
link |
00:14:40.920
I believe that will actually happen gradually,
link |
00:14:43.920
and it's already started to happen.
link |
00:14:46.920
And so, that takes us one step closer to the singularity.
link |
00:14:51.920
Another step then is in the 2030s when we can actually connect our neocortex,
link |
00:14:59.920
which is where we do our thinking, to computers.
link |
00:15:04.920
And I mean, just as this actually gains a lot from being connected to computers
link |
00:15:11.920
that will amplify its abilities.
link |
00:15:14.920
I mean, if this did not have any connection, it would be pretty stupid.
link |
00:15:18.920
It could not answer any of your questions.
link |
00:15:21.920
If you're just listening to this, by the way,
link |
00:15:23.920
we're just holding up the all powerful smartphone.
link |
00:15:29.920
So, we're going to do that directly from our brains.
link |
00:15:32.920
I mean, these are pretty good.
link |
00:15:34.920
These already have amplified our intelligence.
link |
00:15:36.920
I'm already much smarter than I would otherwise be if I didn't have this.
link |
00:15:41.920
Because I remember my first book, The Age of Intelligent Machines,
link |
00:15:48.920
there was no way to get information from computers.
link |
00:15:51.920
I actually would go to a library, find a book,
link |
00:15:54.920
find the page that had the information I wanted,
link |
00:15:57.920
and I'd go to the copier and my most significant information tool
link |
00:16:03.920
was a roll of quarters with which I could feed the copier.
link |
00:16:07.920
So, we're already greatly advanced that we have these things.
link |
00:16:12.920
There's a few problems with it.
link |
00:16:14.920
First of all, I constantly put it down, and I don't remember where I put it.
link |
00:16:18.920
I've actually never lost it, but you have to find it,
link |
00:16:23.920
and then you have to turn it on.
link |
00:16:25.920
So, there's a certain amount of steps.
link |
00:16:27.920
It would actually be quite useful if someone would just listen to your conversation
link |
00:16:32.920
and say, oh, that's so and so actress and tell you what you're talking about.
link |
00:16:40.920
So, going from active to passive where it just permeates your whole life.
link |
00:16:45.920
Yeah, exactly.
link |
00:16:46.920
The way your brain does when you're awake.
link |
00:16:48.920
Your brain is always there.
link |
00:16:50.920
Right.
link |
00:16:51.920
That's something that could actually just about be done today
link |
00:16:55.920
where you would listen to your conversation, understand what you're saying,
link |
00:16:58.920
understand what you're missing, and give you that information.
link |
00:17:03.920
But another step is to actually go inside your brain.
link |
00:17:06.920
Yeah.
link |
00:17:07.920
And there are some prototypes where you can connect your brain.
link |
00:17:14.920
They actually don't have the amount of bandwidth that we need.
link |
00:17:18.920
They can work, but they work fairly slowly.
link |
00:17:21.920
So, if it actually would connect to your neocortex,
link |
00:17:26.920
and the neocortex, which I described in How to Create a Mind,
link |
00:17:31.920
the neocortex is actually, it has different levels.
link |
00:17:37.920
And as you go up the levels, it's kind of like a pyramid.
link |
00:17:41.920
The top level is fairly small.
link |
00:17:43.920
And that's the level where you want to connect these brain extenders.
link |
00:17:53.920
So, I believe that will happen in the 2030s.
link |
00:17:55.920
So, just the way this is greatly amplified by being connected to the cloud,
link |
00:18:03.920
we can connect our own brain to the cloud and just do what we can do by using this machine.
link |
00:18:13.920
Do you think it would look like the brain computer interface of like Neuralink?
link |
00:18:18.920
Well, Neuralink is an attempt to do that.
link |
00:18:21.920
It doesn't have the bandwidth that we need yet.
link |
00:18:26.920
Right.
link |
00:18:27.920
But I think, I mean, they're going to get permission for this
link |
00:18:31.920
because there are a lot of people who absolutely need it
link |
00:18:34.920
because they can't communicate.
link |
00:18:36.920
I know a couple of people like that who have ideas, but
link |
00:18:41.920
they cannot move their muscles and so on, they can't communicate.
link |
00:18:46.920
So, for them, this would be very valuable.
link |
00:18:51.920
But we could all use it.
link |
00:18:54.920
Basically, it would turn us into something that would be like we have a phone,
link |
00:19:02.920
but it would be in our minds, it would be kind of instantaneous.
link |
00:19:06.920
And maybe communication between two people would not require this low bandwidth mechanism of language.
link |
00:19:13.920
Yes.
link |
00:19:14.920
Exactly. We don't know what that would be.
link |
00:19:16.920
Although we do know that computers can share information like language instantly.
link |
00:19:23.920
They can share many, many books in a second, so we could do that as well.
link |
00:19:30.920
If you look at what our brain does, it actually can manipulate different parameters.
link |
00:19:39.920
So, we talk about these large language models.
link |
00:19:45.920
I mean, I had written that it requires a certain amount of information in order to be effective
link |
00:19:57.920
and that we would not see AI really being effective until it got to that level.
link |
00:20:03.920
And we had large language models that were like 10 billion parameters; they didn't work very well.
link |
00:20:08.920
They finally got to 100 billion parameters, and now they work fairly well.
link |
00:20:12.920
And now we're going to a trillion parameters.
link |
00:20:15.920
If you say LaMDA has 100 billion parameters, what does that mean?
link |
00:20:22.920
Well, what if you had something that had just one parameter?
link |
00:20:27.920
Maybe you want to tell whether or not something is an elephant or not.
link |
00:20:32.920
And so, you put in something that would detect its trunk.
link |
00:20:36.920
If it has a trunk, it's an elephant.
link |
00:20:38.920
If it doesn't have a trunk, it's not an elephant.
link |
00:20:40.920
And that would work fairly well.
link |
00:20:43.920
There's a few problems with it.
link |
00:20:45.920
It really wouldn't be able to tell what a trunk is, but anyway.
link |
00:20:49.920
And maybe other things other than elephants have trunks.
link |
00:20:52.920
You might get really confused.
link |
00:20:54.920
Yeah, exactly.
link |
00:20:55.920
I'm not sure which animals have trunks, but you know.
link |
00:20:59.920
How do you define a trunk?
link |
00:21:01.920
But yeah, that's one parameter.
link |
00:21:03.920
You can do okay.
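The one-parameter detector just described can be sketched in a few lines; this is a hypothetical illustration of the point being made, not anyone's actual model:

```python
# Hypothetical sketch of the one-parameter "elephant detector" from the
# conversation: the entire model is a single rule on a single feature.
def is_elephant(has_trunk: bool) -> bool:
    return has_trunk

print(is_elephant(True))   # an animal with a trunk is called an elephant
print(is_elephant(False))  # no trunk, not an elephant
```

As the conversation notes, this single rule can't define what a trunk is and confuses other trunked animals with elephants; resolving those ambiguities is what the extra billions of parameters buy.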
link |
00:21:05.920
So these things have 100 billion parameters.
link |
00:21:08.920
So they're able to deal with very complex issues.
link |
00:21:11.920
All kinds of trunks.
link |
00:21:13.920
Human beings actually have a little bit more than that,
link |
00:21:15.920
but they're getting to the point where they can emulate humans.
link |
00:21:21.920
If we were able to connect this to our neocortex,
link |
00:21:26.920
we would basically add more of these abilities to make distinctions.
link |
00:21:34.920
And it could ultimately be much smarter and also be attached to information
link |
00:21:39.920
that we feel is reliable.
link |
00:21:42.920
So that's where we're headed.
link |
00:21:44.920
So you think that there will be a merger in the 30s
link |
00:21:48.920
an increasing amount of merging between the human brain and the AI brain.
link |
00:21:55.920
Exactly.
link |
00:21:57.920
And the AI brain is really an emulation of human beings.
link |
00:22:01.920
I mean, that's why we're creating them.
link |
00:22:03.920
Because human beings act the same way,
link |
00:22:06.920
and this is basically to amplify them.
link |
00:22:08.920
I mean, this amplifies our brain.
link |
00:22:11.920
It's a little bit clumsy to interact with,
link |
00:22:14.920
but it definitely is way beyond what we had 15 years ago.
link |
00:22:20.920
But the implementation becomes different,
link |
00:22:22.920
just like a bird versus the airplane.
link |
00:22:25.920
Even though the AI brain is an emulation,
link |
00:22:29.920
it starts adding features we might not otherwise have,
link |
00:22:33.920
like ability to consume a huge amount of information quickly,
link |
00:22:37.920
like look up thousands of Wikipedia articles in one take.
link |
00:22:42.920
Exactly.
link |
00:22:43.920
We can get, for example, to issues like simulated biology
link |
00:22:47.920
where it can simulate many different things at once.
link |
00:22:55.920
We already had one example of simulated biology,
link |
00:22:59.920
which is the Moderna vaccine.
link |
00:23:03.920
And that's going to be now the way in which we create medications.
link |
00:23:10.920
But they were able to simulate what each example of an mRNA
link |
00:23:15.920
would do to a human being,
link |
00:23:17.920
and they were able to simulate that quite reliably.
link |
00:23:20.920
And they actually simulated billions of different mRNA sequences,
link |
00:23:26.920
and they found the ones that were the best
link |
00:23:28.920
and they created the vaccine.
link |
00:23:30.920
And talk about doing that quickly,
link |
00:23:33.920
they did that in two days.
link |
00:23:35.920
Now, how long would a human being take
link |
00:23:37.920
to simulate billions of different mRNA sequences?
link |
00:23:40.920
I don't know that we could do it at all,
link |
00:23:42.920
but it would take many years.
link |
00:23:45.920
They did it in two days.
link |
00:23:48.920
One of the reasons that people didn't like vaccines
link |
00:23:52.920
is because it was done too quickly.
link |
00:23:55.920
It was done too fast.
link |
00:23:57.920
And they actually included the time it took to test it out,
link |
00:24:00.920
which was 10 months.
link |
00:24:02.920
So they figured, okay, it took 10 months to create this.
link |
00:24:05.920
Actually, it took us two days.
link |
00:24:08.920
And we also will be able to ultimately do the tests
link |
00:24:11.920
in a few days as well.
link |
00:24:13.920
Oh, because we can simulate how the body will respond to it.
link |
00:24:16.920
That's a little bit more complicated
link |
00:24:18.920
because the body has a lot of different elements,
link |
00:24:22.920
and we have to simulate all of that.
link |
00:24:24.920
But that's coming as well.
link |
00:24:26.920
So ultimately, we could create it in a few days
link |
00:24:29.920
and then test it in a few days, and it would be done.
link |
00:24:32.920
And we can do that with every type of medical insufficiency
link |
00:24:37.920
that we have.
link |
00:24:39.920
So curing all diseases,
link |
00:24:41.920
improving certain functions of the body,
link |
00:24:47.920
supplements, drugs, for recreation, for health,
link |
00:24:53.920
for performance, for productivity, all that kind of stuff.
link |
00:24:56.920
Well, that's where we're headed.
link |
00:24:58.920
Because I mean, right now we have a very inefficient way
link |
00:25:00.920
of creating these new medications.
link |
00:25:03.920
But we've already shown it.
link |
00:25:05.920
And the Moderna vaccine is actually the best
link |
00:25:08.920
of the vaccines we've had.
link |
00:25:11.920
And it literally took two days to create.
link |
00:25:15.920
And we'll get to the point where we can test it out also quickly.
link |
00:25:19.920
Are you impressed by AlphaFold and the solution to the protein folding,
link |
00:25:25.920
which essentially is simulating,
link |
00:25:28.920
modeling this primitive building block of life,
link |
00:25:32.920
which is a protein, in its 3D shape.
link |
00:25:35.920
It's pretty remarkable that they can actually predict
link |
00:25:38.920
what the 3D shape of these things are.
link |
00:25:41.920
But they did it with the same type of neural net,
link |
00:25:45.920
the one, for example, that mastered Go.
link |
00:25:50.920
So it's all the same.
link |
00:25:52.920
It's all the same.
link |
00:25:53.920
The same approach.
link |
00:25:54.920
They took that same thing and just changed the rules to chess.
link |
00:25:58.920
And within a couple of days,
link |
00:26:01.920
AlphaZero played chess at a master level greater than any human being.
link |
00:26:08.920
And the same thing then worked for AlphaFold,
link |
00:26:12.920
which no human had done.
link |
00:26:14.920
I mean, human beings could do,
link |
00:26:16.920
the best humans could maybe do 15, 20 percent
link |
00:26:22.920
of figuring out what the shape would be.
link |
00:26:25.920
And after a few takes, it ultimately did just about 100 percent.
link |
00:26:31.920
Do you still think the singularity will happen in 2045?
link |
00:26:36.920
And what does that look like?
link |
00:26:39.920
Once we can amplify our brain with computers directly,
link |
00:26:45.920
which will happen in the 2030s,
link |
00:26:47.920
that's going to keep growing.
link |
00:26:49.920
It's another whole theme,
link |
00:26:50.920
which is the exponential growth of computing power.
link |
00:26:54.920
So looking at price-performance of computation from 1939 to 2021.
link |
00:26:59.920
So that starts with the very first computer
link |
00:27:02.920
actually created by a German during World War II.
link |
00:27:05.920
And you might have thought that that might be significant,
link |
00:27:08.920
but actually the Germans didn't think computers were significant
link |
00:27:13.920
and they completely rejected it.
link |
00:27:15.920
And the second one was also by Zuse, the Z2.
link |
00:27:19.920
And by the way, we're looking at a plot
link |
00:27:21.920
with the X axis being the year from 1935 to 2025.
link |
00:27:27.920
And on the Y axis, in log scale, is computations per second
link |
00:27:31.920
per constant dollar.
link |
00:27:33.920
So dollars normalized for inflation.
link |
00:27:36.920
And it's growing linearly on the log scale,
link |
00:27:39.920
which means it's growing exponentially.
link |
00:27:41.920
The third one was the British computer,
link |
00:27:43.920
which the Allies did take very seriously.
link |
00:27:46.920
And it cracked the German code
link |
00:27:50.920
and enabled the British to win the Battle of Britain,
link |
00:27:54.920
which otherwise absolutely would not have happened
link |
00:27:56.920
if they hadn't cracked the code using that computer.
link |
00:28:00.920
But that's an exponential graph.
link |
00:28:02.920
So a straight line on that graph is exponential growth.
link |
00:28:06.920
And you see 80 years of exponential growth.
link |
00:28:10.920
And I would say about every five years,
link |
00:28:14.920
and this happened shortly before the pandemic,
link |
00:28:17.920
people say it's come to an end. They call it Moore's law,
link |
00:28:20.920
which is not correct, because it's not all Intel.
link |
00:28:24.920
In fact, it started decades before Intel was even created.
link |
00:28:28.920
It didn't start with transistors formed into a grid.
link |
00:28:33.920
So it's not just transistor count or transistor size.
link |
00:28:36.920
Right.
link |
00:28:37.920
It started with relays, then went to vacuum tubes,
link |
00:28:42.920
then went to individual transistors,
link |
00:28:45.920
and then to integrated circuits.
link |
00:28:48.920
And the integrated circuits actually starts
link |
00:28:53.920
like in the middle of this graph.
link |
00:28:55.920
And it has nothing to do with Intel.
link |
00:28:58.920
Intel actually was a key part of this.
link |
00:29:01.920
But a few years ago, they stopped making the fastest chips.
link |
00:29:06.920
But if you take the fastest chip of any technology in that year,
link |
00:29:13.920
you get this kind of graph.
link |
00:29:15.920
And it's definitely continuing for 80 years.
link |
00:29:18.920
So you don't think Moore's law broadly defined is dead?
link |
00:29:23.920
It's been declared dead multiple times throughout this process.
link |
00:29:28.920
I don't like the term Moore's law,
link |
00:29:30.920
because it has nothing to do with Moore or Intel.
link |
00:29:33.920
But yes, the exponential growth of computing is continuing
link |
00:29:40.920
and has never stopped.
link |
00:29:42.920
From various sources.
link |
00:29:43.920
I mean, it went through World War II.
link |
00:29:45.920
It went through global recessions.
link |
00:29:48.920
It's just continuing.
link |
00:29:52.920
And if you continue that out, along with software gains,
link |
00:29:57.920
which is all another issue, and they really multiply,
link |
00:30:02.920
whatever you get from software gains,
link |
00:30:04.920
you multiply by the computer gains,
link |
00:30:07.920
you get faster and faster speed.
link |
00:30:10.920
These are actually the largest computer models that have been created.
link |
00:30:15.920
And that actually doubles roughly twice a year.
link |
00:30:19.920
Like every six months, it doubles.
link |
00:30:22.920
So we're looking at a plot from 2010 to 2022.
link |
00:30:27.920
On the x axis is the publication date of the model,
link |
00:30:30.920
and perhaps sometimes the actual paper associated with it.
link |
00:30:33.920
And on the y axis is training compute in FLOPs.
link |
00:30:39.920
And so basically, this is looking at the increase in the,
link |
00:30:44.920
not transistors, but the computational power of neural networks.
link |
00:30:50.920
Yes, the computational power that created these models.
link |
00:30:54.920
And that's doubled every six months.
link |
00:30:56.920
Which is even faster than the transistor doubling.
link |
00:30:59.920
Yeah.
link |
00:31:01.920
Actually, since it grows faster than the cost comes down,
link |
00:31:05.920
it has actually become a greater investment to create these models.
link |
00:31:11.920
But at any rate, by the time you get to 2045,
link |
00:31:15.920
we'll be able to multiply our intelligence many millionsfold.
link |
00:31:20.920
And it's just very hard to imagine what that will be like.
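As a rough back-of-the-envelope check on that "many millionsfold" figure, assuming, purely for illustration, that the six-month doubling of training compute mentioned above simply continued unchanged out to 2045:

```python
# Hypothetical extrapolation (assumed constant rate, not a prediction
# from the conversation beyond the six-month doubling figure):
# compute doubles every six months, i.e. two doublings per year,
# from 2022 out to 2045.
doublings = 2 * (2045 - 2022)  # 46 doublings
factor = 2 ** doublings

# Well beyond "many millionsfold": on the order of 10^13.
assert factor > 10 ** 6
print(f"{factor:.1e}")  # prints 7.0e+13
```

Whether the doubling rate actually holds for two more decades is exactly the kind of assumption the rest of the conversation debates; the arithmetic just shows how fast the compounding runs if it does.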
link |
00:31:24.920
And that's the singularity where we can't even imagine.
link |
00:31:27.920
Right. That's why we call it the singularity.
link |
00:31:29.920
It's a singularity in physics.
link |
00:31:31.920
Something gets sucked into a singularity and you can't tell what's going on in there.
link |
00:31:36.920
Because no information can get out of it.
link |
00:31:39.920
There's various problems with that.
link |
00:31:41.920
But that's the idea.
link |
00:31:43.920
It's too much beyond what we can imagine.
link |
00:31:48.920
Do you think it's possible we don't notice it, that what the singularity actually feels like
link |
00:31:55.920
is we just live through it with exponentially increasing cognitive capabilities?
link |
00:32:04.920
And we almost, because everything is moving so quickly,
link |
00:32:08.920
aren't really able to introspect that our life has changed.
link |
00:32:13.920
Yeah. But I mean, we will have that much greater capacity to understand things.
link |
00:32:18.920
So we should be able to look back.
link |
00:32:20.920
Looking at history, understand history.
link |
00:32:22.920
But we will need people, basically like you and me,
link |
00:32:26.920
to actually think about these things.
link |
00:32:28.920
Think about it.
link |
00:32:29.920
But we might be distracted by all the other sources of entertainment and fun.
link |
00:32:33.920
Because the exponential power of intellect is growing.
link |
00:32:39.920
But also, there'll be a lot of fun.
link |
00:32:43.920
The amount of fun you can have, you know.
link |
00:32:46.920
I mean, we already have a lot of fun with computer games and so on that are really quite remarkable.
link |
00:32:51.920
Thinking about the digital world, the metaverse, virtual reality.
link |
00:32:57.920
Will that have a component in this or will most of our advancement be in physical?
link |
00:33:01.920
Well, that's a little bit like Second Life.
link |
00:33:04.920
Although Second Life actually didn't work very well because it couldn't actually handle too many people.
link |
00:33:09.920
And I don't think the metaverse has come to being.
link |
00:33:14.920
I think there will be something like that.
link |
00:33:16.920
It won't necessarily be from that one company.
link |
00:33:20.920
I mean, there's going to be competitors.
link |
00:33:22.920
But yes, we're going to live increasingly online.
link |
00:33:25.920
Particularly if our brains are online, I mean, how could we not be online?
link |
00:33:30.920
Do you think it's possible that, given this merger with AI, most of our meaningful interactions will be in this virtual world?
link |
00:33:41.920
Most of our life, we fall in love, we make friends, we come up with ideas, we do collaborations, we have fun.
link |
00:33:48.920
I actually know somebody who's marrying somebody that they never met.
link |
00:33:52.920
I think they just met her briefly before the wedding.
link |
00:33:56.920
But she actually fell in love with this other person, never having met them.
link |
00:34:04.920
And I think the love is real.
link |
00:34:09.920
That's a beautiful story.
link |
00:34:10.920
But do you think that story is one that might be experienced not just by hundreds of thousands of people, but instead by hundreds of millions of people?
link |
00:34:21.920
I mean, it really gives you appreciation for these virtual ways of communicating.
link |
00:34:27.920
And if anybody can do it, then it's really not such a freak story.
link |
00:34:33.920
So I think more and more people will do that.
link |
00:34:36.920
But that's turning our back on our entire history of evolution.
link |
00:34:40.920
The old days, we used to fall in love by holding hands and sitting by the fire, that kind of stuff.
link |
00:34:48.920
Actually, I have five patents on where you can hold hands, even if you're separated.
link |
00:34:55.920
Great.
link |
00:34:57.920
So the touch, the sense, it's all just senses.
link |
00:35:00.920
It's all just, it's not just that you're touching someone or not.
link |
00:35:06.920
There's a whole way of doing it and it's very subtle.
link |
00:35:09.920
But ultimately, we can emulate all of that.
link |
00:35:16.920
Are you excited by that future?
link |
00:35:18.920
Do you worry about that future?
link |
00:35:22.920
I have certain worries about the future, but not that virtual touch.
link |
00:35:28.920
Well, I agree with you.
link |
00:35:30.920
You described six stages in the evolution of information processing in the universe as you started to describe.
link |
00:35:38.920
Can you maybe talk through some of those stages from the physics and chemistry to DNA and brains
link |
00:35:45.920
and then to the very end, to the very beautiful end of this process?
link |
00:35:51.920
Well, it actually gets more rapid.
link |
00:35:54.920
So physics and chemistry, that's how we started.
link |
00:35:59.920
So from the very beginning of the universe.
link |
00:36:01.920
We had lots of electrons and various things traveling around.
link |
00:36:06.920
And that took actually many billions of years, kind of jumping ahead here to kind of some of the last stages
link |
00:36:15.920
where we have things like love and creativity.
link |
00:36:18.920
It's really quite remarkable that that happens.
link |
00:36:22.920
But finally, physics and chemistry created biology and DNA.
link |
00:36:29.920
And now you had actually one type of molecule that described the cutting edge of this process.
link |
00:36:38.920
And we go from physics and chemistry to biology.
link |
00:36:43.920
And finally, biology created brains.
link |
00:36:47.920
Not everything that's created by biology has a brain, but eventually brains came along.
link |
00:36:55.920
And all of this is happening faster and faster.
link |
00:36:59.920
It created increasingly complex organisms.
link |
00:37:03.920
Another key thing is actually not just brains, but our thumb.
link |
00:37:11.920
Because there's a lot of animals with brains even bigger than humans.
link |
00:37:18.920
Elephants have a bigger brain.
link |
00:37:20.920
Whales have a bigger brain.
link |
00:37:22.920
But they've not created technology because they don't have a thumb.
link |
00:37:28.920
So that's one of the really key elements in the evolution of humans.
link |
00:37:33.920
This physical manipulator device that's useful for puzzle solving in the physical reality.
link |
00:37:40.920
So I could think, I could look at a tree and go, oh, I could actually strip that branch down
link |
00:37:45.920
and eliminate the leaves and carve a tip on it.
link |
00:37:49.920
And I would create technology.
link |
00:37:51.920
And you can't do that if you don't have a thumb.
link |
00:37:58.920
So thumbs then created technology.
link |
00:38:03.920
And technology also had a memory.
link |
00:38:07.920
And now those memories are competing with the scale and scope of human beings.
link |
00:38:14.920
And ultimately we'll go beyond it.
link |
00:38:17.920
And then we're going to merge human technology with human intelligence
link |
00:38:26.920
and understand how human intelligence works, which I think we already do.
link |
00:38:32.920
And we're putting that into our human technology.
link |
00:38:38.920
So create the technology inspired by our own intelligence
link |
00:38:42.920
and then that technology supersedes us in terms of its capabilities.
link |
00:38:46.920
And we ride along.
link |
00:38:48.920
Or do you ultimately see it as...
link |
00:38:50.920
We ride along, but a lot of people don't see that.
link |
00:38:52.920
They say, well, you've got humans and you've got machines
link |
00:38:55.920
and there's no way we can ultimately compete with machines.
link |
00:38:59.920
And you can already see that.
link |
00:39:02.920
Lee Sedol, who's like the best Go player in the world,
link |
00:39:06.920
says he's not going to play Go anymore.
link |
00:39:09.920
Because playing Go for humans, that was like the ultimate in intelligence
link |
00:39:14.920
because no one else could do that.
link |
00:39:17.920
But now a machine can actually go way beyond him.
link |
00:39:21.920
And so he says, well, there's no point playing it anymore.
link |
00:39:24.920
That may be more true for games than it is for life.
link |
00:39:29.920
I think there's a lot of benefit to working together with AI in regular life.
link |
00:39:33.920
So if you were to put a probability on it,
link |
00:39:37.920
is it more likely that we merge with AI or AI replaces us?
link |
00:39:42.920
A lot of people just think computers come along and they compete with them.
link |
00:39:47.920
We can't really compete and that's the end of it.
link |
00:39:51.920
As opposed to them increasing our abilities.
link |
00:39:56.920
And if you look at most technology, it increases our abilities.
link |
00:40:01.920
I mean, look at the history of work.
link |
00:40:06.920
Look at what people did 100 years ago.
link |
00:40:10.920
Does any of that exist anymore?
link |
00:40:12.920
People, I mean, if you were to predict that all of these jobs would go away
link |
00:40:18.920
and would be done by machines, people would say, well, there's going to be,
link |
00:40:22.920
no one's going to have jobs and it's going to be massive unemployment.
link |
00:40:28.920
But I show in this book that's coming out,
link |
00:40:33.920
the amount of people that are working, even as a percentage of the population,
link |
00:40:39.920
has gone way up.
link |
00:40:41.920
We're looking at the x axis year from 1774 to 2024
link |
00:40:45.920
and on the y axis, personal income per capita in constant dollars
link |
00:40:50.920
and it's growing super linearly.
link |
00:40:52.920
I mean, it's 2021 constant dollars and it's gone way up.
link |
00:40:57.920
That's not what you would predict,
link |
00:41:00.920
given that we would predict that all these jobs would go away.
link |
00:41:04.920
But the reason it's gone up is because we've basically enhanced our own capabilities
link |
00:41:09.920
by using these machines as opposed to them just competing with us.
link |
00:41:13.920
That's a key way in which we're going to be able to become far smarter than we are now
link |
00:41:18.920
by increasing the number of different parameters we can consider in making a decision.
link |
00:41:25.920
I was very fortunate.
link |
00:41:27.920
I am very fortunate to be able to get a glimpse, a preview of your upcoming book,
link |
00:41:34.920
The Singularity Is Nearer.
link |
00:41:37.920
One of the themes outside of just discussing the increasing exponential growth of technology,
link |
00:41:44.920
one of the themes is that things are getting better in all aspects of life
link |
00:41:50.920
and you talked just about this.
link |
00:41:53.920
One of the things you're saying is with jobs,
link |
00:41:55.920
so let me just ask about that.
link |
00:41:57.920
There is a big concern that automation, especially powerful AI, will get rid of jobs.
link |
00:42:06.920
People will lose jobs.
link |
00:42:07.920
As you were saying, the sense is, throughout the history of the 20th century,
link |
00:42:13.920
automation did not do that ultimately.
link |
00:42:16.920
The question is, will this time be different?
link |
00:42:20.920
That is the question.
link |
00:42:22.920
Will this time be different?
link |
00:42:24.920
It really has to do with how quickly we can merge with this type of intelligence.
link |
00:42:30.920
LaMDA, GPT-3, they're out there, and maybe they have overcome some of their key problems.
link |
00:42:39.920
If we haven't really enhanced human intelligence, that might be a negative scenario.
link |
00:42:49.920
That is why we create technologies to enhance ourselves.
link |
00:42:55.920
I believe we will be enhanced. We're not just going to sit here with 300 million modules in our neocortex.
link |
00:43:08.920
We're going to be able to go beyond that because that's useful,
link |
00:43:15.920
so we can multiply that by 10, 100, 1,000, a million.
link |
00:43:25.920
You might think, well, what's the point of doing that?
link |
00:43:29.920
It's like asking somebody that's never heard music, well, what's the value of music?
link |
00:43:36.920
I mean, you can't appreciate it until you've created it.
link |
00:43:40.920
There's some worry that there will be a wealth disparity, a class or wealth disparity.
link |
00:43:47.920
Only the rich people will be, basically, the rich people will first have access to this kind of thing,
link |
00:43:54.920
and then, because of the ability to merge, they will get richer exponentially faster.
link |
00:44:01.920
I say that just like cell phones.
link |
00:44:04.920
There's like four billion cell phones in the world today.
link |
00:44:09.920
In fact, when cell phones first came out, you had to be fairly wealthy.
link |
00:44:14.920
They were not inexpensive.
link |
00:44:16.920
You had to have some wealth in order to afford them.
link |
00:44:19.920
Yeah, there were these big, sexy phones.
link |
00:44:21.920
And they didn't work very well.
link |
00:44:23.920
They did almost nothing.
link |
00:44:25.920
So you can only afford these things if you're wealthy at a point where they really don't work very well.
link |
00:44:34.920
So achieving scale and making it inexpensive is part of making the thing work well.
link |
00:44:41.920
Exactly.
link |
00:44:42.920
So these are not totally cheap, but they're pretty cheap.
link |
00:44:46.920
You can get them for a few hundred dollars.
link |
00:44:51.920
Especially given the kind of things it provides for you.
link |
00:44:54.920
There's a lot of people in the third world that have very little, but they have a smartphone.
link |
00:44:59.920
Yeah, absolutely.
link |
00:45:01.920
And the same will be true with AI.
link |
00:45:03.920
I mean, I see homeless people have their own cell phones.
link |
00:45:06.920
Yeah, so your sense is any kind of advanced technology will take the same trajectory.
link |
00:45:13.920
Right.
link |
00:45:14.920
It ultimately becomes cheap and will be affordable.
link |
00:45:18.920
I probably would not be the first person to put something in my brain to connect to computers,
link |
00:45:27.920
because I think it will have limitations.
link |
00:45:29.920
But once it's really perfected, and at that point it will be pretty inexpensive,
link |
00:45:35.920
I think it will be pretty affordable.
link |
00:45:38.920
So in which other ways, as you outline your book, is life getting better?
link |
00:45:44.920
Well, I mean, I have 50 charts in there where everything is getting better.
link |
00:45:50.920
I think there's a kind of cynicism about, even if you look at extreme poverty, for example.
link |
00:45:57.920
For example, this is actually a poll taken on extreme poverty,
link |
00:46:02.920
and the people who were asked, has poverty gotten better or worse?
link |
00:46:07.920
And the options are increased by 50%, increased by 25%, remain the same,
link |
00:46:13.920
decreased by 25%, decreased by 50%.
link |
00:46:16.920
If you're watching this or listening to this, try to vote for yourself.
link |
00:46:20.920
70% thought it had gotten worse, and that's the general impression.
link |
00:46:26.920
88% thought it had either gotten worse or remained the same.
link |
00:46:31.920
Only 1% thought it decreased by 50%, and that is the answer.
link |
00:46:36.920
It actually decreased by 50%.
link |
00:46:38.920
So only 1% of people got the right optimistic estimate of how poverty is...
link |
00:46:44.920
Right, and this is the reality, and it's true of almost everything you look at.
link |
00:46:50.920
You don't want to go back 100 years or 50 years.
link |
00:46:53.920
Things were quite miserable then, but we tend not to remember that.
link |
00:47:00.920
So literacy rate increasing over the past few centuries across all the different nations,
link |
00:47:06.920
nearly to 100% across many of the nations in the world.
link |
00:47:11.920
It's gone way up, average years of education have gone way up.
link |
00:47:14.920
Life expectancy is also increasing.
link |
00:47:17.920
Life expectancy was 48 in 1900.
link |
00:47:23.920
And it's over 80 now.
link |
00:47:25.920
And it's going to continue to go up, particularly as we get into more advanced stages of simulated biology.
link |
00:47:32.920
For life expectancy, these trends are the same for at birth, age 1, age 5, age 10,
link |
00:47:37.920
so it's not just infant mortality.
link |
00:47:39.920
And I have 50 more graphs in the book about all kinds of things.
link |
00:47:44.920
Even spread of democracy, which might bring up some sort of controversial issues,
link |
00:47:51.920
it still has gone way up.
link |
00:47:54.920
Well, that one is gone way up, but that one is a bumpy road, right?
link |
00:47:58.920
Exactly, and some places that represent democracy can go backwards,
link |
00:48:04.920
but we basically had no democracies before the creation of the United States,
link |
00:48:10.920
which is a little over two centuries ago, which is on the scale of human history isn't that long.
link |
00:48:16.920
Do you think superintelligence systems will help with democracy?
link |
00:48:22.920
So what is democracy?
link |
00:48:24.920
Democracy is giving a voice to the populace and having their ideas, having their beliefs,
link |
00:48:33.920
having their views represented.
link |
00:48:37.920
Well, I hope so.
link |
00:48:40.920
I mean, we've seen social networks can spread conspiracy theories,
link |
00:48:48.920
which have been quite negative, for example, being against any kind of thing that would help your health.
link |
00:48:57.920
So those kinds of ideas have, on social media, what you notice is they increase engagement, so dramatic division increases engagement.
link |
00:49:09.920
Do you worry about AI systems that will learn to maximize that division?
link |
00:49:15.920
I mean, I do have some concerns about this, and I have a chapter in the book about the perils of advanced AI,
link |
00:49:29.920
spreading misinformation on social networks is one of them, but there are many others.
link |
00:49:35.920
What's the one that worries you the most that we should think about to try to avoid?
link |
00:49:46.920
Well, it's hard to choose.
link |
00:49:49.920
We do have the nuclear power that evolved when I was a child.
link |
00:49:56.920
I remember we would actually do these drills against a nuclear war. We'd get under our desks and put our hands behind our heads to protect us from a nuclear war.
link |
00:50:10.920
It seems to work. We're still around.
link |
00:50:14.920
You're protected.
link |
00:50:16.920
But that's still a concern, and there are dangerous situations that can take place in biology.
link |
00:50:26.920
Someone could create a virus that's very, I mean, we have viruses that are hard to spread, and they can be very dangerous.
link |
00:50:41.920
And we have viruses that are easy to spread, but they're not so dangerous.
link |
00:50:49.920
Somebody could create something that would be very easy to spread and very dangerous and be very hard to stop.
link |
00:50:57.920
It could be something that would spread without people noticing because people could get it, they'd have no symptoms,
link |
00:51:05.920
and then everybody would get it, and then symptoms would occur maybe a month later.
link |
00:51:11.920
And that actually doesn't occur normally because if we were to have a problem with that, we wouldn't exist.
link |
00:51:26.920
So the fact that humans exist means that we don't have viruses that can spread easily and kill us because otherwise we wouldn't exist.
link |
00:51:36.920
Yeah, viruses don't want to do that. They want to spread and keep the host alive somewhat.
link |
00:51:43.920
So you can describe various dangers with biology, also nanotechnology, which we actually haven't experienced yet, but there are people that are creating nanotechnology and I described that in the book.
link |
00:51:57.920
Now you're excited by the possibilities of nanotechnology, of nanobots, of being able to do things inside our body, inside our mind that's going to help.
link |
00:52:06.920
What's exciting, what's terrifying about nanobots?
link |
00:52:10.920
What's exciting is that that's a way to communicate with our neocortex because each neocortex module is pretty small and you need a small entity that can actually get in there and establish a communication channel.
link |
00:52:25.920
And that's going to really be necessary to connect our brains to AI within ourselves because otherwise it would be hard for us to compete with it.
link |
00:52:38.920
It's a high bandwidth way.
link |
00:52:40.920
Yeah, and that's key actually because a lot of the things like Neuralink are really not high bandwidth yet.
link |
00:52:49.920
So nanobots is the way you achieve high bandwidth. How much intelligence would those nanobots have?
link |
00:52:55.920
Yeah, they don't need a lot. Just enough to basically establish a communication channel to one nanobot.
link |
00:53:04.920
It's primarily about communication between external computing devices and our biological thinking machine.
link |
00:53:14.920
What worries you about nanobots? Is it similar to the viruses?
link |
00:53:19.920
Well, I mean, this is the grey goo challenge. If you had a nanobot that wanted to create any kind of entity and replicate itself and was able to operate in a natural environment,
link |
00:53:40.920
you could turn everything into that entity and basically destroy all biological life.
link |
00:53:50.920
So you mentioned nuclear weapons.
link |
00:53:53.920
Yeah.
link |
00:53:56.920
I'd love to hear your opinion about the 21st century and whether you think we might destroy ourselves.
link |
00:54:04.920
And maybe your opinion, if it has changed by looking at what's going on in Ukraine, that we could have a hot war with nuclear powers involved and the tensions building
link |
00:54:20.920
and a seeming forgetting of how terrifying and destructive nuclear weapons are.
link |
00:54:28.920
Do you think humans might destroy ourselves in the 21st century and if we do, how? And how do we avoid it?
link |
00:54:37.920
I don't think that's going to happen, despite the terrors of that war. It is a possibility, but I mean, I don't...
link |
00:54:49.920
It's unlikely in your mind.
link |
00:54:51.920
Yeah. Even with the tensions we've had with this one nuclear power plant that's been taken over, it's very tense.
link |
00:55:04.920
But I don't actually see a lot of people worrying that that's going to happen. I think we'll avoid that.
link |
00:55:11.920
We had two nuclear bombs go off in 45, so now we're 77 years later.
link |
00:55:20.920
Yeah, we're doing pretty good.
link |
00:55:22.920
We've never had another one go off in anger.
link |
00:55:26.920
People forget. People forget the lessons of history.
link |
00:55:30.920
Well, yeah. I mean, I am worried about it. I mean, that is definitely a challenge.
link |
00:55:36.920
But you believe that we'll make it out and ultimately, superintelligent AI will help us make it out, as opposed to destroy us.
link |
00:55:46.920
I think so, but we do have to be mindful of these dangers, and there are other dangers besides nuclear weapons.
link |
00:55:55.920
So to get back to merging with AI, will we be able to upload our mind in a computer in a way where we might even transcend the constraints of our bodies?
link |
00:56:10.920
So copy our mind into a computer and leave the body behind?
link |
00:56:15.920
Let me describe one thing I've already done with my father.
link |
00:56:19.920
That's a great story.
link |
00:56:21.920
So we created technology. This is public came out, I think, six years ago, where you could ask any question.
link |
00:56:32.920
And the released product, which I think is still on the market, would read 200,000 books.
link |
00:56:39.920
And then find the one sentence in 200,000 books that best answered your question.
link |
00:56:47.920
It's actually quite interesting. You can ask all kinds of questions and you get the best answer in 200,000 books.
link |
00:56:55.920
But I was also able to take it and not go through 200,000 books, but go through a book that I put together,
link |
00:57:05.920
which is basically everything my father had written.
link |
00:57:09.920
So everything he had written, I had gathered, and we created a book, everything that Fredric Kurzweil had written.
link |
00:57:19.920
Now, I didn't think this actually would work that well because the stuff he had written was about how to lay things out.
link |
00:57:29.920
I mean, he directed choral groups and music groups.
link |
00:57:38.920
And he would be laying out how the people should, where they should sit and how to fund this and all kinds of things that really didn't seem that interesting.
link |
00:57:54.920
And yet, when you ask a question, it would go through it and it would actually give you a very good answer.
link |
00:58:03.920
So I said, well, you know, who's the most interesting composer?
link |
00:58:07.920
And he said, well, definitely Brahms.
link |
00:58:09.920
He would go on about how Brahms was fabulous and talk about the importance of music education.
link |
00:58:16.920
You could have an essential question and answer, a conversation with him.
link |
00:58:21.920
And I have a conversation with him, which was actually more interesting than talking to him because if you talk to him,
link |
00:58:26.920
he'd be concerned about how they're going to lay out this performance for a choral group.
link |
00:58:33.920
He'd be concerned about the day to day versus the big questions.
link |
00:58:36.920
Exactly, yeah.
link |
00:58:38.920
And you did ask about the meaning of life and he answered love.
link |
00:58:42.920
Yeah.
link |
00:58:45.920
Do you miss him?
link |
00:58:48.920
Yes, I do.
link |
00:58:50.920
You know, you get used to missing somebody after 52 years and I didn't really have intelligent conversations with him until later in life.
link |
00:59:06.920
In the last few years, he was sick, which meant he was home a lot and I was actually able to talk to him about different things like music and other things.
link |
00:59:16.920
So I miss that very much.
link |
00:59:19.920
What did you learn about life from your father?
link |
00:59:24.920
What part of him is with you now?
link |
00:59:28.920
He was devoted to music, and when he would create music, it would put him in a different world.
link |
00:59:36.920
Otherwise, he was very shy and if people got together, he tended not to interact with people just because of his shyness.
link |
00:59:48.920
But when he created music, he was like a different person.
link |
00:59:54.920
Do you have that in you, that kind of light that shines?
link |
00:59:59.920
I got involved with technology at like age five.
link |
01:00:05.920
And you fell in love with it in the same way he did with music?
link |
01:00:08.920
Yeah.
link |
01:00:10.920
I remember this actually happened with my grandmother. She had a manual typewriter and she wrote a book, One Life is Not Enough.
link |
01:00:22.920
It's actually a good title for a book I might write.
link |
01:00:26.920
And it was about a school she had created.
link |
01:00:29.920
Well, actually, her mother created it.
link |
01:00:33.920
My mother's mother's mother created the school in 1868 and it was the first school in Europe that provided higher education for girls.
link |
01:00:42.920
It went through 14th grade.
link |
01:00:45.920
If you were a girl and you were lucky enough to get an education at all, it would go through like ninth grade.
link |
01:00:52.920
And many people didn't have any education as a girl.
link |
01:00:56.920
This went through 14th grade.
link |
01:01:00.920
Her mother created it. She took it over.
link |
01:01:03.920
And the book was about the history of the school and her involvement with it.
link |
01:01:12.920
When she presented it to me, I was not so interested in the story of the school.
link |
01:01:19.920
But I was totally amazed with this manual typewriter.
link |
01:01:24.920
I mean, here was something you could put a blank piece of paper into and you could turn it into something that looked like it came from a book.
link |
01:01:32.920
And you could actually type on it and it looked like it came from a book. It was just amazing to me.
link |
01:01:38.920
And I could see actually how it worked.
link |
01:01:41.920
And I was also interested in magic.
link |
01:01:46.920
But in magic, if somebody actually knows how it works, the magic goes away.
link |
01:01:51.920
The magic doesn't stay there if you actually understand how it works.
link |
01:01:55.920
But this was technology. I didn't have that word when I was five or six.
link |
01:02:00.920
And the magic was still there for you?
link |
01:02:02.920
The magic was still there, even if you knew how it worked.
link |
01:02:06.920
So I became totally interested in this and then went around, collected little pieces of mechanical objects from bicycles, from broken radios.
link |
01:02:17.920
I went through the neighborhood. This was an era where you would allow five or six year olds to run through the neighborhood and do this.
link |
01:02:25.920
We don't do that anymore.
link |
01:02:27.920
But I didn't know how to put them together.
link |
01:02:30.920
I said, if I could just figure out how to put these things together, I could solve any problem.
link |
01:02:36.920
And I actually remember talking to these very old girls, I think they were 10,
link |
01:02:44.920
and telling them, if I could just figure this out, we could fly, we could do anything.
link |
01:02:49.920
And they said, well, you have quite an imagination.
link |
01:02:55.920
And then when I was in third grade, so I was like eight, I created like a virtual reality theater where people could come on stage and they could move their arms.
link |
01:03:09.920
And all of it was controlled through one control box. It was all done with mechanical technology.
link |
01:03:16.920
And it was a big hit in my third grade class.
link |
01:03:20.920
And then I went on to do things in junior high school science fairs and high school science fairs.
link |
01:03:27.920
I won the Westinghouse Science Talent Search.
link |
01:03:30.920
So I mean, I became committed to technology when I was five or six years old.
link |
01:03:38.920
You've talked about how you use lucid dreaming to think, to come up with ideas as a source of creativity.
link |
01:03:45.920
Can you maybe talk through that, the process of how to think? You've invented a lot of things.
link |
01:03:53.920
You've come up with and thought through some very interesting ideas.
link |
01:03:57.920
What advice would you give or can you speak to the process of thinking of how to think?
link |
01:04:04.920
How to think creatively?
link |
01:04:06.920
Well, I mean, sometimes I will think through in a dream and try to interpret that.
link |
01:04:11.920
But I think the key issue that I would tell younger people is to put yourself in the position that what you're trying to create already exists.
link |
01:04:29.920
And then you're explaining how it works.
link |
01:04:36.920
Exactly.
link |
01:04:37.920
That's really interesting.
link |
01:04:38.920
You paint a world that you would like to exist.
link |
01:04:42.920
You imagine it exists and reverse engineer it.
link |
01:04:45.920
And then you actually imagine you're giving a speech about how you created this.
link |
01:04:49.920
Well, you'd have to then work backwards as to how you would create it in order to make it work.
link |
01:04:56.920
That's brilliant.
link |
01:04:57.920
And that requires some imagination to some first principles thinking.
link |
01:05:03.920
You have to visualize that world.
link |
01:05:05.920
That's really interesting.
link |
01:05:07.920
And generally when I talk about things we're trying to invent, I would use the present tense as if it already exists.
link |
01:05:15.920
Not just to give myself that confidence, but to give it to everybody else working on it.
link |
01:05:21.920
We just have to kind of do all the steps in order to make it actual.
link |
01:05:30.920
How much of a good idea is about timing?
link |
01:05:34.920
How much is it about your genius versus that its time has come?
link |
01:05:40.920
Timing's very important.
link |
01:05:42.920
I mean, that's really why I got into futurism.
link |
01:05:45.920
I wasn't inherently a futurist; that's not really my goal.
link |
01:05:53.920
That's really to figure out when things are feasible.
link |
01:05:57.920
We see that now with large scale models.
link |
01:06:00.920
The large-scale models like GPT-3 started two years ago.
link |
01:06:08.920
Four years ago it wasn't feasible.
link |
01:06:10.920
In fact, they did create GPT-2, which didn't work.
link |
01:06:17.920
So it required a certain amount of timing having to do with this exponential growth of computing power.
link |
01:06:26.920
So futurism in some sense is a study of timing.
link |
01:06:30.920
Trying to understand how the world will evolve and when will the capacity for certain ideas emerge.
link |
01:06:37.920
And that's become a thing in itself, trying to time things in the future.
link |
01:06:42.920
But really its original purpose was to time my products.
link |
01:06:49.920
I mean, I did OCR in the 1970s because OCR doesn't require a lot of computation.
link |
01:07:00.920
Optical character recognition.
link |
01:07:02.920
So we were able to do that in the 70s and I waited until the 80s to address speech recognition since it requires more computation.
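The product-timing logic here is simple compound growth. As a rough sketch (the two-year doubling period and the decade gap are illustrative assumptions, not figures from the conversation), waiting from the OCR era to the speech-recognition era buys roughly a 32x compute budget:

```python
def compute_growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Factor by which available computation grows over `years`,
    assuming a fixed doubling period (an illustrative assumption)."""
    return 2.0 ** (years / doubling_period_years)

# Waiting roughly a decade between OCR (1970s) and speech recognition (1980s):
print(compute_growth_factor(10))  # 32.0
```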
link |
01:07:13.920
So you were thinking through timing when you're developing those things, has its time come?
link |
01:07:20.920
And that's how you've developed that brain power to start to think in a futurist sense.
link |
01:07:26.920
And how the world will look in 2045, and work backwards to how it gets there.
link |
01:07:33.920
But that has to become a thing in itself because looking at what things will be like in the future reflects such dramatic changes in how humans will live.
link |
01:07:47.920
That was worth communicating also.
link |
01:07:50.920
So you developed that muscle of predicting the future and then applied broadly and started to discuss how it changes the world of technology,
link |
01:08:01.920
how it changes the world of human life on earth.
link |
01:08:05.920
In Danielle, one of your books, you write about someone who has the courage to question assumptions that limit human imagination to solve problems.
link |
01:08:15.920
And you also give advice on how each of us can have this kind of courage.
link |
01:08:22.920
Well, it's good that you picked that quote because I think that symbolizes what Danielle is about.
link |
01:08:27.920
Courage. So how can each of us have that courage to question assumptions?
link |
01:08:32.920
I mean, we see that when people can go beyond the current realm and create something that's new.
link |
01:08:42.920
I mean, take Uber, for example. Before that existed, you never would have thought that it was feasible, and it did require changes in the way people work.
link |
01:08:53.920
Is there practical advice, as you give in the book, about what each of us can do to be a Danielle?
link |
01:09:03.920
Well, she looks at the situation and tries to imagine how she can overcome various obstacles and then she goes for it.
link |
01:09:16.920
And she's a very good communicator so she can communicate these ideas to other people.
link |
01:09:23.920
And there's practical advice of learning to program and recording your life and things of this nature. Become a physicist.
link |
01:09:33.920
So you list a bunch of different suggestions of how to throw yourself into this world.
link |
01:09:38.920
Yeah. I mean, it's kind of an idea how young people can actually change the world by learning all of these different skills.
link |
01:09:51.920
And at the core of that is the belief that you can change the world, that your mind, your body can change the world.
link |
01:09:59.920
Yeah. That's right.
link |
01:10:01.920
And not letting anyone else tell you otherwise.
link |
01:10:05.920
That's very good, exactly.
link |
01:10:07.920
Going back to the story you told about your dad and having a conversation with him, we were talking about uploading your mind to the computer.
link |
01:10:20.920
Do you think we'll have a future with something you could call an afterlife?
link |
01:10:24.920
We'll have avatars that mimic our behavior, our appearance, all that kind of stuff, increasingly better and better.
link |
01:10:32.920
Even when those people are perhaps no longer with us.
link |
01:10:35.920
Yes. I mean, we need some information about them.
link |
01:10:41.920
I mean, think about my father. I have what he wrote. Now, he didn't have a word processor, so he didn't actually write that much.
link |
01:10:52.920
And our memories of him aren't perfect. So how do you even know if you've created something that's satisfactory?
link |
01:11:00.920
Now, you could do a Fredric Kurzweil Turing test. It seems like Fredric Kurzweil to me.
link |
01:11:06.920
But the people who remember him, like me, don't have a perfect memory.
link |
01:11:13.920
Is there such a thing as a perfect memory? Maybe the whole point is for him to make you feel a certain way.
link |
01:11:24.920
Yeah. Well, I think that would be the goal.
link |
01:11:27.920
And that's the connection we have with loved ones. It's not really based on very strict definition of truth. It's more about the experiences we share.
link |
01:11:36.920
And they get morphed through memory. But ultimately, they make us smile.
link |
01:11:40.920
I think we definitely can do that. And that would be very worthwhile.
link |
01:11:45.920
So do you think we'll have a world of replicants, of copies? There'll be a bunch of Ray Kurzweils. Like, I could hang out with one.
link |
01:11:54.920
I can download it for five bucks and have a best friend, Ray.
link |
01:12:00.920
And you, the original copy, wouldn't even know about it.
link |
01:12:06.920
Do you think that world is, first of all, do you think that world is feasible?
link |
01:12:12.920
And do you think there's ethical challenges there?
link |
01:12:15.920
How would you feel about me hanging out with Ray Kurzweil and you not knowing about it?
link |
01:12:21.920
It doesn't strike me as a problem.
link |
01:12:27.920
Which you? The original?
link |
01:12:29.920
Would that cause a problem for you?
link |
01:12:33.920
No, I would really very much enjoy it.
link |
01:12:36.920
No, not just hanging out with me. What if somebody hangs out with you, a replicant of you?
link |
01:12:43.920
Well, at first it sounds exciting, but then what if they start doing better than me and take over my friend group?
link |
01:12:54.920
Because they may be an imperfect copy, or they may be more social, these kinds of things.
link |
01:13:04.920
And then I become like the old version that's not nearly as exciting.
link |
01:13:09.920
Maybe they're a copy of the best version of me on a good day.
link |
01:13:13.920
But if you hang out with a replicant of me and that turned out to be successful, I'd feel proud of that person because it's based on me.
link |
01:13:24.920
But it is a kind of death of this version of you.
link |
01:13:31.920
Well, not necessarily. I mean, you can still be alive, right?
link |
01:13:35.920
Okay, so it's like having kids and you're proud that they've done even more than you were able to do.
link |
01:13:42.920
Yeah, exactly.
link |
01:13:47.920
It does bring up new issues, but it seems like an opportunity.
link |
01:13:54.920
Well, that replicant should probably have the same rights as you do.
link |
01:13:59.920
But that gets into a whole issue because when a replicant occurs, they're not necessarily going to have your rights.
link |
01:14:09.920
And if a replicant occurs to somebody who's already dead, do they have all the obligations that the original person had?
link |
01:14:20.920
Do they have all the agreements that they had?
link |
01:14:25.920
I think you're going to have to have laws that say, yes.
link |
01:14:30.920
If you want to create a replicant, they have to have all the same rights as humans.
link |
01:14:35.920
Well, you don't know. Somebody can create a replicant and say, well, it's a replicant, but I didn't bother getting their rights.
link |
01:14:41.920
But that would be illegal. If you do that, you have to do that in the black market.
link |
01:14:47.920
If you want to get an official replicant.
link |
01:14:49.920
It's not so easy. Suppose you create multiple replicants.
link |
01:14:55.920
The original rights may be for one person and not for a whole group of people.
link |
01:15:04.920
Sure.
link |
01:15:08.920
So there has to be at least one.
link |
01:15:10.920
And then all the other ones kind of share the rights.
link |
01:15:14.920
I don't think that's very difficult for us humans to conceive.
link |
01:15:20.920
You create a replicant that has certain... I mean, I've talked to people about this, including my wife, who would like to get back her father.
link |
01:15:31.920
And she doesn't worry about who has rights to what.
link |
01:15:37.920
She would have somebody that she could visit with and might give her some satisfaction.
link |
01:15:43.920
And she wouldn't care about any of these other rights.
link |
01:15:48.920
What does your wife think about multiple Rays as well?
link |
01:15:52.920
Have you had that discussion?
link |
01:15:54.920
I haven't addressed that with her.
link |
01:15:57.920
I think ultimately that's an important question, loved ones, how they feel about...
link |
01:16:02.920
There's something about love.
link |
01:16:04.920
Well, that's the key thing, right? If the loved ones reject it, it's not going to work very well.
link |
01:16:11.920
So the loved ones really are the key determinant of whether or not this works.
link |
01:16:18.920
But there's also ethical rules.
link |
01:16:21.920
We have to contend with the idea and we have to contend with that idea with AI.
link |
01:16:27.920
But what's going to motivate it is...
link |
01:16:29.920
I mean, I talk to people who really miss people who are gone and they would love to get something back, even if it isn't perfect.
link |
01:16:39.920
And that's what's going to motivate this.
link |
01:16:46.920
And that person lives on in some form.
link |
01:16:50.920
And the more data we have, the more we're able to reconstruct that person and allow them to live on.
link |
01:16:58.920
And eventually as we go forward, we're going to have more and more of this data
link |
01:17:02.920
because we're going to have nanobots that are inside our neocortex and we're going to collect a lot of data.
link |
01:17:10.920
In fact, anything that's data is always collected.
link |
01:17:15.920
There is something a little bit sad, which is becoming...
link |
01:17:20.920
Or maybe it's hopeful, which is more and more common these days,
link |
01:17:26.920
which is that when a person passes away, you'll have their Twitter account and the last tweet they tweeted.
link |
01:17:34.920
And you can recreate them now with large language models and so on.
link |
01:17:38.920
You can create somebody that's just like them and can actually continue to communicate.
link |
01:17:44.920
I think that's really exciting because I think in some sense, like if I were to die today,
link |
01:17:51.920
in some sense I would continue on if I continued tweeting.
link |
01:17:55.920
I tweet, therefore I am.
link |
01:17:58.920
Yeah, well, I mean, that's one of the advantages of a replicant.
link |
01:18:03.920
They can recreate the communications of that person.
link |
01:18:09.920
Do you hope, do you think, do you hope humans will become a multiplanetary species?
link |
01:18:16.920
You've talked about the phases, the six epochs, and one of them is reaching out into the stars in part.
link |
01:18:23.920
Yes, but the kind of attempts we're making now to go to other planetary objects don't excite me that much
link |
01:18:35.920
because it's not really advancing anything.
link |
01:18:38.920
It's not efficient enough?
link |
01:18:40.920
Yeah, we're also sending out human beings, which is a very inefficient way to explore these other objects.
link |
01:18:52.920
What I'm really talking about in the sixth epoch, the universe wakes up.
link |
01:18:59.920
It's where we can spread our superintelligence throughout the universe.
link |
01:19:04.920
And that doesn't mean sending very soft, squishy creatures like humans.
link |
01:19:10.920
The universe wakes up.
link |
01:19:13.920
We would send intelligent masses of nanobots which can then go out and colonize these other parts of the universe.
link |
01:19:28.920
Do you think there's intelligent alien civilizations out there that our bots might meet?
link |
01:19:34.920
My hunch is no.
link |
01:19:38.920
Most people say yes, absolutely.
link |
01:19:41.920
It's too big.
link |
01:19:43.920
And they'll cite the Drake equation.
link |
01:19:46.920
And I think in The Singularity Is Near, I have two analyses of the Drake equation, both with very reasonable assumptions.
link |
01:19:59.920
And one gives you thousands of advanced civilizations in each galaxy.
link |
01:20:06.920
And another one gives you one civilization, and we know of one.
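The Drake equation itself is just a product of factors, so the two-analyses point is easy to see. A minimal sketch with made-up parameter values, chosen only to show how "very reasonable assumptions" can yield thousands of civilizations per galaxy or roughly one (these numbers are illustrative, not Kurzweil's own):

```python
from math import prod

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: N = R* * fp * ne * fl * fi * fc * L, the expected
    number of communicating civilizations in a galaxy."""
    return prod([R_star, f_p, n_e, f_l, f_i, f_c, L])

# Optimistic assumptions: thousands of civilizations per galaxy.
optimistic = drake(R_star=7, f_p=0.5, n_e=2, f_l=0.5, f_i=0.5, f_c=0.2, L=10_000)
# Pessimistic assumptions: about one civilization, and we know of one.
pessimistic = drake(R_star=1, f_p=0.2, n_e=1, f_l=0.1, f_i=0.05, f_c=0.1, L=10_000)
print(round(optimistic))   # 3500
print(round(pessimistic))  # 1
```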
link |
01:20:13.920
A lot of the analyses are forgetting the exponential growth of computation
link |
01:20:20.920
because we've gone from where the fastest way I could send a message to somebody was with a pony,
link |
01:20:29.920
which was what, like a century and a half ago, to the advanced civilization we have today.
link |
01:20:37.920
And if you accept what I've said, go forward a few decades.
link |
01:20:42.920
You can have an absolutely fantastic amount of civilization compared to a pony, and that's in a couple hundred years.
link |
01:20:49.920
The speed and the scale of information transfer is growing exponentially in a blink of an eye.
link |
01:20:57.920
Now think about these other civilizations.
link |
01:21:01.920
They're going to be spread out across cosmic time.
link |
01:21:05.920
So if something is like ahead of us or behind us, it could be ahead of us or behind us by maybe millions of years,
link |
01:21:14.920
which isn't that much.
link |
01:21:16.920
I mean, the universe is billions of years old, 14 billion or something.
link |
01:21:23.920
So take even a thousand years: if two or three hundred years is enough to go from a pony to a fantastic amount of civilization,
link |
01:21:33.920
we would see that.
link |
01:21:35.920
So of other civilizations that have occurred, some might be behind us, but some might be ahead of us.
link |
01:21:43.920
If they're ahead of us, they're ahead of us by thousands, millions of years, and they would be so far beyond us.
link |
01:21:51.920
They would be doing galaxy-wide engineering, but we don't see anything doing galaxy-wide engineering.
link |
01:21:59.920
So either they don't exist or this very universe is a construction of an alien species.
link |
01:22:07.920
We're living inside a video game.
link |
01:22:11.920
Well, that's another explanation, that you've got some teenage kid in another civilization.
link |
01:22:18.920
Do you find compelling the simulation hypothesis as a thought experiment that we're living in a simulation?
link |
01:22:24.920
The universe is computational.
link |
01:22:28.920
So we are an example in a computational world.
link |
01:22:34.920
Therefore, it is a simulation.
link |
01:22:38.920
It doesn't necessarily mean an experiment by some high school kid in another world,
link |
01:22:44.920
but it nonetheless is taking place in a computational world and everything that's going on is basically a form of computation.
link |
01:22:57.920
So you really have to define what you mean by this whole world being a simulation.
link |
01:23:05.920
Well, then it's the teenager that makes the video game.
link |
01:23:11.920
You know, us humans with our current limited cognitive capability have strived to understand ourselves and we have created religions.
link |
01:23:23.920
We think of God, whatever that is, do you think God exists?
link |
01:23:31.920
And if so, who is God?
link |
01:23:34.920
I alluded to this before. We started out with lots of particles going around, and there was nothing that represented love and creativity.
link |
01:23:52.920
And somehow we've gotten into a world where love actually exists and that has to do actually with consciousness because you can't have love without consciousness.
link |
01:24:02.920
So to me, that's God: the fact that we have love, where you can be devoted to someone else and really feel that love.
link |
01:24:15.920
That's God.
link |
01:24:18.920
And if you look at the Old Testament, it was actually created by several different authors.
link |
01:24:28.920
And I think they've identified three of them.
link |
01:24:33.920
One of them dealt with God as a person that you can make deals with, and he gets angry and he wreaks vengeance on various people.
link |
01:24:47.920
But two of them actually talk about God as a symbol of love and peace and harmony and so forth.
link |
01:24:57.920
That's how they describe God.
link |
01:25:00.920
So that's my view of God, not as a person in the sky that you can make deals with.
link |
01:25:08.920
It's whatever the magic that goes from basic elements to things like consciousness and love.
link |
01:25:15.920
One of the things I find extremely beautiful and powerful is cellular automata, which you also touch on.
link |
01:25:23.920
Do you think whatever the heck happens in cellular automata where interesting, complicated objects emerge, God is in there too?
link |
01:25:32.920
The emergence of love in this seemingly primitive universe?
link |
01:25:37.920
Well, that's the goal of creating a replicant is that they would love you and you would love them.
link |
01:25:47.920
There wouldn't be much point of doing it if that didn't happen.
link |
01:25:51.920
I guess what I'm saying about cellular automata is that they're primitive building blocks, and they somehow create beautiful things.
link |
01:26:02.920
Is there some deep truth to that about how our universe works?
link |
01:26:07.920
Is it the emergence, that from simple rules beautiful complex objects can emerge?
link |
01:26:12.920
Is that the thing that made us as we went through all the six epochs of reality?
link |
01:26:20.920
That's a good way to look at it.
link |
01:26:22.920
It gives some point to the whole value of having a universe.
link |
01:26:30.920
Do you think about your own mortality? Are you afraid of it?
link |
01:26:35.920
Yes, but I keep going back to my idea of being able to extend human life quickly enough, in advance of our getting there: longevity escape velocity, which we're not quite at yet.
link |
01:26:57.920
But I think we're actually pretty close, particularly with, for example, doing simulated biology.
link |
01:27:05.920
I think we can probably get there within, say, by the end of this decade.
link |
01:27:10.920
And that's my goal.
link |
01:27:12.920
Do you hope to achieve longevity escape velocity? Do you hope to achieve immortality?
link |
01:27:19.920
Well, immortality is hard to say. I can't really come on your program saying, I've done it.
link |
01:27:25.920
I've achieved immortality because it's never forever.
link |
01:27:31.920
A long time. A long time of living well.
link |
01:27:34.920
But we'd like to actually advance human life expectancy, advance my life expectancy, by more than a year every year.
link |
01:27:43.920
And I think we can get there by the end of this decade.
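Longevity escape velocity has a crisp arithmetic meaning: if each calendar year of medical progress adds back more than the one year of life just spent, remaining life expectancy never runs down. A minimal sketch with assumed numbers (the per-year gains are illustrative, not claims from the conversation):

```python
def remaining_expectancy(initial_years: float, gain_per_year: float, horizon: int) -> list:
    """Track remaining life expectancy over `horizon` calendar years.
    Each year costs 1 year lived but adds `gain_per_year` years of
    expectancy from medical progress (assumed constant)."""
    remaining = initial_years
    trajectory = []
    for _ in range(horizon):
        remaining += gain_per_year - 1
        trajectory.append(remaining)
    return trajectory

# Below escape velocity (0.5 years gained per year): expectancy declines.
print(remaining_expectancy(10, 0.5, 5)[-1])   # 7.5
# Above escape velocity (1.25 years gained per year): it grows without bound.
print(remaining_expectancy(10, 1.25, 5)[-1])  # 11.25
```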
link |
01:27:47.920
How do you think we'd do it?
link |
01:27:49.920
Practical things. In Transcend: Nine Steps to Living Well Forever, your book, you describe just that.
link |
01:27:57.920
There's practical things like health, exercise, all those things.
link |
01:28:01.920
Yeah, I mean, we live in a body that doesn't last forever.
link |
01:28:07.920
There's no reason why it can't, though.
link |
01:28:10.920
And we're discovering things, I think, that will extend it.
link |
01:28:16.920
But you do have to deal with, I mean, I've got various issues.
link |
01:28:22.920
I went to Mexico 40 years ago and developed a salmonella infection.
link |
01:28:28.920
That caused pancreatitis, which gave me a strange form of diabetes.
link |
01:28:36.920
It's not type one diabetes, because that's an autoimmune disorder that destroys your pancreas.
link |
01:28:44.920
I don't have that.
link |
01:28:46.920
But it's also not type two diabetes, because with type two diabetes your pancreas works fine,
link |
01:28:51.920
but your cells don't absorb the insulin well.
link |
01:28:55.920
I don't have that either.
link |
01:28:57.920
The pancreatitis I had partially damaged my pancreas, but it was a one time thing.
link |
01:29:05.920
It didn't continue.
link |
01:29:08.920
And I've learned now how to control it.
link |
01:29:11.920
So that's just something that I had to do in order to continue to exist.
link |
01:29:18.920
Given your particular biological system, you had to figure out a few hacks
link |
01:29:22.920
and the idea is that science would be able to do that much better, actually.
link |
01:29:26.920
Yeah.
link |
01:29:27.920
So I mean, I do spend a lot of time just tinkering with my own body to keep it going.
link |
01:29:33.920
So I do think I'll last till the end of this decade,
link |
01:29:37.920
and I think we'll achieve longevity escape velocity.
link |
01:29:40.920
I think that we'll start with people who are very diligent about this.
link |
01:29:45.920
Eventually it'll become sort of routine that people will be able to do it.
link |
01:29:50.920
So if you're talking about kids today or even people in their 20s or 30s,
link |
01:29:55.920
it's really not a very serious problem.
link |
01:30:00.920
I have had some discussions with relatives who were like almost 100
link |
01:30:07.920
and saying, well, we're working on it as quickly as possible,
link |
01:30:11.920
but I don't know if that's going to work.
link |
01:30:15.920
Is there a case, this is a difficult question,
link |
01:30:17.920
but is there a case to be made against living forever
link |
01:30:22.920
that a finite life, that mortality is a feature, not a bug,
link |
01:30:28.920
that living a shorter life, that dying, makes ice cream taste delicious,
link |
01:30:36.920
makes life intensely beautiful?
link |
01:30:39.920
Most people believe that, except when you present the death of anybody
link |
01:30:47.920
they care about or love, they find that extremely depressing.
link |
01:30:54.920
And I know people who feel that way 20, 30, 40 years later,
link |
01:31:01.920
they still want them back.
link |
01:31:05.920
So I mean, death is not something to celebrate,
link |
01:31:10.920
but we've lived in a world where people just accept this,
link |
01:31:15.920
life is short, you see it all the time on TV, life's short,
link |
01:31:18.920
you have to take advantage of it,
link |
01:31:20.920
and nobody accepts the fact that you can actually go beyond normal lifetimes,
link |
01:31:26.920
but any time we talk about death or death of a person,
link |
01:31:30.920
even one death is a terrible tragedy.
link |
01:31:34.920
If you have somebody that lives to 100 years old,
link |
01:31:38.920
we still love them in return,
link |
01:31:42.920
and there's no limitation to that.
link |
01:31:46.920
In fact, these kinds of trends are going to provide greater and greater
link |
01:31:52.920
opportunity for everybody, even if we have more people.
link |
01:31:56.920
So let me ask about an alien species
link |
01:31:59.920
or a super intelligent AI 500 years from now that will look back.
link |
01:32:04.920
And remember Ray Kurzweil, version zero,
link |
01:32:09.920
before the replicants spread.
link |
01:32:12.920
How do you hope they remember you?
link |
01:32:15.920
In a Hitchhiker's Guide to the Galaxy summary of Ray Kurzweil,
link |
01:32:20.920
what do you hope your legacy is?
link |
01:32:22.920
Well, I mean, I do hope to be around, so that's...
link |
01:32:25.920
Some version of you, yes.
link |
01:32:27.920
So...
link |
01:32:29.920
Do you think you'll be the same person around?
link |
01:32:31.920
I mean, am I the same person I was when I was 20 or 10?
link |
01:32:36.920
You would be the same person in that same way,
link |
01:32:39.920
but yes, we're different.
link |
01:32:41.920
We're different.
link |
01:32:43.920
All we have of that, all you have of that person is your memories,
link |
01:32:47.920
which are probably distorted in some way.
link |
01:32:52.920
Maybe you just remember the good parts,
link |
01:32:55.920
depending on your psyche.
link |
01:32:57.920
You might focus on the bad parts,
link |
01:32:59.920
you might focus on the good parts.
link |
01:33:02.920
Right, but...
link |
01:33:04.920
I mean, I'd still have a relationship to the way I was when I was younger.
link |
01:33:11.920
How will you and the other super intelligent AIs remember you of today,
link |
01:33:15.920
from 500 years ago?
link |
01:33:18.920
What do you hope this version of you will be remembered for,
link |
01:33:22.920
before the singularity?
link |
01:33:24.920
Well, I think it's expressed well in my books,
link |
01:33:27.920
trying to create some new realities that people will accept.
link |
01:33:31.920
I mean, that's something that gives me great pleasure,
link |
01:33:38.920
and greater insight into what makes humans valuable.
link |
01:33:48.920
I'm not the only person who's attempted to comment on that, but...
link |
01:33:56.920
And optimism that permeates your work.
link |
01:33:59.920
Optimism about the future.
link |
01:34:02.920
Ultimately, that optimism paves the way for building a better future.
link |
01:34:06.920
Yeah, I agree with that.
link |
01:34:09.920
So you asked your dad about the meaning of life,
link |
01:34:14.920
and he said, love. Let me ask you the same question.
link |
01:34:18.920
What's the meaning of life?
link |
01:34:20.920
Why are we here?
link |
01:34:22.920
This beautiful journey we're on in epoch four,
link |
01:34:27.920
reaching for epoch five of this evolution of information processing.
link |
01:34:33.920
Why?
link |
01:34:34.920
Well, I think I'd give the same answers as my father.
link |
01:34:41.920
Because if there was no love, and we didn't care about anybody,
link |
01:34:45.920
there'd be no point existing.
link |
01:34:48.920
Love is the meaning of life.
link |
01:34:50.920
The AI version of your dad had a good point.
link |
01:34:54.920
Well, I think that's a beautiful way to end it.
link |
01:34:57.920
Ray, thank you for your work. Thank you for being who you are.
link |
01:35:00.920
Thank you for dreaming about a beautiful future and creating it along the way.
link |
01:35:05.920
And thank you so much for spending a really valuable time with me today.
link |
01:35:10.920
This was awesome.
link |
01:35:11.920
Well, this is my pleasure, and you have some great insights,
link |
01:35:15.920
both into me and into humanity as well.
link |
01:35:18.920
So I appreciate that.
link |
01:35:20.920
Thanks for listening to this conversation with Ray Kurzweil.
link |
01:35:23.920
To support this podcast, please check out our sponsors in the description.
link |
01:35:27.920
And now, let me leave you with some words from Isaac Asimov.
link |
01:35:31.920
It is change, continuous change, inevitable change,
link |
01:35:36.920
that is the dominant factor in society today.
link |
01:35:40.920
No sensible decision could be made any longer without taking into account
link |
01:35:44.920
not only the world as it is, but the world as it will be.
link |
01:35:48.920
This, in turn, means that our statesmen, our businessmen, our everyman
link |
01:35:54.920
must take on a science fictional way of thinking.
link |
01:35:58.920
Thank you for listening, and hope to see you next time.