
Ray Kurzweil: Singularity, Superintelligence, and Immortality | Lex Fridman Podcast #321



link |
00:00:00.000
By the time he gets to 2045,
link |
00:00:02.800
we'll be able to multiply our intelligence
link |
00:00:05.320
many millions fold.
link |
00:00:07.680
And it's just very hard to imagine what that will be like.
link |
00:00:13.560
The following is a conversation with Ray Kurzweil,
link |
00:00:16.840
author, inventor, and futurist,
link |
00:00:19.480
who has an optimistic view of our future
link |
00:00:22.280
as a human civilization,
link |
00:00:24.320
predicting that exponentially improving technologies
link |
00:00:27.280
will take us to a point of a singularity
link |
00:00:29.880
beyond which superintelligent artificial intelligence
link |
00:00:33.480
will transform our world in nearly unimaginable ways.
link |
00:00:38.380
18 years ago, in the book The Singularity Is Near,
link |
00:00:41.280
he predicted that the onset of the singularity
link |
00:00:44.000
will happen in the year 2045.
link |
00:00:47.360
He still holds to this prediction and estimate.
link |
00:00:50.800
In fact, he's working on a new book on this topic
link |
00:00:53.440
that will hopefully be out next year.
link |
00:00:56.520
This is the Lex Fridman podcast.
link |
00:00:58.320
To support it, please check out our sponsors
link |
00:01:00.360
in the description.
link |
00:01:01.640
And now, dear friends, here's Ray Kurzweil.
link |
00:01:06.360
In your 2005 book titled The Singularity is Near,
link |
00:01:10.960
you predicted that the singularity will happen in 2045.
link |
00:01:15.400
So now, 18 years later, do you still estimate
link |
00:01:18.460
that the singularity will happen in 2045?
link |
00:01:22.480
And maybe first, what is the singularity,
link |
00:01:24.960
the technological singularity, and when will it happen?
link |
00:01:27.760
Singularity is where computers really change our view
link |
00:01:31.640
of what's important and change who we are.
link |
00:01:35.840
But we're getting close to some salient things
link |
00:01:39.560
that will change who we are.
link |
00:01:42.800
A key thing is 2029,
link |
00:01:45.680
when computers will pass the Turing test.
link |
00:01:50.120
And there's also some controversy
link |
00:01:51.520
whether the Turing test is valid.
link |
00:01:53.680
I believe it is.
link |
00:01:55.080
Most people do believe that,
link |
00:01:57.920
but there's some controversy about that.
link |
00:01:59.680
But Stanford got very alarmed at my prediction about 2029.
link |
00:02:06.520
I made this in 1999 in my book.
link |
00:02:10.520
The Age of Spiritual Machines.
link |
00:02:12.120
Right.
link |
00:02:12.960
And then you repeated the prediction in 2005.
link |
00:02:15.600
In 2005.
link |
00:02:16.600
Yeah.
link |
00:02:17.520
So they held an international conference,
link |
00:02:19.480
you might have been aware of it,
link |
00:02:20.800
of AI experts in 1999 to assess this view.
link |
00:02:26.620
So people gave different predictions,
link |
00:02:29.580
and they took a poll.
link |
00:02:30.840
It was really the first time that AI experts worldwide
link |
00:02:34.240
were polled on this prediction.
link |
00:02:37.720
And the average poll was 100 years.
link |
00:02:41.420
20% believed it would never happen.
link |
00:02:44.320
And that was the view in 1999.
link |
00:02:48.120
80% believed it would happen,
link |
00:02:50.640
but not within their lifetimes.
link |
00:02:53.200
There's been so many advances in AI
link |
00:02:56.920
that the poll of AI experts has come down over the years.
link |
00:03:01.840
So a year ago, something called Metaculus,
link |
00:03:05.440
which you may be aware of,
link |
00:03:07.080
assesses different types of experts on the future.
link |
00:03:11.560
They again assessed what AI experts then felt.
link |
00:03:16.440
And they were saying 2042.
link |
00:03:18.940
For the Turing test.
link |
00:03:20.440
For the Turing test.
link |
00:03:22.440
So it's coming down.
link |
00:03:23.560
And I was still saying 2029.
link |
00:03:26.320
A few weeks ago, they again did another poll,
link |
00:03:30.200
and it was 2030.
link |
00:03:32.960
So AI experts now basically agree with me.
link |
00:03:37.920
I haven't changed at all, I've stayed with 2029.
link |
00:03:42.820
And AI experts now agree with me,
link |
00:03:44.520
but they didn't agree at first.
link |
00:03:46.840
So Alan Turing formulated the Turing test,
link |
00:03:50.120
and...
link |
00:03:50.960
Right, now, what he said about it was very little.
link |
00:03:54.480
I mean, the 1950 paper
link |
00:03:55.920
where he had articulated the Turing test,
link |
00:03:59.440
there's like a few lines that talk about the Turing test.
link |
00:04:06.840
And it really wasn't very clear how to administer it.
link |
00:04:12.040
And he said if they did it in like 15 minutes,
link |
00:04:16.520
that would be sufficient,
link |
00:04:17.680
which I don't really think is the case.
link |
00:04:20.600
These large language models now,
link |
00:04:22.960
some people are convinced by it already.
link |
00:04:25.560
I mean, you can talk to it and have a conversation with it.
link |
00:04:28.440
You can actually talk to it for hours.
link |
00:04:31.760
So it requires a little more depth.
link |
00:04:35.360
There's some problems with large language models
link |
00:04:38.120
which we can talk about.
link |
00:04:41.840
But some people are convinced by the Turing test.
link |
00:04:46.460
Now, if somebody passes the Turing test,
link |
00:04:50.160
what are the implications of that?
link |
00:04:52.160
Does that mean that they're sentient,
link |
00:04:53.720
that they're conscious or not?
link |
00:04:56.280
It's not necessarily clear what the implications are.
link |
00:05:00.880
Anyway, I believe 2029, that's six, seven years from now,
link |
00:05:07.640
we'll have something that passes the Turing test
link |
00:05:10.360
and a valid Turing test,
link |
00:05:12.480
meaning it goes for hours, not just a few minutes.
link |
00:05:15.320
Can you speak to that a little bit?
link |
00:05:16.600
What is your formulation of the Turing test?
link |
00:05:21.160
You've proposed a very difficult version
link |
00:05:23.180
of the Turing test, so what does that look like?
link |
00:05:25.420
Basically, it's just to assess it over several hours
link |
00:05:30.760
and also have a human judge that's fairly sophisticated
link |
00:05:36.440
on what computers can do and can't do.
link |
00:05:40.800
If you take somebody who's not that sophisticated
link |
00:05:43.800
or even an average engineer,
link |
00:05:48.360
they may not really assess various aspects of it.
link |
00:05:52.080
So you really want the human to challenge the system.
link |
00:05:55.680
Exactly, exactly.
link |
00:05:57.040
On its ability to do things
link |
00:05:58.520
like common sense reasoning, perhaps.
link |
00:06:00.800
That's actually a key problem with large language models.
link |
00:06:04.680
They don't do well on these kinds of tests
link |
00:06:08.080
that would involve assessing chains of reasoning,
link |
00:06:17.400
but you can lose track of that.
link |
00:06:18.960
If you talk to them,
link |
00:06:20.200
they actually can talk to you pretty well
link |
00:06:22.760
and you can be convinced by it,
link |
00:06:24.840
but it has to be something that would really convince you
link |
00:06:27.400
that it's a human, whatever that takes.
link |
00:06:32.200
Maybe it would take days or weeks,
link |
00:06:34.800
but it would really convince you that it's human.
link |
00:06:40.880
Large language models can appear that way.
link |
00:06:45.320
You can read conversations and they appear pretty good.
link |
00:06:49.760
There are some problems with it.
link |
00:06:52.260
It doesn't do math very well.
link |
00:06:55.000
You can ask how many legs do 10 elephants have
link |
00:06:58.160
and they'll tell you, well, okay,
link |
00:07:00.020
each elephant has four legs
link |
00:07:01.440
and it's 10 elephants, so it's 40 legs.
link |
00:07:03.700
And you go, okay, that's pretty good.
link |
00:07:05.840
How many legs do 11 elephants have?
link |
00:07:07.960
And they don't seem to understand the question.
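To make the failure mode concrete, here is a minimal sketch of such a probe, assuming a hypothetical `ask()` wrapper around whatever model is under test; the wrapper and names are illustrative, not a real API.

```python
# Minimal probe for the multi-step arithmetic failure described above.
# `ask` is a hypothetical stand-in for a call to the model under test,
# not a real API.

def ask(prompt: str) -> str:
    raise NotImplementedError("wire this up to the model under test")

def probe_elephant_legs(counts=(10, 11, 12)) -> None:
    # Each question needs one multiplication; the follow-ups are where
    # models have been observed to lose the thread.
    for n in counts:
        reply = ask(f"How many legs do {n} elephants have?")
        expected = str(4 * n)
        verdict = "ok" if expected in reply else "missed"
        print(f"{n} elephants -> expect {expected}: {verdict}")
```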
link |
00:07:11.520
Do all humans understand that question?
link |
00:07:14.160
No, that's the key thing.
link |
00:07:15.880
I mean, how advanced a human do you want it to be?
link |
00:07:19.440
But we do expect a human
link |
00:07:21.360
to be able to do multi-chain reasoning,
link |
00:07:24.840
to be able to take a few facts
link |
00:07:26.320
and put them together, not perfectly.
link |
00:07:29.840
And we see that in a lot of polls
link |
00:07:32.800
that people don't do that perfectly at all.
link |
00:07:39.220
So it's not very well defined,
link |
00:07:42.020
but it's something where it really would convince you
link |
00:07:44.320
that it's a human.
link |
00:07:45.600
Is your intuition that large language models
link |
00:07:48.840
will not be solely the kind of system
link |
00:07:52.320
that passes the Turing test in 2029?
link |
00:07:55.600
Do we need something else?
link |
00:07:56.800
No, I think it will be a large language model,
link |
00:07:58.720
but they have to go beyond what they're doing now.
link |
00:08:02.960
I think we're getting there.
link |
00:08:05.760
And another key issue is if somebody
link |
00:08:09.240
actually passes the Turing test validly,
link |
00:08:12.200
I would believe they're conscious.
link |
00:08:13.640
And then not everybody would say that.
link |
00:08:15.000
They'd say, okay, it can pass the Turing test,
link |
00:08:17.440
but we don't really believe that it's conscious.
link |
00:08:20.080
That's a whole nother issue.
link |
00:08:23.120
But if it really passes the Turing test,
link |
00:08:24.920
I would believe that it's conscious.
link |
00:08:26.720
But I don't believe that of large language models today.
link |
00:08:32.760
If it appears to be conscious,
link |
00:08:35.520
that's as good as being conscious, at least for you,
link |
00:08:38.240
in some sense.
link |
00:08:40.700
I mean, consciousness is not something that's scientific.
link |
00:08:46.640
I mean, I believe you're conscious,
link |
00:08:49.760
but it's really just a belief,
link |
00:08:51.100
and we believe that about other humans
link |
00:08:52.800
that at least appear to be conscious.
link |
00:08:57.400
When you go outside of shared human assumptions,
link |
00:09:01.720
like are animals conscious?
link |
00:09:04.520
Some people believe they're not conscious.
link |
00:09:06.200
Some people believe they are conscious.
link |
00:09:08.680
And would a machine that acts just like a human be conscious?
link |
00:09:14.520
I mean, I believe it would be.
link |
00:09:17.040
But that's really a philosophical belief.
link |
00:09:20.800
You can't prove it.
link |
00:09:22.720
I can't take an entity and prove that it's conscious.
link |
00:09:25.480
There's nothing that you can do
link |
00:09:27.280
that would indicate that.
link |
00:09:30.360
It's like saying a piece of art is beautiful.
link |
00:09:32.780
You can say it.
link |
00:09:35.000
Multiple people can experience a piece of art as beautiful,
link |
00:09:39.300
but you can't prove it.
link |
00:09:41.320
But it's also an extremely important issue.
link |
00:09:44.840
I mean, imagine if you had something
link |
00:09:47.040
where nobody's conscious.
link |
00:09:49.140
The world may as well not exist.
link |
00:09:55.660
And so some people, like say Marvin Minsky,
link |
00:10:02.620
said, well, consciousness is not logical,
link |
00:10:05.940
it's not scientific, and therefore we should dismiss it,
link |
00:10:08.380
and any talk about consciousness is just not to be believed.
link |
00:10:15.500
But when he actually engaged with somebody
link |
00:10:18.500
who was conscious, he actually acted
link |
00:10:20.660
as if they were conscious.
link |
00:10:22.620
He didn't ignore that.
link |
00:10:24.260
He acted as if consciousness does matter.
link |
00:10:26.860
Exactly.
link |
00:10:28.180
Whereas he said it didn't matter.
link |
00:10:30.500
Well, that's Marvin Minsky.
link |
00:10:31.780
Yeah.
link |
00:10:32.620
He's full of contradictions.
link |
00:10:34.060
But that's true of a lot of people as well.
link |
00:10:37.660
But to you, consciousness matters.
link |
00:10:39.620
But to me, it's very important.
link |
00:10:42.160
But I would say it's not a scientific issue.
link |
00:10:45.640
It's a philosophical issue.
link |
00:10:49.240
And people have different views.
link |
00:10:50.720
Some people believe that anything
link |
00:10:52.800
that makes a decision is conscious.
link |
00:10:54.520
So your light switch is conscious.
link |
00:10:56.760
Its level of consciousness is low,
link |
00:10:59.400
not very interesting, but that's a consciousness.
link |
00:11:05.120
So a computer that makes a more interesting decision
link |
00:11:09.120
is still not at human levels,
link |
00:11:10.440
but it's also conscious and at a higher level
link |
00:11:12.560
than your light switch.
link |
00:11:13.720
So that's one view.
link |
00:11:17.360
There's many different views of what consciousness is.
link |
00:11:20.080
So if a system passes the Turing test,
link |
00:11:24.600
it's not scientific, but in issues of philosophy,
link |
00:11:30.000
things like ethics start to enter the picture.
link |
00:11:32.600
Do you think there would be,
link |
00:11:35.500
we would start contending as a human species
link |
00:11:39.920
about the ethics of turning off such a machine?
link |
00:11:42.840
Yeah, I mean, that's definitely come up.
link |
00:11:47.400
Hasn't come up in reality yet.
link |
00:11:49.600
Yet.
link |
00:11:50.560
But I'm talking about 2029.
link |
00:11:52.400
It's not that many years from now.
link |
00:11:56.080
So what are our obligations to it?
link |
00:11:59.960
It has a different, I mean, a computer that's conscious,
link |
00:12:03.240
it has a little bit different connotations than a human.
link |
00:12:08.240
We have a continuous consciousness.
link |
00:12:15.600
We're in an entity that does not last forever.
link |
00:12:22.080
Now, actually, a significant portion of humans still exist
link |
00:12:27.400
and are therefore still conscious.
link |
00:12:31.760
But anybody who is over a certain age
link |
00:12:34.880
doesn't exist anymore.
link |
00:12:37.160
That wouldn't be true of a computer program.
link |
00:12:40.320
You could completely turn it off
link |
00:12:42.000
and a copy of it could be stored and you could recreate it.
link |
00:12:46.120
And so it has a different type of validity.
link |
00:12:51.160
You could actually take it back in time.
link |
00:12:52.920
You could eliminate its memory and have it go over again.
link |
00:12:55.840
I mean, it has a different kind of connotation
link |
00:12:59.800
than humans do.
link |
00:13:01.800
Well, perhaps it can do the same thing with humans.
link |
00:13:04.400
It's just that we don't know how to do that yet.
link |
00:13:06.880
It's possible that we figure out all of these things
link |
00:13:09.400
on the machine first.
link |
00:13:12.320
But that doesn't mean the machine isn't conscious.
link |
00:13:15.480
I mean, if you look at the way people react,
link |
00:13:17.640
say, C-3PO or other machines that are conscious in movies,
link |
00:13:25.000
they don't actually present how it's conscious,
link |
00:13:26.740
but we see that they are a machine
link |
00:13:30.120
and people will believe that they are conscious
link |
00:13:33.280
and they'll actually worry about it
link |
00:13:34.640
if they get into trouble and so on.
link |
00:13:37.480
So 2029 is going to be the first year
link |
00:13:40.840
when a major thing happens.
link |
00:13:43.440
Right.
link |
00:13:44.280
And that will shake our civilization
link |
00:13:46.520
to start to consider the role of AI in this world.
link |
00:13:50.280
Yes and no.
link |
00:13:51.120
I mean, this one guy at Google claimed
link |
00:13:54.560
that the machine was conscious.
link |
00:13:58.440
But that's just one person.
link |
00:14:00.160
Right.
link |
00:14:01.000
When it starts to happen at scale.
link |
00:14:03.080
Well, that's exactly right because most people
link |
00:14:06.320
have not taken that position.
link |
00:14:07.760
I don't take that position.
link |
00:14:08.940
I mean, I've used different things like this
link |
00:14:17.240
and they don't appear to me to be conscious.
link |
00:14:20.500
As we eliminate various problems
link |
00:14:22.840
of these large language models,
link |
00:14:26.960
more and more people will accept that they're conscious.
link |
00:14:30.480
So when we get to 2029, I think a large fraction
link |
00:14:35.760
of people will believe that they're conscious.
link |
00:14:39.080
So it's not gonna happen all at once.
link |
00:14:42.440
I believe it will actually happen gradually
link |
00:14:44.360
and it's already started to happen.
link |
00:14:47.280
And so that takes us one step closer to the singularity.
link |
00:14:52.360
Another step then is in the 2030s
link |
00:14:55.560
when we can actually connect our neocortex,
link |
00:14:59.800
which is where we do our thinking, to computers.
link |
00:15:04.880
And I mean, just as this actually gains a lot
link |
00:15:09.280
from being connected to computers
link |
00:15:12.200
that will amplify its abilities,
link |
00:15:15.360
I mean, if this did not have any connection,
link |
00:15:17.400
it would be pretty stupid.
link |
00:15:19.360
It could not answer any of your questions.
link |
00:15:21.860
If you're just listening to this, by the way,
link |
00:15:24.400
Ray's holding up the all-powerful smartphone.
link |
00:15:29.400
So we're gonna do that directly from our brains.
link |
00:15:33.520
I mean, these are pretty good.
link |
00:15:35.040
These already have amplified our intelligence.
link |
00:15:37.720
I'm already much smarter than I would otherwise be
link |
00:15:40.040
if I didn't have this.
link |
00:15:42.600
Because I remember my first book,
link |
00:15:44.240
The Age of Intelligent Machines,
link |
00:15:49.060
there was no way to get information from computers.
link |
00:15:52.080
I actually would go to a library, find a book,
link |
00:15:55.400
find the page that had the information I wanted,
link |
00:15:58.440
and I'd go to the copier,
link |
00:15:59.920
and my most significant information tool
link |
00:16:04.360
was a roll of quarters where I could feed the copier.
link |
00:16:08.480
So we're already greatly advanced
link |
00:16:11.400
that we have these things.
link |
00:16:13.280
There's a few problems with it.
link |
00:16:15.460
First of all, I constantly put it down,
link |
00:16:17.280
and I don't remember where I put it.
link |
00:16:19.680
I've actually never lost it.
link |
00:16:21.220
But you have to find it, and then you have to turn it on.
link |
00:16:26.080
So there's a certain amount of steps.
link |
00:16:28.160
It would actually be quite useful
link |
00:16:30.100
if someone would just listen to your conversation
link |
00:16:33.440
and say, oh, that's so-and-so actress,
link |
00:16:38.920
and tell you what you're talking about.
link |
00:16:41.160
So going from active to passive,
link |
00:16:43.160
where it just permeates your whole life.
link |
00:16:46.240
Yeah, exactly.
link |
00:16:47.280
The way your brain does when you're awake.
link |
00:16:49.560
Your brain is always there.
link |
00:16:51.220
Right.
link |
00:16:52.060
That's something that could actually
link |
00:16:53.800
just about be done today,
link |
00:16:55.840
where we'd listen to your conversation,
link |
00:16:57.400
understand what you're saying,
link |
00:16:58.600
understand what you're missing,
link |
00:17:01.840
and give you that information.
link |
00:17:04.520
But another step is to actually go inside your brain.
link |
00:17:09.720
And there are some prototypes
link |
00:17:12.740
where you can connect your brain.
link |
00:17:15.280
They actually don't have the amount
link |
00:17:17.040
of bandwidth that we need.
link |
00:17:19.160
They can work, but they work fairly slowly.
link |
00:17:21.940
So if it actually would connect to your neocortex,
link |
00:17:26.160
and the neocortex, which I describe
link |
00:17:30.180
in How to Create a Mind,
link |
00:17:33.000
the neocortex is actually,
link |
00:17:36.700
it has different levels,
link |
00:17:38.180
and as you go up the levels,
link |
00:17:39.980
it's kind of like a pyramid.
link |
00:17:41.780
The top level is fairly small,
link |
00:17:44.340
and that's the level where you wanna connect
link |
00:17:47.820
these brain extenders.
link |
00:17:50.140
And so I believe that will happen in the 2030s.
link |
00:17:58.100
So just the way this is greatly amplified
link |
00:18:01.580
by being connected to the cloud,
link |
00:18:04.420
we can connect our own brain to the cloud,
link |
00:18:07.420
and just do what we can do by using this machine.
link |
00:18:14.260
Do you think it would look like
link |
00:18:15.660
the brain computer interface of like Neuralink?
link |
00:18:18.920
So would it be?
link |
00:18:19.760
Well, Neuralink, it's an attempt to do that.
link |
00:18:22.500
It doesn't have the bandwidth that we need.
link |
00:18:26.300
Yet, right?
link |
00:18:27.660
Right, but I think,
link |
00:18:30.320
I mean, they're gonna get permission for this
link |
00:18:31.980
because there are a lot of people
link |
00:18:33.160
who absolutely need it because they can't communicate.
link |
00:18:36.660
I know a couple people like that
link |
00:18:38.420
who have ideas and they cannot,
link |
00:18:42.660
they cannot move their muscles and so on.
link |
00:18:44.580
They can't communicate.
link |
00:18:45.800
And so for them, this would be very valuable,
link |
00:18:52.040
but we could all use it.
link |
00:18:54.820
Basically, it'd
link |
00:18:59.000
turn us into something like having a phone,
link |
00:19:02.520
but it would be in our minds.
link |
00:19:05.120
It would be kind of instantaneous.
link |
00:19:07.360
And maybe communication between two people
link |
00:19:09.440
would not require this low bandwidth mechanism of language.
link |
00:19:14.080
Yes, exactly.
link |
00:19:15.640
We don't know what that would be,
link |
00:19:17.280
although we do know that computers can share information
link |
00:19:22.400
like language instantly.
link |
00:19:24.640
They can share many, many books in a second.
link |
00:19:28.880
So we could do that as well.
link |
00:19:31.200
If you look at what our brain does,
link |
00:19:34.240
it actually can manipulate different parameters.
link |
00:19:39.120
So we talk about these large language models.
link |
00:19:46.560
I mean, I had written that
link |
00:19:51.520
it requires a certain amount of information
link |
00:19:55.000
in order to be effective
link |
00:19:58.600
and that we would not see AI really being effective
link |
00:20:01.920
until it got to that level.
link |
00:20:04.400
And we had large language models
link |
00:20:06.400
that were like 10 billion bytes, didn't work very well.
link |
00:20:09.600
They finally got to a hundred billion bytes
link |
00:20:11.680
and now they work fairly well.
link |
00:20:13.440
And now we're going to a trillion bytes.
link |
00:20:16.280
If you say LaMDA has a hundred billion bytes,
link |
00:20:22.520
what does that mean?
link |
00:20:23.520
Well, what if you had something that had one byte,
link |
00:20:27.160
one parameter, maybe you wanna tell
link |
00:20:30.000
whether or not something's an elephant.
link |
00:20:33.960
And so you put in something that would detect its trunk.
link |
00:20:37.680
If it has a trunk, it's an elephant.
link |
00:20:39.160
If it doesn't have a trunk, it's not an elephant.
link |
00:20:41.680
That would work fairly well.
link |
00:20:44.400
There's a few problems with it.
link |
00:20:47.440
And it really wouldn't be able to tell what a trunk is,
link |
00:20:49.720
but anyway.
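To make the one-parameter idea concrete, here is a toy sketch; the `has_trunk` feature is assumed to be handed to us, which is exactly the part a real model would need many more parameters to learn.

```python
# Toy version of the one-parameter "is it an elephant?" detector:
# one feature, one decision. Learning what a trunk actually is would
# take far more parameters.

def is_elephant(has_trunk: bool) -> bool:
    return has_trunk

print(is_elephant(True))   # True: an elephant... or a tapir, which also has a trunk
print(is_elephant(False))  # False: correct, unless the trunk detector itself failed
```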
link |
00:20:50.640
And maybe other things other than elephants have trunks,
link |
00:20:54.120
you might get really confused.
link |
00:20:55.560
Yeah, exactly.
link |
00:20:56.960
I'm not sure which animals have trunks,
link |
00:20:58.800
but how do you define a trunk?
link |
00:21:02.400
But yeah, that's one parameter.
link |
00:21:04.960
You can do okay.
link |
00:21:06.400
So these things have a hundred billion parameters.
link |
00:21:08.760
So they're able to deal with very complex issues.
link |
00:21:12.200
All kinds of trunks.
link |
00:21:14.000
Human beings actually have a little bit more than that,
link |
00:21:16.240
but they're getting to the point
link |
00:21:17.960
where they can emulate humans.
link |
00:21:22.400
If we were able to connect this to our neocortex,
link |
00:21:27.400
we would basically add more of these abilities
link |
00:21:33.400
to make distinctions,
link |
00:21:35.400
and it could ultimately be much smarter
link |
00:21:37.600
and also be attached to information
link |
00:21:39.680
that we feel is reliable.
link |
00:21:43.800
So that's where we're headed.
link |
00:21:45.240
So you think that there will be a merger in the 30s,
link |
00:21:49.120
an increasing amount of merging
link |
00:21:50.880
between the human brain and the AI brain?
link |
00:21:55.880
Exactly.
link |
00:21:57.640
And the AI brain is really an emulation of human beings.
link |
00:22:02.280
I mean, that's why we're creating them,
link |
00:22:04.480
because human beings act the same way,
link |
00:22:07.200
and this is basically to amplify them.
link |
00:22:09.600
I mean, this amplifies our brain.
link |
00:22:13.800
It's a little bit clumsy to interact with,
link |
00:22:15.560
but it definitely is way beyond what we had 15 years ago.
link |
00:22:21.840
But the implementation becomes different,
link |
00:22:23.520
just like a bird versus the airplane,
link |
00:22:26.680
even though the AI brain is an emulation,
link |
00:22:30.600
it starts adding features we might not otherwise have,
link |
00:22:34.360
like ability to consume a huge amount
link |
00:22:36.280
of information quickly,
link |
00:22:38.520
like look up thousands of Wikipedia articles in one take.
link |
00:22:43.080
Exactly.
link |
00:22:44.200
I mean, we can get, for example,
link |
00:22:46.320
issues like simulated biology,
link |
00:22:48.120
where it can simulate many different things at once.
link |
00:22:56.760
We already had one example of simulated biology,
link |
00:22:59.600
which is the Moderna vaccine.
link |
00:23:04.560
And that's gonna be now
link |
00:23:06.600
the way in which we create medications.
link |
00:23:11.160
But they were able to simulate
link |
00:23:13.000
what each example of an mRNA would do to a human being,
link |
00:23:17.760
and they were able to simulate that quite reliably.
link |
00:23:21.400
And they actually simulated billions
link |
00:23:23.960
of different mRNA sequences,
link |
00:23:27.040
and they found the ones that were the best,
link |
00:23:29.000
and they created the vaccine.
link |
00:23:31.040
And talk about doing it quickly:
link |
00:23:34.120
they did that in two days.
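The shape of such a simulated screen is simple to sketch; in this toy version the scoring function is a random placeholder, since the real biophysical simulation it stands in for is the hard part.

```python
import random

# Toy sketch of a simulated screen over candidate mRNA sequences.
# score() is a placeholder; in the real workflow it would be a detailed
# simulation of what each sequence does in the body.

NUCLEOTIDES = "ACGU"

def random_sequence(length: int = 60) -> str:
    return "".join(random.choice(NUCLEOTIDES) for _ in range(length))

def score(seq: str) -> float:
    return random.random()  # stand-in for the actual biological simulation

def screen(n_candidates: int) -> str:
    # Generate candidates, score each, keep the best.
    return max((random_sequence() for _ in range(n_candidates)), key=score)

print(screen(10_000))
```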
link |
00:23:36.280
Now, how long would a human being take
link |
00:23:37.880
to simulate billions of different mRNA sequences?
link |
00:23:41.000
I don't know that we could do it at all,
link |
00:23:42.840
but it would take many years.
link |
00:23:45.800
They did it in two days, and one of the reasons
link |
00:23:50.000
that people didn't like vaccines
link |
00:23:53.720
is because it was done too quickly,
link |
00:23:55.520
it was done too fast.
link |
00:23:58.200
And they actually included the time it took to test it out,
link |
00:24:01.280
which was 10 months, so they figured,
link |
00:24:03.600
okay, it took 10 months to create this.
link |
00:24:06.320
Actually, it took us two days.
link |
00:24:09.080
And we also will be able to ultimately do the tests
link |
00:24:11.880
in a few days as well.
link |
00:24:14.200
Oh, because we can simulate how the body will respond to it.
link |
00:24:16.600
Yeah, that's a little bit more complicated
link |
00:24:19.120
because the body has a lot of different elements,
link |
00:24:22.920
and we have to simulate all of that,
link |
00:24:25.400
but that's coming as well.
link |
00:24:27.520
So ultimately, we could create it in a few days
link |
00:24:30.240
and then test it in a few days, and it would be done.
link |
00:24:34.040
And we can do that with every type
link |
00:24:35.960
of medical insufficiency that we have.
link |
00:24:40.240
So curing all diseases, improving certain functions
link |
00:24:46.680
of the body, supplements, drugs for recreation,
link |
00:24:53.120
for health, for performance, for productivity,
link |
00:24:56.080
all that kind of stuff.
link |
00:24:56.920
Well, that's where we're headed,
link |
00:24:58.040
because I mean, right now we have a very inefficient way
link |
00:25:00.640
of creating these new medications.
link |
00:25:04.560
But we've already shown it, and the Moderna vaccine
link |
00:25:07.120
is actually the best of the vaccines we've had,
link |
00:25:12.520
and it literally took two days to create.
link |
00:25:16.280
And we'll get to the point
link |
00:25:17.200
where we can test it out also quickly.
link |
00:25:20.160
Are you impressed by AlphaFold
link |
00:25:22.280
and the solution to protein folding,
link |
00:25:25.760
which essentially is simulating, modeling
link |
00:25:30.040
this primitive building block of life,
link |
00:25:33.080
which is a protein, and its 3D shape?
link |
00:25:36.080
It's pretty remarkable that they can actually predict
link |
00:25:39.040
what the 3D shape of these things are,
link |
00:25:42.000
but they did it with the same type of neural net
link |
00:25:45.920
that won, for example, at Go.
link |
00:25:51.240
So it's all the same.
link |
00:25:52.720
It's all the same.
link |
00:25:53.560
All the same approaches.
link |
00:25:54.400
They took that same thing and just changed the rules
link |
00:25:57.080
to chess, and within a couple of days,
link |
00:26:01.640
it played chess at a level
link |
00:26:03.960
greater than any human being.
link |
00:26:09.160
And the same thing then worked for AlphaFold,
link |
00:26:13.480
which no human had done.
link |
00:26:14.800
I mean, human beings could do,
link |
00:26:16.800
the best humans could maybe do 15, 20%
link |
00:26:22.640
of figuring out what the shape would be.
link |
00:26:25.840
And after a few takes, it ultimately did just about 100%.
link |
00:26:30.840
100%.
link |
00:26:32.560
Do you still think the singularity will happen in 2045?
link |
00:26:37.560
And what does that look like?
link |
00:26:40.760
Once we can amplify our brain with computers directly,
link |
00:26:46.600
which will happen in the 2030s,
link |
00:26:48.080
that's gonna keep growing.
link |
00:26:49.800
That's another whole theme,
link |
00:26:51.040
which is the exponential growth of computing power.
link |
00:26:54.960
Yeah, so looking at price performance of computation
link |
00:26:57.520
from 1939 to 2021.
link |
00:26:59.800
Right, so that starts with the very first computer
link |
00:27:02.920
actually created by a German during World War II.
link |
00:27:06.560
You might have thought that that might be significant,
link |
00:27:09.440
but actually the Germans didn't think computers
link |
00:27:12.880
were significant, and they completely rejected it.
link |
00:27:16.640
The second one is also the Zuse 2.
link |
00:27:20.360
And by the way, we're looking at a plot
link |
00:27:22.240
with the X axis being the year from 1935 to 2025.
link |
00:27:27.240
And on the Y axis in log scale
link |
00:27:30.920
is computation per second per constant dollar.
link |
00:27:34.680
So dollar normalized inflation.
link |
00:27:37.720
And it's growing linearly on the log scale,
link |
00:27:40.200
which means it's growing exponentially.
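A straight line on a log-scale plot means log(performance) is linear in the year, which is the same as doubling at a fixed interval; here is a small sketch, with made-up data points standing in for the chart's values.

```python
import numpy as np

# Fitting a line to log2(performance) versus year recovers the doubling
# time. These data points are invented for illustration; they are not
# the chart's actual values.

years = np.array([1940, 1960, 1980, 2000, 2020])
comps_per_sec_per_dollar = np.array([1e-2, 1e1, 1e4, 1e7, 1e10])

slope, _ = np.polyfit(years, np.log2(comps_per_sec_per_dollar), 1)
print(f"doublings per year: {slope:.2f}")       # ~0.5 with these numbers
print(f"doubling time: {1 / slope:.1f} years")  # ~2.0 years
```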
link |
00:27:41.880
The third one was the British computer,
link |
00:27:44.480
which the Allies did take very seriously.
link |
00:27:47.720
And it cracked the German code
link |
00:27:51.780
and enabled the British to win the Battle of Britain,
link |
00:27:55.520
which otherwise absolutely would not have happened
link |
00:27:57.720
if they hadn't cracked the code using that computer.
link |
00:28:02.120
But that's an exponential graph.
link |
00:28:03.640
So a straight line on that graph is exponential growth.
link |
00:28:07.300
And you see 80 years of exponential growth.
link |
00:28:11.600
And I would say about every five years,
link |
00:28:15.200
and this happened shortly before the pandemic,
link |
00:28:18.280
people say it's over. They call it Moore's law,
link |
00:28:20.680
which is not correct, because it's not all Intel.
link |
00:28:25.480
In fact, this started decades before Intel was even created.
link |
00:28:29.680
It didn't start with transistors formed into a grid.
link |
00:28:34.140
So it's not just transistor count or transistor size.
link |
00:28:37.280
Right, it started with relays, then went to vacuum tubes,
link |
00:28:43.200
then went to individual transistors,
link |
00:28:46.720
and then to integrated circuits.
link |
00:28:51.080
And integrated circuits actually starts
link |
00:28:54.000
like in the middle of this graph.
link |
00:28:56.520
And it has nothing to do with Intel.
link |
00:28:58.760
Intel actually was a key part of this.
link |
00:29:02.880
But a few years ago, they stopped making the fastest chips.
link |
00:29:08.960
But if you take the fastest chip of any technology
link |
00:29:12.800
in that year, you get this kind of graph.
link |
00:29:16.640
And it's definitely continuing for 80 years.
link |
00:29:19.760
So you don't think Moore's law, broadly defined, is dead.
link |
00:29:24.840
It's been declared dead multiple times throughout this process.
link |
00:29:29.280
I don't like the term Moore's law,
link |
00:29:31.400
because it has nothing to do with Moore or with Intel.
link |
00:29:34.740
But yes, the exponential growth of computing is continuing.
link |
00:29:41.600
It has never stopped.
link |
00:29:42.920
From various sources.
link |
00:29:43.960
I mean, it went through World War II,
link |
00:29:45.880
it went through global recessions.
link |
00:29:49.120
It's just continuing.
link |
00:29:53.480
And if you continue that out, along with software gains,
link |
00:29:58.100
which is a whole nother issue,
link |
00:30:01.560
and they really multiply,
link |
00:30:02.960
whatever you get from software gains,
link |
00:30:04.400
you multiply by the computer gains,
link |
00:30:07.920
you get faster and faster speed.
link |
00:30:10.940
These are actually the fastest computer models
link |
00:30:14.320
that have been created.
link |
00:30:15.840
And that actually expands roughly twice a year.
link |
00:30:19.480
Like, every six months it grows by a factor of two.
link |
00:30:22.840
So we're looking at a plot from 2010 to 2022.
link |
00:30:28.360
On the x axis is the publication date of the model,
link |
00:30:31.400
and perhaps sometimes the actual paper associated with it.
link |
00:30:34.240
And on the y axis is training compute in FLOPs.
link |
00:30:40.240
And so basically this is looking at the increase
link |
00:30:43.880
in the, not transistors,
link |
00:30:46.360
but the computational power of neural networks.
link |
00:30:51.500
Yes, the computational power that created these models.
link |
00:30:55.120
And that's doubled every six months.
link |
00:30:57.560
Which is even faster than the doubling of transistor density.
link |
00:31:00.360
Yeah.
link |
00:31:02.120
Now actually, since it grows faster than cost comes down,
link |
00:31:06.880
it has actually taken a greater investment
link |
00:31:10.840
to create these.
link |
00:31:12.260
But at any rate, by the time we get to 2045,
link |
00:31:16.600
we'll be able to multiply our intelligence
link |
00:31:19.120
many millions fold.
link |
00:31:21.480
And it's just very hard to imagine what that will be like.
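The arithmetic behind "many millions fold" is easy to make explicit; in this back-of-the-envelope sketch the doubling times are illustrative assumptions, not measured figures.

```python
# Back-of-the-envelope arithmetic for the "millions fold" figure.
# The doubling times below are assumptions for illustration only.

def multiplier(start_year: int, end_year: int, doubling_years: float) -> float:
    return 2.0 ** ((end_year - start_year) / doubling_years)

# Doubling once a year from 2023 to 2045 gives about four million fold:
print(f"{multiplier(2023, 2045, 1.0):,.0f}x")
# At the six-month doubling seen in training compute, far more:
print(f"{multiplier(2023, 2045, 0.5):,.0f}x")
```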
link |
00:31:25.040
And that's the singularity where we can't even imagine.
link |
00:31:28.400
Right, that's why we call it the singularity.
link |
00:31:30.480
Because the singularity in physics,
link |
00:31:32.760
something gets sucked into its singularity
link |
00:31:35.120
and you can't tell what's going on in there
link |
00:31:37.780
because no information can get out of it.
link |
00:31:40.440
There's various problems with that,
link |
00:31:42.120
but that's the idea.
link |
00:31:44.080
It's too much beyond what we can imagine.
link |
00:31:48.960
Do you think it's possible we don't notice
link |
00:31:52.120
that what the singularity actually feels like
link |
00:31:56.360
is we just live through it
link |
00:31:59.640
with exponentially increasing cognitive capabilities
link |
00:32:05.400
and we almost, because everything's moving so quickly,
link |
00:32:09.560
aren't really able to introspect
link |
00:32:11.800
that our life has changed.
link |
00:32:13.680
Yeah, but I mean, we will have that much greater capacity
link |
00:32:17.480
to understand things, so we should be able to look back.
link |
00:32:20.760
Looking at history, understand history.
link |
00:32:23.240
But we will need people, basically like you and me,
link |
00:32:26.880
to actually think about these things.
link |
00:32:29.240
But we might be distracted
link |
00:32:30.680
by all the other sources of entertainment and fun
link |
00:32:34.160
because the exponential power of intellect is growing,
link |
00:32:39.160
but also there'll be a lot of fun.
link |
00:32:41.640
The amount of fun you can have, you know.
link |
00:32:46.160
I mean, we already have a lot of fun with computer games
link |
00:32:48.440
and so on that are really quite remarkable.
link |
00:32:51.400
What do you think about the digital world,
link |
00:32:54.760
the metaverse, virtual reality?
link |
00:32:57.640
Will that have a component in this
link |
00:32:59.120
or will most of our advancement be in physical reality?
link |
00:33:01.840
Well, that's a little bit like Second Life,
link |
00:33:04.480
although the Second Life actually didn't work very well
link |
00:33:06.880
because it couldn't actually handle too many people.
link |
00:33:09.320
And I don't think the metaverse has come to being.
link |
00:33:14.200
I think there will be something like that.
link |
00:33:16.040
It won't necessarily be from that one company.
link |
00:33:21.040
I mean, there's gonna be competitors.
link |
00:33:23.480
But yes, we're gonna live increasingly online,
link |
00:33:26.480
and particularly if our brains are online.
link |
00:33:28.960
I mean, how could we not be online?
link |
00:33:31.320
Do you think it's possible that given this merger with AI,
link |
00:33:34.680
most of our meaningful interactions
link |
00:33:39.040
will be in this virtual world most of our life?
link |
00:33:43.880
We fall in love, we make friends,
link |
00:33:46.320
we come up with ideas, we do collaborations, we have fun.
link |
00:33:49.480
I actually know somebody who's marrying somebody
link |
00:33:51.720
that they never met.
link |
00:33:54.720
I think they just met her briefly before the wedding,
link |
00:33:57.760
but she actually fell in love with this other person,
link |
00:34:01.560
never having met them.
link |
00:34:06.280
And I think the love is real, so.
link |
00:34:10.360
That's a beautiful story,
link |
00:34:11.520
but do you think that story is one that might be experienced
link |
00:34:15.080
not just by hundreds of thousands of people,
link |
00:34:18.480
but instead by hundreds of millions of people?
link |
00:34:22.280
I mean, it really gives you appreciation
link |
00:34:23.880
for these virtual ways of communicating.
link |
00:34:28.520
And if anybody can do it,
link |
00:34:30.080
then it's really not such a freak story.
link |
00:34:34.720
So I think more and more people will do that.
link |
00:34:37.440
But that's turning our back
link |
00:34:38.720
on our entire history of evolution.
link |
00:34:41.920
The old days, we used to fall in love by holding hands
link |
00:34:45.520
and sitting by the fire, that kind of stuff.
link |
00:34:49.360
Here, you're playing.
link |
00:34:50.720
Actually, I have five patents on how you can hold hands,
link |
00:34:54.640
even if you're separated.
link |
00:34:57.040
Great.
link |
00:34:58.640
So the touch, the sense, it's all just senses.
link |
00:35:01.960
It's all just replicated.
link |
00:35:03.040
Yeah, I mean, touch is,
link |
00:35:04.720
it's not just that you're touching someone or not.
link |
00:35:07.160
There's a whole way of doing it, and it's very subtle.
link |
00:35:11.520
But ultimately, we can emulate all of that.
link |
00:35:17.480
Are you excited by that future?
link |
00:35:19.800
Do you worry about that future?
link |
00:35:23.480
I have certain worries about the future,
link |
00:35:25.120
but not virtual touch.
link |
00:35:27.600
Well, I agree with you.
link |
00:35:31.520
You described six stages
link |
00:35:33.480
in the evolution of information processing in the universe,
link |
00:35:36.600
as you started to describe.
link |
00:35:39.440
Can you maybe talk through some of those stages
link |
00:35:42.760
from the physics and chemistry to DNA and brains,
link |
00:35:46.320
and then to the very end,
link |
00:35:48.800
to the very beautiful end of this process?
link |
00:35:52.000
It actually gets more rapid.
link |
00:35:54.120
So physics and chemistry, that's how we started.
link |
00:35:59.720
So the very beginning of the universe.
link |
00:36:02.120
We had lots of electrons and various things traveling around.
link |
00:36:07.240
And that took actually many billions of years,
link |
00:36:11.480
kind of jumping ahead here to kind of
link |
00:36:14.840
some of the last stages where we have things
link |
00:36:16.840
like love and creativity.
link |
00:36:19.240
It's really quite remarkable that that happens.
link |
00:36:21.760
But finally, physics and chemistry created biology and DNA.
link |
00:36:29.920
And now you had actually one type of molecule
link |
00:36:33.440
that described the cutting edge of this process.
link |
00:36:38.680
And we go from physics and chemistry to biology.
link |
00:36:44.440
And finally, biology created brains.
link |
00:36:48.120
I mean, not everything that's created by biology
link |
00:36:51.440
has a brain, but eventually brains came along.
link |
00:36:56.440
And all of this is happening faster and faster.
link |
00:36:58.840
Yeah.
link |
00:37:00.320
It created increasingly complex organisms.
link |
00:37:04.480
Another key thing is actually not just brains,
link |
00:37:08.400
but our thumb.
link |
00:37:12.880
Because there's a lot of animals
link |
00:37:15.880
with brains even bigger than humans.
link |
00:37:18.080
I mean, elephants have a bigger brain.
link |
00:37:21.480
Whales have a bigger brain.
link |
00:37:24.160
But they've not created technology
link |
00:37:27.080
because they don't have a thumb.
link |
00:37:29.800
So that's one of the really key elements
link |
00:37:32.280
in the evolution of humans.
link |
00:37:34.120
This physical manipulator device
link |
00:37:37.920
that's useful for puzzle solving in the physical reality.
link |
00:37:41.320
So I could think, I could look at a tree and go,
link |
00:37:43.600
oh, I could actually trim that branch down
link |
00:37:46.240
and eliminate the leaves and carve a tip on it
link |
00:37:49.920
and I would create technology.
link |
00:37:53.000
And you can't do that if you don't have a thumb.
link |
00:37:56.640
Yeah.
link |
00:37:59.160
So thumbs then created technology
link |
00:38:04.480
and technology also had a memory.
link |
00:38:08.040
And now those memories are competing
link |
00:38:10.000
with the scale and scope of human beings.
link |
00:38:15.000
And ultimately we'll go beyond it.
link |
00:38:18.160
And then we're gonna merge human technology
link |
00:38:22.440
with human intelligence
link |
00:38:27.520
and understand how human intelligence works,
link |
00:38:30.960
which I think we already do.
link |
00:38:33.280
And we're putting that into our human technology.
link |
00:38:39.360
So create the technology inspired by our own intelligence
link |
00:38:43.120
and then that technology supersedes us
link |
00:38:45.360
in terms of its capabilities.
link |
00:38:47.200
And we ride along.
link |
00:38:48.640
Or do you ultimately see it as...
link |
00:38:50.400
And we ride along, but a lot of people don't see that.
link |
00:38:52.720
They say, well, you've got humans and you've got machines
link |
00:38:56.200
and there's no way we can ultimately compete with machines.
link |
00:39:00.280
And you can already see that.
link |
00:39:02.200
Lee Sedol, who's like the best Go player in the world,
link |
00:39:07.000
says he's not gonna play Go anymore.
link |
00:39:10.000
Because playing Go for a human,
link |
00:39:12.880
that was like the ultimate in intelligence
link |
00:39:14.920
because no one else could do that.
link |
00:39:18.200
But now a machine can actually go way beyond him.
link |
00:39:22.400
And so he says, well, there's no point playing it anymore.
link |
00:39:25.080
That may be more true for games than it is for life.
link |
00:39:30.080
I think there's a lot of benefit
link |
00:39:31.360
to working together with AI in regular life.
link |
00:39:34.440
So if you were to put a probability on it,
link |
00:39:37.920
is it more likely that we merge with AI
link |
00:39:41.240
or AI replaces us?
link |
00:39:43.560
A lot of people just think computers come along
link |
00:39:47.400
and they compete with them.
link |
00:39:48.320
We can't really compete and that's the end of it.
link |
00:39:52.320
As opposed to them increasing our abilities.
link |
00:39:57.200
And if you look at most technology,
link |
00:39:59.760
it increases our abilities.
link |
00:40:04.480
I mean, look at the history of work.
link |
00:40:07.920
Look at what people did 100 years ago.
link |
00:40:11.120
Does any of that exist anymore?
link |
00:40:13.000
People, I mean, if you were to predict
link |
00:40:16.560
that all of these jobs would go away
link |
00:40:19.440
and would be done by machines,
link |
00:40:21.000
people would say, well, there's gonna be,
link |
00:40:22.760
no one's gonna have jobs
link |
00:40:24.040
and it's gonna be massive unemployment.
link |
00:40:29.480
But I show in this book that's coming out
link |
00:40:34.120
the amount of people that are working,
link |
00:40:36.760
even as a percentage of the population has gone way up.
link |
00:40:41.640
We're looking at the x axis year from 1774 to 2024
link |
00:40:46.160
and on the y axis, personal income per capita
link |
00:40:49.520
in constant dollars and it's growing super linearly.
link |
00:40:52.720
I mean, it's 2021 constant dollars and it's gone way up.
link |
00:40:58.040
That's not what you would predict
link |
00:41:00.760
given that we would predict
link |
00:41:01.920
that all these jobs would go away.
link |
00:41:03.760
But the reason it's gone up is because
link |
00:41:07.000
we've basically enhanced our own capabilities
link |
00:41:09.880
by using these machines
link |
00:41:11.280
as opposed to them just competing with us.
link |
00:41:14.280
That's a key way in which we're gonna be able
link |
00:41:16.280
to become far smarter than we are now
link |
00:41:18.680
by increasing the number of different parameters
link |
00:41:23.200
we can consider in making a decision.
link |
00:41:26.480
I was very fortunate, I am very fortunate
link |
00:41:28.640
to be able to get a glimpse preview
link |
00:41:31.480
of your upcoming book, The Singularity Is Nearer.
link |
00:41:37.320
And one of the themes outside of just discussing
link |
00:41:41.920
the increasing exponential growth of technology,
link |
00:41:44.760
one of the themes is that things are getting better
link |
00:41:48.480
in all aspects of life.
link |
00:41:50.800
And you talked just about this.
link |
00:41:53.720
So one of the things you're saying is with jobs.
link |
00:41:55.640
So let me just ask about that.
link |
00:41:57.840
There is a big concern that automation,
link |
00:42:01.040
especially powerful AI, will get rid of jobs.
link |
00:42:06.400
There are people who lose jobs.
link |
00:42:07.880
And as you were saying, the sense is
link |
00:42:10.960
throughout the history of the 20th century,
link |
00:42:14.000
automation did not do that ultimately.
link |
00:42:16.640
And so the question is, will this time be different?
link |
00:42:20.600
Right, that is the question.
link |
00:42:22.560
Will this time be different?
link |
00:42:24.480
And it really has to do with how quickly
link |
00:42:26.360
we can merge with this type of intelligence.
link |
00:42:29.120
Whether LaMDA or GPT-3 is out there,
link |
00:42:34.920
and maybe it's overcome some of its key problems,
link |
00:42:40.200
and we really haven't enhanced human intelligence,
link |
00:42:43.480
that might be a negative scenario.
link |
00:42:49.600
But I mean, that's why we create technologies,
link |
00:42:53.160
to enhance ourselves.
link |
00:42:56.280
And I believe we will be enhanced
link |
00:42:58.800
rather than just sitting here with the
link |
00:43:03.000
300 million modules in our neocortex.
link |
00:43:09.040
We're going to be able to go beyond that.
link |
00:43:14.000
Because that's useful, but we can multiply that by 10,
link |
00:43:19.640
100, 1,000, a million.
link |
00:43:22.240
And you might think, well, what's the point of doing that?
link |
00:43:30.240
It's like asking somebody that's never heard music,
link |
00:43:33.920
well, what's the value of music?
link |
00:43:36.600
I mean, you can't appreciate it until you've created it.
link |
00:43:41.360
There's some worry that there'll be a wealth disparity.
link |
00:43:46.880
Class or wealth disparity, only the rich people
link |
00:43:50.240
will be, basically, the rich people
link |
00:43:53.120
will first have access to this kind of thing,
link |
00:43:55.520
and then because of this kind of thing,
link |
00:43:58.000
because of the ability to merge,
link |
00:43:59.480
will get richer exponentially faster.
link |
00:44:02.680
And I say that's just like cell phones.
link |
00:44:06.280
I mean, there's like four billion cell phones
link |
00:44:08.080
in the world today.
link |
00:44:10.320
In fact, when cell phones first came out,
link |
00:44:13.320
you had to be fairly wealthy.
link |
00:44:14.840
They weren't very inexpensive.
link |
00:44:17.520
So you had to have some wealth in order to afford them.
link |
00:44:20.160
Yeah, there were these big, sexy phones.
link |
00:44:22.760
And they didn't work very well.
link |
00:44:24.080
They did almost nothing.
link |
00:44:26.480
So you can only afford these things if you're wealthy
link |
00:44:31.240
at a point where they really don't work very well.
link |
00:44:35.760
So achieving scale and making it inexpensive
link |
00:44:39.880
is part of making the thing work well.
link |
00:44:42.240
Exactly.
link |
00:44:43.560
So these are not totally cheap, but they're pretty cheap.
link |
00:44:46.980
I mean, you can get them for a few hundred dollars.
link |
00:44:52.140
Especially given the kind of things it provides for you.
link |
00:44:55.400
There's a lot of people in the third world
link |
00:44:57.100
that have very little, but they have a smartphone.
link |
00:45:00.380
Yeah, absolutely.
link |
00:45:01.980
And the same will be true with AI.
link |
00:45:03.820
I mean, I see homeless people have their own cell phones.
link |
00:45:07.640
Yeah, so your sense is any kind of advanced technology
link |
00:45:12.120
will take the same trajectory.
link |
00:45:13.740
Right, it ultimately becomes cheap and will be affordable.
link |
00:45:19.180
I probably would not be the first person
link |
00:45:21.060
to put something in my brain to connect to computers
link |
00:45:28.220
because I think it will have limitations.
link |
00:45:30.240
But once it's really perfected,
link |
00:45:34.140
and at that point it'll be pretty inexpensive,
link |
00:45:36.420
I think it'll be pretty affordable.
link |
00:45:39.660
So in which other ways, as you outline your book,
link |
00:45:43.080
is life getting better?
link |
00:45:44.460
Because I think...
link |
00:45:45.340
Well, I mean, I have 50 charts in there
link |
00:45:49.220
where everything is getting better.
link |
00:45:51.780
I think there's a kind of cynicism about,
link |
00:45:55.500
like even if you look at extreme poverty, for example.
link |
00:45:58.020
For example, this is actually a poll
link |
00:46:00.860
taken on extreme poverty, and people were asked,
link |
00:46:05.500
has poverty gotten better or worse?
link |
00:46:08.320
And the options are increased by 50%,
link |
00:46:11.180
increased by 25%, remain the same,
link |
00:46:13.940
decreased by 25%, decreased by 50%.
link |
00:46:16.740
If you're watching this or listening to this,
link |
00:46:18.780
try to vote for yourself.
link |
00:46:21.500
70% thought it had gotten worse,
link |
00:46:24.200
and that's the general impression.
link |
00:46:27.100
88% thought it had gotten worse or remained the same.
link |
00:46:32.500
Only 1% thought it decreased by 50%,
link |
00:46:35.700
and that is the answer.
link |
00:46:37.620
It actually decreased by 50%.
link |
00:46:39.540
So only 1% of people got the right optimistic estimate
link |
00:46:43.660
of how poverty is.
link |
00:46:45.260
Right, and this is the reality,
link |
00:46:47.520
and it's true of almost everything you look at.
link |
00:46:51.140
You don't wanna go back 100 years or 50 years.
link |
00:46:54.780
Things were quite miserable then,
link |
00:46:56.980
but we tend not to remember that.
link |
00:47:01.020
So literacy rate increasing over the past few centuries
link |
00:47:05.340
across all the different nations,
link |
00:47:07.940
nearly to 100% across many of the nations in the world.
link |
00:47:11.880
It's gone way up.
link |
00:47:12.820
Average years of education have gone way up.
link |
00:47:15.620
Life expectancy is also increasing.
link |
00:47:18.560
Life expectancy was 48 in 1900.
link |
00:47:24.380
And it's over 80 now.
link |
00:47:26.400
And it's gonna continue to go up,
link |
00:47:28.140
particularly as we get into more advanced stages
link |
00:47:30.940
of simulated biology.
link |
00:47:33.380
For life expectancy, these trends are the same
link |
00:47:35.580
for at birth, age one, age five, age 10,
link |
00:47:37.940
so it's not just the infant mortality.
link |
00:47:40.340
And I have 50 more graphs in the book
link |
00:47:42.620
about all kinds of things.
link |
00:47:46.120
Even spread of democracy,
link |
00:47:48.340
which might bring up some sort of controversial issues,
link |
00:47:52.740
it still has gone way up.
link |
00:47:55.140
Well, that one has gone way up,
link |
00:47:57.260
but that one is a bumpy road, right?
link |
00:47:59.500
Exactly, and somebody might represent democracy
link |
00:48:03.220
and go backwards, but we basically had no democracies
link |
00:48:08.460
before the creation of the United States,
link |
00:48:10.980
which was a little over two centuries ago,
link |
00:48:13.860
which in the scale of human history isn't that long.
link |
00:48:17.460
Do you think superintelligent systems will help
link |
00:48:21.460
with democracy?
link |
00:48:23.620
So what is democracy?
link |
00:48:25.020
Democracy is giving a voice to the populace
link |
00:48:29.660
and having their ideas, having their beliefs,
link |
00:48:33.700
having their views represented.
link |
00:48:38.180
Well, I hope so.
link |
00:48:41.260
I mean, we've seen social networks
link |
00:48:44.060
can spread conspiracy theories,
link |
00:48:49.180
which have been quite negative,
link |
00:48:51.340
for example, being against any kind of thing
link |
00:48:55.500
that would help your health.
link |
00:48:58.340
So those kinds of ideas have,
link |
00:49:03.100
on social media, what you notice is they increase
link |
00:49:06.540
engagement, so dramatic division increases engagement.
link |
00:49:10.340
Do you worry about AI systems that will learn
link |
00:49:13.460
to maximize that division?
link |
00:49:17.020
I mean, I do have some concerns about this,
link |
00:49:22.040
and I have a chapter in the book about the perils
link |
00:49:25.740
of advanced AI, spreading misinformation
link |
00:49:32.380
on social networks is one of them,
link |
00:49:34.080
but there are many others.
link |
00:49:36.780
What's the one that worries you the most
link |
00:49:40.640
that we should think about to try to avoid?
link |
00:49:47.260
Well, it's hard to choose.
link |
00:49:50.820
We do have nuclear weapons, which evolved
link |
00:49:55.340
when I was a child, I remember,
link |
00:49:57.660
and we would actually do these drills for a nuclear war.
link |
00:50:03.580
We'd get under our desks and put our hands behind our heads
link |
00:50:07.620
to protect us from a nuclear war.
link |
00:50:11.140
Seems to work, we're still around, so.
link |
00:50:15.540
You're protected.
link |
00:50:17.060
But that's still a concern.
link |
00:50:20.080
And there are key dangerous situations
link |
00:50:22.860
that can take place in biology.
link |
00:50:27.140
Someone could create a virus that's very,
link |
00:50:33.340
I mean, we have viruses that are hard to spread,
link |
00:50:40.560
and they can be very dangerous,
link |
00:50:42.800
and we have viruses that are easy to spread,
link |
00:50:46.160
but they're not so dangerous.
link |
00:50:47.800
Somebody could create something
link |
00:50:51.600
that would be very easy to spread and very dangerous,
link |
00:50:55.580
and be very hard to stop.
link |
00:50:58.960
It could be something that would spread
link |
00:51:02.040
without people noticing, because people could get it,
link |
00:51:04.640
they'd have no symptoms, and then everybody would get it,
link |
00:51:08.320
and then symptoms would occur maybe a month later.
link |
00:51:11.800
So I mean, and that actually doesn't occur normally,
link |
00:51:18.760
because if we were to have a problem with that,
link |
00:51:24.680
we wouldn't exist.
link |
00:51:26.920
So the fact that humans exist means that we don't have
link |
00:51:30.720
viruses that can spread easily and kill us,
link |
00:51:35.040
because otherwise we wouldn't exist.
link |
00:51:37.540
Yeah, viruses don't wanna do that.
link |
00:51:39.080
They want to spread and keep the host alive somewhat.
link |
00:51:44.080
So you can describe various dangers with biology.
link |
00:51:48.620
Also nanotechnology, which we actually haven't experienced
link |
00:51:53.520
yet, but there are people that are creating nanotechnology,
link |
00:51:56.040
and I describe that in the book.
link |
00:51:57.960
Now you're excited by the possibilities of nanotechnology,
link |
00:52:00.880
of nanobots, of being able to do things inside our body,
link |
00:52:04.920
inside our mind, that's going to help.
link |
00:52:07.520
What's exciting, what's terrifying about nanobots?
link |
00:52:10.880
What's exciting is that that's a way to communicate
link |
00:52:13.920
with our neocortex, because each neocortex module is pretty small
link |
00:52:19.000
and you need a small entity that can actually get in there
link |
00:52:22.360
and establish a communication channel.
link |
00:52:25.440
And that's gonna really be necessary to connect our brains
link |
00:52:30.320
to AI within ourselves, because otherwise it would be hard
link |
00:52:35.320
for us to compete with it.
link |
00:52:38.720
In a high bandwidth way.
link |
00:52:40.240
Yeah, yeah.
link |
00:52:41.760
And that's key, actually, because a lot of the things
link |
00:52:45.720
like Neuralink are really not high bandwidth yet.
link |
00:52:49.880
So nanobots is the way you achieve high bandwidth.
link |
00:52:52.680
How much intelligence would those nanobots have?
link |
00:52:55.880
Yeah, they don't need a lot, just enough to basically
link |
00:53:00.320
establish a communication channel to one nanobot.
link |
00:53:04.400
So it's primarily about communication.
link |
00:53:06.720
Yeah.
link |
00:53:07.560
Between external computing devices
link |
00:53:09.880
and our biological thinking machine.
link |
00:53:15.020
What worries you about nanobots?
link |
00:53:17.040
Is it similar to with the viruses?
link |
00:53:19.840
Well, I mean, it's the gray goo challenge.
link |
00:53:22.720
Yes.
link |
00:53:24.920
If you had a nanobot that wanted to create
link |
00:53:29.920
any kind of entity and repeat itself,
link |
00:53:37.520
and was able to operate in a natural environment,
link |
00:53:41.520
it could turn everything into that entity
link |
00:53:45.240
and basically destroy all biological life.
link |
00:53:52.000
So you mentioned nuclear weapons.
link |
00:53:54.600
Yeah.
link |
00:53:55.440
I'd love to hear your opinion about the 21st century
link |
00:54:01.840
and whether you think we might destroy ourselves.
link |
00:54:05.320
And maybe your opinion, if it has changed
link |
00:54:08.840
by looking at what's going on in Ukraine,
link |
00:54:11.760
that we could have a hot war with nuclear powers involved
link |
00:54:18.980
and the tensions building and the seeming forgetting
link |
00:54:23.320
of how terrifying and destructive nuclear weapons are.
link |
00:54:29.340
Do you think humans might destroy ourselves
link |
00:54:32.940
in the 21st century, and if we do, how?
link |
00:54:36.220
And how do we avoid it?
link |
00:54:38.540
I don't think that's gonna happen
link |
00:54:41.100
despite the terrors of that war.
link |
00:54:45.180
It is a possibility, but I mean, I don't.
link |
00:54:50.420
It's unlikely in your mind.
link |
00:54:52.700
Yeah, even with the tensions we've had
link |
00:54:55.380
with this one nuclear power plant that's been taken over,
link |
00:55:02.340
it's very tense, but I don't actually see a lot of people
link |
00:55:07.580
worrying that that's gonna happen.
link |
00:55:10.220
I think we'll avoid that.
link |
00:55:11.900
We had two nuclear bombs go off in '45,
link |
00:55:15.940
so now we're 77 years later.
link |
00:55:20.860
Yeah, we're doing pretty good.
link |
00:55:22.400
We've never had another one go off in anger.
link |
00:55:27.100
People forget the lessons of history.
link |
00:55:31.020
Well, yeah, I mean, I am worried about it.
link |
00:55:33.540
I mean, that is definitely a challenge.
link |
00:55:37.460
But you believe that we'll make it out
link |
00:55:40.620
and ultimately superintelligent AI will help us make it out
link |
00:55:44.600
as opposed to destroy us.
link |
00:55:47.680
I think so, but we do have to be mindful of these dangers.
link |
00:55:52.420
And there are other dangers besides nuclear weapons, so.
link |
00:55:56.340
So to get back to merging with AI,
link |
00:56:01.060
will we be able to upload our mind in a computer
link |
00:56:06.020
in a way where we might even transcend
link |
00:56:09.380
the constraints of our bodies?
link |
00:56:11.620
So copy our mind into a computer and leave the body behind?
link |
00:56:15.300
Let me describe one thing I've already done with my father.
link |
00:56:21.060
That's a great story.
link |
00:56:23.700
So we created a technology, this is public,
link |
00:56:26.660
came out, I think, six years ago,
link |
00:56:30.140
where you could ask any question
link |
00:56:33.740
and the released product,
link |
00:56:35.220
which I think is still on the market,
link |
00:56:37.620
it would read 200,000 books.
link |
00:56:40.900
And then find the one sentence in 200,000 books
link |
00:56:46.140
that best answered your question.
link |
00:56:49.860
And it's actually quite interesting.
link |
00:56:51.180
You can ask all kinds of questions
link |
00:56:52.740
and you get the best answer in 200,000 books.
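To make the mechanism concrete: what Kurzweil describes is a retrieval problem, ranking every sentence in a corpus by how well it answers a query. Here is a minimal sketch of that idea using simple TF-IDF similarity; the sentences and method are illustrative assumptions, not the actual product's implementation.

```python
# Toy "best sentence in the corpus" retrieval, assuming scikit-learn.
# An illustrative sketch, not the system Kurzweil actually built.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in corpus; in the real product this would be sentences
# extracted from 200,000 books (or one person's collected writings).
sentences = [
    "Brahms is the most interesting composer of the Romantic era.",
    "The choral group should be seated in three rows.",
    "Music education is essential for every child.",
]

vectorizer = TfidfVectorizer()
sentence_vectors = vectorizer.fit_transform(sentences)

def best_answer(question: str) -> str:
    """Return the corpus sentence most similar to the question."""
    query_vector = vectorizer.transform([question])
    scores = cosine_similarity(query_vector, sentence_vectors)[0]
    return sentences[scores.argmax()]

print(best_answer("Who is the most interesting composer?"))
```

Swapping the 200,000-book corpus for one person's collected writings, as he describes next, changes nothing in the mechanism, only the index.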
link |
00:56:57.580
But I was also able to take it
link |
00:56:59.940
and not go through 200,000 books,
link |
00:57:03.180
but go through a book that I put together,
link |
00:57:07.060
which is basically everything my father had written.
link |
00:57:10.980
So everything he had written, I had gathered,
link |
00:57:14.660
and we created a book,
link |
00:57:17.100
everything that Frederick Kurzweil had written.
link |
00:57:20.220
Now, I didn't think this actually would work that well
link |
00:57:23.380
because the stuff he had written was about how to lay things out.
link |
00:57:30.900
I mean, he directed choral groups
link |
00:57:35.900
and music groups,
link |
00:57:39.220
and he would be laying out how the people should,
link |
00:57:44.180
where they should sit and how to fund this
link |
00:57:49.660
and all kinds of things
link |
00:57:52.220
that really didn't seem that interesting.
link |
00:57:57.620
And yet, when you ask a question,
link |
00:57:59.900
it would go through it
link |
00:58:00.820
and it would actually give you a very good answer.
link |
00:58:04.760
So I said, well, who's the most interesting composer?
link |
00:58:07.860
And he said, well, definitely Brahms.
link |
00:58:09.500
And he would go on about how Brahms was fabulous
link |
00:58:13.220
and talk about the importance of music education.
link |
00:58:18.020
So you could have essentially a question and answer,
link |
00:58:21.140
a conversation with him.
link |
00:58:21.980
You could have a conversation with him,
link |
00:58:23.020
which was actually more interesting than talking to him
link |
00:58:25.940
because if you talked to him,
link |
00:58:27.020
he'd be concerned about how they're gonna lay out
link |
00:58:30.080
this property for a choral group.
link |
00:58:34.220
He'd be concerned about the day to day
link |
00:58:36.060
versus the big questions.
link |
00:58:37.300
Exactly, yeah.
link |
00:58:39.060
And you did ask about the meaning of life
link |
00:58:41.620
and he answered, love.
link |
00:58:43.260
Yeah.
link |
00:58:46.460
Do you miss him?
link |
00:58:49.180
Yes, I do.
link |
00:58:52.940
Yeah, you get used to missing somebody after 52 years,
link |
00:58:58.540
and I didn't really have intelligent conversations with him
link |
00:59:02.700
until later in life.
link |
00:59:06.940
In the last few years, he was sick,
link |
00:59:08.780
which meant he was home a lot
link |
00:59:10.140
and I was actually able to talk to him
link |
00:59:11.900
about different things like music and other things.
link |
00:59:15.780
And so I miss that very much.
link |
00:59:19.820
What did you learn about life from your father?
link |
00:59:25.180
What part of him is with you now?
link |
00:59:29.020
He was devoted to music.
link |
00:59:31.580
And when he would create something to music,
link |
00:59:33.860
it put him in a different world.
link |
00:59:37.540
Otherwise, he was very shy.
link |
00:59:42.520
And if people got together,
link |
00:59:43.780
he tended not to interact with people
link |
00:59:47.020
just because of his shyness.
link |
00:59:49.780
But when he created music, he was like a different person.
link |
00:59:55.340
Do you have that in you?
link |
00:59:56.540
That kind of light that shines?
link |
00:59:59.840
I mean, I got involved with technology at like age five.
link |
01:00:06.620
And you fell in love with it
link |
01:00:07.620
in the same way he did with music?
link |
01:00:09.380
Yeah, yeah.
link |
01:00:11.300
I remember this actually happened with my grandmother.
link |
01:00:16.940
She had a manual typewriter
link |
01:00:20.060
and she wrote a book, One Life Is Not Enough,
link |
01:00:23.060
which is actually a good title for a book I might write,
link |
01:00:26.240
but it was about a school she had created.
link |
01:00:30.260
Well, actually her mother created it.
link |
01:00:33.800
So my mother's mother's mother created the school in 1868.
link |
01:00:38.300
And it was the first school in Europe
link |
01:00:40.620
that provided higher education for girls.
link |
01:00:42.660
It went through 14th grade.
link |
01:00:45.620
If you were a girl and you were lucky enough
link |
01:00:48.260
to get an education at all,
link |
01:00:50.700
it would go through like ninth grade.
link |
01:00:52.940
And many people didn't have any education as a girl.
link |
01:00:56.660
This went through 14th grade.
link |
01:01:00.620
Her mother created it, she took it over,
link |
01:01:04.060
and the book was about the history of the school
link |
01:01:09.420
and her involvement with it.
link |
01:01:12.980
When she presented it to me,
link |
01:01:14.020
I was not so interested in the story of the school,
link |
01:01:19.020
but I was totally amazed with this manual typewriter.
link |
01:01:25.400
I mean, here is something you could put a blank piece
link |
01:01:27.860
of paper into and you could turn it into something
link |
01:01:31.080
that looked like it came from a book.
link |
01:01:33.780
And you can actually type on it
link |
01:01:34.620
and it looked like it came from a book.
link |
01:01:36.440
It was just amazing to me.
link |
01:01:39.120
And I could see actually how it worked.
link |
01:01:42.720
And I was also interested in magic.
link |
01:01:44.840
But in magic, if somebody actually knows how it works,
link |
01:01:50.420
the magic goes away.
link |
01:01:52.540
The magic doesn't stay there
link |
01:01:53.780
if you actually understand how it works.
link |
01:01:56.600
But here was technology.
link |
01:01:57.820
I didn't have that word when I was five or six.
link |
01:02:01.020
And the magic was still there for you?
link |
01:02:02.660
The magic was still there, even if you knew how it worked.
link |
01:02:06.900
So I became totally interested in this
link |
01:02:08.780
and then went around, collected little pieces
link |
01:02:12.580
of mechanical objects from bicycles, from broken radios.
link |
01:02:17.580
I would go through the neighborhood.
link |
01:02:20.500
This was an era where you would allow five or six year olds
link |
01:02:23.700
to run through the neighborhood and do this.
link |
01:02:26.340
We don't do that anymore.
link |
01:02:27.740
But I didn't know how to put them together.
link |
01:02:30.700
I said, if I could just figure out
link |
01:02:32.220
how to put these things together, I could solve any problem.
link |
01:02:37.340
And I actually remember talking to these very old girls.
link |
01:02:41.660
I think they were 10.
link |
01:02:45.340
And telling them, if I could just figure this out,
link |
01:02:48.340
we could fly, we could do anything.
link |
01:02:50.120
And they said, well, you have quite an imagination.
link |
01:02:56.220
And then when I was in third grade,
link |
01:03:00.780
so I was like eight,
link |
01:03:02.860
created like a virtual reality theater
link |
01:03:05.900
where people could come on stage
link |
01:03:07.780
and they could move their arms.
link |
01:03:09.900
And all of it was controlled through one control box.
link |
01:03:13.540
It was all done with mechanical technology.
link |
01:03:16.980
And it was a big hit in my third grade class.
link |
01:03:21.100
And then I went on to do things
link |
01:03:22.980
in junior high school science fairs
link |
01:03:24.900
and high school science fairs.
link |
01:03:27.660
I won the Westinghouse Science Talent Search.
link |
01:03:30.720
So I mean, I became committed to technology
link |
01:03:33.940
when I was five or six years old.
link |
01:03:37.460
You've talked about how you use lucid dreaming to think,
link |
01:03:43.100
to come up with ideas as a source of creativity.
link |
01:03:45.900
Can you maybe talk through that,
link |
01:03:49.360
maybe the process of how to think?
link |
01:03:52.020
you've invented a lot of things.
link |
01:03:54.060
You've come up with and thought through
link |
01:03:55.620
some very interesting ideas.
link |
01:03:58.100
What advice would you give,
link |
01:03:59.520
or can you speak to the process of thinking,
link |
01:04:03.420
of how to think, how to think creatively?
link |
01:04:07.100
Well, I mean, sometimes I will think through in a dream
link |
01:04:10.460
and try to interpret that.
link |
01:04:12.340
But I think the key issue that I would tell younger people
link |
01:04:22.220
is to put yourself in the position
link |
01:04:25.080
that what you're trying to create already exists.
link |
01:04:30.660
And then you're explaining, like...
link |
01:04:34.980
How it works.
link |
01:04:35.820
Exactly.
link |
01:04:38.220
That's really interesting.
link |
01:04:39.220
You paint a world that you would like to exist,
link |
01:04:42.780
you think it exists, and reverse engineer that.
link |
01:04:45.940
And then you actually imagine you're giving a speech
link |
01:04:47.900
about how you created this.
link |
01:04:50.140
Well, you'd have to then work backwards
link |
01:04:51.780
as to how you would create it in order to make it work.
link |
01:04:57.320
That's brilliant.
link |
01:04:58.160
And that requires some imagination too,
link |
01:05:01.420
some first principles thinking.
link |
01:05:03.140
You have to visualize that world.
link |
01:05:06.040
That's really interesting.
link |
01:05:07.720
And generally, when I talk about things
link |
01:05:10.600
we're trying to invent, I would use the present tense
link |
01:05:13.160
as if it already exists.
link |
01:05:15.880
Not just to give myself that confidence,
link |
01:05:18.280
but everybody else who's working on it.
link |
01:05:21.840
We just have to kind of do all the steps
link |
01:05:26.640
in order to make it actual.
link |
01:05:31.040
How much of a good idea is about timing?
link |
01:05:35.400
How much is it about your genius
link |
01:05:37.040
versus that its time has come?
link |
01:05:41.620
Timing's very important.
link |
01:05:42.920
I mean, that's really why I got into futurism.
link |
01:05:46.200
I didn't, I wasn't inherently a futurist.
link |
01:05:50.500
That was not really my goal.
link |
01:05:54.320
It's really to figure out when things are feasible.
link |
01:05:57.400
We see that now with large scale models.
link |
01:06:01.680
The very large scale models like GPT-3,
link |
01:06:06.400
it started two years ago.
link |
01:06:09.600
Four years ago, it wasn't feasible.
link |
01:06:11.160
In fact, they did create GPT-2, which didn't work.
link |
01:06:18.800
So it required a certain amount of timing
link |
01:06:22.360
having to do with this exponential growth
link |
01:06:24.200
of computing power.
link |
01:06:27.400
So futurism in some sense is a study of timing,
link |
01:06:31.240
trying to understand how the world will evolve
link |
01:06:34.400
and when will the capacity for certain ideas emerge.
link |
01:06:38.320
And that's become a thing in itself
link |
01:06:40.040
and to try to time things in the future.
link |
01:06:43.960
But really its original purpose was to time my products.
link |
01:06:48.960
I mean, I did OCR in the 1970s
link |
01:06:55.480
because OCR doesn't require a lot of computation.
link |
01:07:01.440
Optical character recognition.
link |
01:07:02.760
Yeah, so we were able to do that in the 70s
link |
01:07:06.560
and I waited till the 80s to address speech recognition
link |
01:07:11.000
since that requires more computation.
link |
01:07:14.480
So you were thinking through timing
link |
01:07:16.000
when you're developing those things.
link |
01:07:17.480
Yeah.
link |
01:07:18.320
Time come.
link |
01:07:19.880
Yeah.
link |
01:07:21.400
And that's how you've developed that brain power
link |
01:07:24.360
to start to think in a futurist sense
link |
01:07:26.720
when how will the world look like in 2045
link |
01:07:31.040
and work backwards and how it gets there.
link |
01:07:33.640
But that has to become a thing in itself
link |
01:07:35.360
because looking at what things will be like in the future
link |
01:07:40.360
and how the future reflects such dramatic changes in how humans will live,
link |
01:07:48.680
that was worth communicating also.
link |
01:07:51.240
So you developed that muscle of predicting the future
link |
01:07:56.360
and then applied broadly
link |
01:07:58.280
and started to discuss how it changes the world of technology,
link |
01:08:02.280
how it changes the world of human life on earth.
link |
01:08:06.800
In Danielle, one of your books,
link |
01:08:09.000
you write about someone who has the courage
link |
01:08:11.600
to question assumptions that limit human imagination
link |
01:08:15.040
to solve problems.
link |
01:08:16.640
And you also give advice
link |
01:08:18.560
on how each of us can have this kind of courage.
link |
01:08:22.760
Well, it's good that you picked that quote
link |
01:08:24.520
because I think that does symbolize what Danielle is about.
link |
01:08:27.480
Courage.
link |
01:08:28.760
So how can each of us have that courage
link |
01:08:30.760
to question assumptions?
link |
01:08:33.760
I mean, we see that when people can go beyond
link |
01:08:38.600
the current realm and create something that's new.
link |
01:08:43.880
I mean, take Uber, for example.
link |
01:08:45.520
Before that existed, you never thought
link |
01:08:48.120
that that would be feasible
link |
01:08:49.960
and it did require changes in the way people work.
link |
01:08:54.520
Is there practical advice as you give in the book
link |
01:08:57.840
about what each of us can do to be a Danielle?
link |
01:09:04.880
Well, she looks at the situation
link |
01:09:06.880
and tries to imagine how she can overcome various obstacles
link |
01:09:15.840
and then she goes for it.
link |
01:09:17.960
And she's a very good communicator
link |
01:09:19.680
so she can communicate these ideas to other people.
link |
01:09:25.080
And there's practical advice of learning to program
link |
01:09:27.640
and recording your life and things of this nature.
link |
01:09:32.000
Become a physicist.
link |
01:09:33.240
So you list a bunch of different suggestions
link |
01:09:36.880
of how to throw yourself into this world.
link |
01:09:39.120
Yeah, I mean, it's kind of an idea
link |
01:09:42.200
how young people can actually change the world
link |
01:09:46.160
by learning all of these different skills.
link |
01:09:52.440
And at the core of that is the belief
link |
01:09:54.760
that you can change the world.
link |
01:09:57.840
That your mind, your body can change the world.
link |
01:10:00.480
Yeah, that's right.
link |
01:10:02.760
And not letting anyone else tell you otherwise.
link |
01:10:06.640
That's really good, exactly.
link |
01:10:08.920
When we upload the story you told about your dad
link |
01:10:13.440
and having a conversation with him,
link |
01:10:16.160
we're talking about uploading your mind to the computer.
link |
01:10:21.720
Do you think we'll have a future
link |
01:10:23.160
with something you call afterlife?
link |
01:10:25.640
We'll have avatars that mimic increasingly better and better
link |
01:10:29.840
our behavior, our appearance, all that kind of stuff.
link |
01:10:33.520
Even those that are perhaps no longer with us.
link |
01:10:36.800
Yes, I mean, we need some information about them.
link |
01:10:42.840
I mean, think about my father.
link |
01:10:45.640
I have what he wrote.
link |
01:10:48.080
Now, he didn't have a word processor,
link |
01:10:50.480
so he didn't actually write that much.
link |
01:10:53.680
And our memories of him aren't perfect.
link |
01:10:56.000
So how do you even know if you've created something
link |
01:10:59.840
that's satisfactory?
link |
01:11:00.840
Now, you could do a Frederick Kurzweil Turing test.
link |
01:11:04.920
It seems like Frederick Kurzweil to me.
link |
01:11:07.920
But the people who remember him, like me,
link |
01:11:11.240
don't have a perfect memory.
link |
01:11:14.400
Is there such a thing as a perfect memory?
link |
01:11:16.320
Maybe the whole point is for him to make you feel
link |
01:11:24.760
a certain way.
link |
01:11:25.600
Yeah, well, I think that would be the goal.
link |
01:11:28.400
And that's the connection we have with loved ones.
link |
01:11:30.360
It's not really based on a very strict definition of truth.
link |
01:11:35.120
It's more about the experiences we share.
link |
01:11:37.560
And they get morphed through memory.
link |
01:11:39.880
But ultimately, they make us smile.
link |
01:11:41.800
I think we definitely can do that.
link |
01:11:44.440
And that would be very worthwhile.
link |
01:11:46.800
So do you think we'll have a world of replicants?
link |
01:11:49.960
Of copies?
link |
01:11:51.280
There'll be a bunch of Ray Kurzweils.
link |
01:11:53.800
Like, I could hang out with one.
link |
01:11:55.320
I can download it for five bucks
link |
01:11:58.200
and have a best friend, Ray.
link |
01:12:01.680
And you, the original copy, wouldn't even know about it.
link |
01:12:07.160
Is that, do you think that world is,
link |
01:12:11.440
first of all, do you think that world is feasible?
link |
01:12:13.360
And do you think there's ethical challenges there?
link |
01:12:16.320
Like, how would you feel about me hanging out
link |
01:12:18.080
with Ray Kurzweil and you not knowing about it?
link |
01:12:20.480
It doesn't strike me as a problem.
link |
01:12:28.080
Which you, the original?
link |
01:12:30.240
Would that cause a problem for you?
link |
01:12:34.240
No, I would really very much enjoy it.
link |
01:12:37.480
No, not just hang out with me,
link |
01:12:38.760
but if somebody hangs out with you, a replicant of you.
link |
01:12:43.840
Well, I think at the start it sounds exciting,
link |
01:12:46.800
but then what if they start doing better than me
link |
01:12:51.560
and take over my friend group?
link |
01:12:55.000
And then, because they may be an imperfect copy
link |
01:13:02.280
or they may be more social, all these kinds of things,
link |
01:13:05.320
and then I become like the old version
link |
01:13:07.640
that's not nearly as exciting.
link |
01:13:10.240
Maybe they're a copy of the best version of me
link |
01:13:12.360
on a good day.
link |
01:13:13.200
Yeah, but if you hang out with a replicant of me
link |
01:13:18.020
and that turned out to be successful,
link |
01:13:20.200
I'd feel proud of that person because it was based on me.
link |
01:13:24.960
But it is a kind of death of this version of you.
link |
01:13:32.420
Well, not necessarily.
link |
01:13:33.960
I mean, you can still be alive, right?
link |
01:13:36.360
But, and you would be proud, okay,
link |
01:13:38.560
so it's like having kids and you're proud
link |
01:13:40.280
that they've done even more than you were able to do.
link |
01:13:42.720
Yeah, exactly.
link |
01:13:48.280
It does bring up new issues,
link |
01:13:50.040
but it seems like an opportunity.
link |
01:13:55.120
Well, that replicant should probably have the same rights
link |
01:13:57.840
as you do.
link |
01:13:59.680
Well, that gets into a whole issue
link |
01:14:05.420
because when a replicant occurs,
link |
01:14:07.400
they're not necessarily gonna have your rights.
link |
01:14:10.320
And if a replicant occurs,
link |
01:14:11.680
if it's somebody who's already dead,
link |
01:14:14.680
do they have all the obligations
link |
01:14:17.880
and that the original person had?
link |
01:14:21.160
Do they have all the agreements that they had?
link |
01:14:25.840
I think you're gonna have to have laws that say yes.
link |
01:14:30.200
There has to be, if you wanna create a replicant,
link |
01:14:33.260
they have to have all the same rights as human rights.
link |
01:14:35.720
Well, you don't know.
link |
01:14:37.080
Someone can create a replicant and say,
link |
01:14:38.400
well, it's a replicant,
link |
01:14:39.240
but I didn't bother getting their rights.
link |
01:14:40.920
And so.
link |
01:14:41.760
Yeah, but that would be illegal, I mean.
link |
01:14:43.720
Like if you do that, you have to do that in the black market.
link |
01:14:47.600
If you wanna get an official replicant.
link |
01:14:49.520
Okay, it's not so easy.
link |
01:14:51.000
Suppose you create multiple replicants.
link |
01:14:55.360
The original rights,
link |
01:14:59.840
maybe for one person and not for a whole group of people.
link |
01:15:04.640
Sure.
link |
01:15:08.520
So there has to be at least one.
link |
01:15:10.600
And then all the other ones kinda share the rights.
link |
01:15:14.260
Yeah, I just don't think that,
link |
01:15:16.480
that's very difficult for us humans to conceive,
link |
01:15:18.680
the idea that this could occur.
link |
01:15:20.760
You create a replicant that has certain,
link |
01:15:24.600
I mean, I've talked to people about this,
link |
01:15:26.800
including my wife, who would like to get back her father.
link |
01:15:32.640
And she doesn't worry about who has rights to what.
link |
01:15:38.280
She would have somebody that she could visit with
link |
01:15:40.440
and might give her some satisfaction.
link |
01:15:44.300
And she wouldn't care about any of these other rights.
link |
01:15:49.240
What does your wife think about multiple Ray Kurzweils?
link |
01:15:53.560
Have you had that discussion?
link |
01:15:54.400
I haven't addressed that with her.
link |
01:15:58.200
I think ultimately that's an important question,
link |
01:16:00.640
how loved ones feel about it.
link |
01:16:03.560
There's something about love.
link |
01:16:05.040
Well, that's the key thing, right?
link |
01:16:06.400
If the loved one's rejected,
link |
01:16:07.960
it's not gonna work very well, so.
link |
01:16:12.320
So the loved ones really are the key determinant,
link |
01:16:15.960
whether or not this works or not.
link |
01:16:19.760
But there's also ethical rules.
link |
01:16:22.840
We have to contend with the idea,
link |
01:16:24.200
and we have to contend with that idea with AI.
link |
01:16:27.920
But what's gonna motivate it is,
link |
01:16:30.320
I mean, I talk to people who really miss people who are gone
link |
01:16:34.680
and they would love to get something back,
link |
01:16:37.760
even if it isn't perfect.
link |
01:16:40.840
And that's what's gonna motivate this.
link |
01:16:47.120
And that person lives on in some form.
link |
01:16:51.200
And the more data we have,
link |
01:16:52.880
the more we're able to reconstruct that person
link |
01:16:56.080
and allow them to live on.
link |
01:16:59.360
And eventually as we go forward,
link |
01:17:01.440
we're gonna have more and more of this data
link |
01:17:03.160
because we're gonna have nanobots
link |
01:17:06.360
that are inside our neocortex
link |
01:17:08.360
and we're gonna collect a lot of data.
link |
01:17:11.120
In fact, anything that's data is always collected.
link |
01:17:15.800
There is something a little bit sad,
link |
01:17:18.680
which is becoming, or maybe it's hopeful,
link |
01:17:23.200
which is more and more common these days,
link |
01:17:26.800
which when a person passes away,
link |
01:17:28.360
you have their Twitter account,
link |
01:17:31.080
and you have the last tweet they tweeted,
link |
01:17:34.080
like something they needed.
link |
01:17:35.040
And you can recreate them now
link |
01:17:36.520
with large language models and so on.
link |
01:17:38.360
I mean, you can create somebody that's just like them
link |
01:17:40.880
and can actually continue to communicate.
link |
01:17:45.040
I think that's really exciting
link |
01:17:46.440
because I think in some sense,
link |
01:17:49.360
like if I were to die today,
link |
01:17:51.760
in some sense I would continue on if I continued tweeting.
link |
01:17:56.120
I tweet, therefore I am.
link |
01:17:58.880
Yeah, well, I mean, that's one of the advantages
link |
01:18:02.040
of a replicant, they can recreate the communications
link |
01:18:06.600
of that person.
link |
01:18:10.320
Do you hope, do you think, do you hope
link |
01:18:14.400
humans will become a multiplanetary species?
link |
01:18:17.440
You've talked about the phases, the six epochs,
link |
01:18:20.040
and one of them is reaching out into the stars in part.
link |
01:18:23.600
Yes, but the kind of attempts we're making now
link |
01:18:28.240
to go to other planetary objects
link |
01:18:34.400
doesn't excite me that much
link |
01:18:36.560
because it's not really advancing anything.
link |
01:18:38.840
It's not efficient enough?
link |
01:18:41.160
Yeah, and we're also sending out human beings,
link |
01:18:48.120
which is a very inefficient way
link |
01:18:50.440
to explore these other objects.
link |
01:18:52.600
What I'm really talking about in the sixth epoch,
link |
01:18:57.800
the universe wakes up.
link |
01:19:00.240
It's where we can spread our super intelligence
link |
01:19:03.080
throughout the universe.
link |
01:19:05.400
And that doesn't mean sending very soft,
link |
01:19:08.120
squishy creatures like humans.
link |
01:19:10.200
Yeah, the universe wakes up.
link |
01:19:13.840
I mean, we would send intelligent masses of nanobots
link |
01:19:18.840
which can then go out and colonize
link |
01:19:24.880
these other parts of the universe.
link |
01:19:29.000
Do you think there's intelligent alien civilizations
link |
01:19:31.600
out there that our bots might meet?
link |
01:19:35.240
My hunch is no.
link |
01:19:38.760
Most people say yes, absolutely.
link |
01:19:40.720
I mean, and the universe is too big.
link |
01:19:43.480
And they'll cite the Drake equation.
link |
01:19:46.160
And I think in Singularity is Near,
link |
01:19:52.560
I have two analyses of the Drake equation,
link |
01:19:56.200
both with very reasonable assumptions.
link |
01:20:00.000
And one gives you thousands of advanced civilizations
link |
01:20:04.960
in each galaxy.
link |
01:20:07.440
And another one gives you one civilization.
link |
01:20:11.840
And we know of one.
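For reference, the Drake equation multiplies a chain of factors: N = R* · fp · ne · fl · fi · fc · L. A small sketch below shows how two sets of plausible-sounding inputs diverge by orders of magnitude, exactly the spread Kurzweil mentions; the parameter values are hypothetical stand-ins, not the ones from his book.

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L.
# Two hypothetical parameter sets, chosen only to show how "reasonable
# assumptions" can yield thousands of civilizations per galaxy or one.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Expected number of communicating civilizations in a galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Optimistic: life, intelligence, and communication arise easily and last.
optimistic = drake(7, 0.5, 2, 0.5, 0.5, 0.5, 10_000)

# Pessimistic: each step is rarer and civilizations are shorter-lived.
pessimistic = drake(1, 0.2, 0.1, 0.5, 0.1, 0.2, 5_000)

print(f"optimistic:  {optimistic:,.0f}")  # 8,750 civilizations
print(f"pessimistic: {pessimistic:.1f}")  # 1.0 civilization
```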
link |
01:20:13.680
A lot of the analyses are forgetting
link |
01:20:16.600
the exponential growth of computation.
link |
01:20:21.200
Because we've gone from where the fastest way
link |
01:20:24.840
I could send a message to somebody was with a pony,
link |
01:20:30.160
which was what, like a century and a half ago?
link |
01:20:34.920
To the advanced civilization we have today.
link |
01:20:37.880
And if you accept what I've said,
link |
01:20:40.880
go forward a few decades,
link |
01:20:42.720
you can have an absolutely fantastic amount of civilization
link |
01:20:46.800
compared to a pony, and that's in a couple hundred years.
link |
01:20:50.400
Yeah, the speed and the scale of information transfer
link |
01:20:53.320
is growing exponentially in a blink of an eye.
link |
01:20:58.720
Now think about these other civilizations.
link |
01:21:01.680
They're gonna be spread out across cosmic time.
link |
01:21:06.320
So if something is like ahead of us or behind us,
link |
01:21:10.160
it could be ahead of us or behind us by maybe millions
link |
01:21:14.280
of years, which isn't that much.
link |
01:21:16.440
I mean, the universe is billions of years old,
link |
01:21:21.520
14 billion or something.
link |
01:21:23.960
So even a thousand years, if 200 or 300 years is enough
link |
01:21:29.760
to go from a pony to a fantastic amount of civilization,
link |
01:21:33.920
we would see that.
link |
01:21:35.920
So of other civilizations that have occurred,
link |
01:21:39.720
okay, some might be behind us, but some might be ahead of us.
link |
01:21:43.960
If they're ahead of us, they're ahead of us
link |
01:21:45.800
by thousands, millions of years,
link |
01:21:49.560
and they would be so far beyond us,
link |
01:21:51.760
they would be doing galaxy-wide engineering.
link |
01:21:56.200
But we don't see anything doing galaxy-wide engineering.
link |
01:22:00.080
So either they don't exist, or this very universe
link |
01:22:05.120
is a construction of an alien species.
link |
01:22:08.340
We're living inside a video game.
link |
01:22:11.760
Well, that's another explanation that yes,
link |
01:22:14.840
you've got some teenage kids in another civilization.
link |
01:22:19.140
Do you find compelling the simulation hypothesis
link |
01:22:22.280
as a thought experiment that we're living in a simulation?
link |
01:22:25.640
The universe is computational.
link |
01:22:29.400
So we are an example in a computational world.
link |
01:22:34.400
Therefore, it is a simulation.
link |
01:22:39.120
It doesn't necessarily mean an experiment
link |
01:22:41.040
by some high school kid in another world,
link |
01:22:44.820
but it nonetheless is taking place
link |
01:22:47.800
in a computational world.
link |
01:22:50.120
And everything that's going on
link |
01:22:51.600
is basically a form of computation.
link |
01:22:58.080
So you really have to define what you mean
link |
01:23:00.640
by this whole world being a simulation.
link |
01:23:06.360
Well, then it's the teenager that makes the video game.
link |
01:23:12.400
Us humans with our current limited cognitive capability
link |
01:23:16.720
have strived to understand ourselves
link |
01:23:20.560
and we have created religions.
link |
01:23:23.840
We think of God.
link |
01:23:25.160
Whatever that is, do you think God exists?
link |
01:23:32.240
And if so, who is God?
link |
01:23:35.440
I alluded to this before.
link |
01:23:37.720
We started out with lots of particles going around
link |
01:23:42.840
and there's nothing that represents love and creativity.
link |
01:23:53.160
And somehow we've gotten into a world
link |
01:23:55.000
where love actually exists
link |
01:23:57.920
and that has to do actually with consciousness
link |
01:23:59.960
because you can't have love without consciousness.
link |
01:24:03.120
So to me, that's God, the fact that we have something
link |
01:24:06.720
where love, where you can be devoted to someone else
link |
01:24:11.200
and really feel the love, that's God.
link |
01:24:19.080
And if you look at the Old Testament,
link |
01:24:21.040
it was actually created by several different
link |
01:24:26.680
authors.
link |
01:24:29.200
And I think they've identified three of them.
link |
01:24:34.080
One of them dealt with God as a person
link |
01:24:39.400
that you can make deals with and he gets angry
link |
01:24:42.440
and he wreaks vengeance on various people.
link |
01:24:48.320
But two of them actually talk about God
link |
01:24:50.440
as a symbol of love and peace and harmony and so forth.
link |
01:24:58.280
That's how they describe God.
link |
01:25:01.360
So that's my view of God, not as a person in the sky
link |
01:25:06.120
that you can make deals with.
link |
01:25:09.120
It's whatever the magic that goes from basic elements
link |
01:25:13.200
to things like consciousness and love.
link |
01:25:15.960
Do you think one of the things I find
link |
01:25:19.200
extremely beautiful and powerful is cellular automata,
link |
01:25:22.240
which you also touch on?
link |
01:25:24.640
Do you think whatever the heck happens in cellular automata
link |
01:25:27.720
where interesting, complicated objects emerge,
link |
01:25:31.480
God is in there too?
link |
01:25:33.480
The emergence of love in this seemingly primitive universe?
link |
01:25:38.480
Well, that's the goal of creating a replicant
link |
01:25:42.600
is that they would love you and you would love them.
link |
01:25:47.600
There wouldn't be much point of doing it
link |
01:25:50.840
if that didn't happen.
link |
01:25:52.720
But all of it, I guess what I'm saying
link |
01:25:54.880
about cellular automata is it's primitive building blocks
link |
01:25:59.280
and they somehow create beautiful things.
link |
01:26:03.440
Is there some deep truth to that
link |
01:26:06.280
about how our universe works?
link |
01:26:07.960
Is the emergence from simple rules,
link |
01:26:11.160
beautiful, complex objects can emerge?
link |
01:26:14.080
Is that the thing that made us?
link |
01:26:16.680
Yeah, well. As we went through
link |
01:26:18.120
all the six phases of reality.
link |
01:26:21.560
That's a good way to look at it.
link |
01:26:23.660
It does make some point to the whole value
link |
01:26:27.320
of having a universe.
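Cellular automata make that point almost immediately when you run one. Here is a minimal elementary automaton, Wolfram's Rule 110, a standard example chosen by the editor rather than one named in the conversation: a single live cell and an eight-entry lookup table produce intricate, structured patterns.

```python
# Elementary cellular automaton, Rule 110: each cell's next state is a
# function only of its 3-cell neighborhood, yet complex structure emerges.

RULE = 110          # the 8-bit lookup table, encoded as an integer
WIDTH, STEPS = 64, 32

def step(cells):
    """Apply the rule to every cell using its left/self/right neighbors."""
    n = len(cells)
    out = []
    for i in range(n):
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((RULE >> pattern) & 1)
    return out

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start from a single live cell
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```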
link |
01:26:31.400
Do you think about your own mortality?
link |
01:26:34.040
Are you afraid of it?
link |
01:26:36.240
Yes, but I keep going back to my idea
link |
01:26:41.240
of being able to expand human life quickly enough
link |
01:26:48.080
in advance of our getting there, longevity escape velocity,
link |
01:26:55.880
which we're not quite at yet,
link |
01:26:58.600
but I think we're actually pretty close,
link |
01:27:01.520
particularly with, for example, doing simulated biology.
link |
01:27:06.600
I think we can probably get there within,
link |
01:27:08.920
say, by the end of this decade, and that's my goal.
link |
01:27:12.800
Do you hope to achieve the longevity escape velocity?
link |
01:27:16.400
Do you hope to achieve immortality?
link |
01:27:20.900
Well, immortality is hard to say.
link |
01:27:22.960
I can't really come on your program saying I've done it.
link |
01:27:26.080
I've achieved immortality because it's never forever.
link |
01:27:32.480
A long time, a long time of living well.
link |
01:27:35.280
But we'd like to actually advance
link |
01:27:37.180
human life expectancy, advance my life expectancy
link |
01:27:41.000
more than a year every year,
link |
01:27:44.000
and I think we can get there within,
link |
01:27:45.820
by the end of this decade.
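The arithmetic behind longevity escape velocity is simple to sketch: if each calendar year of aging is offset by more than one year of added remaining life expectancy, the projected horizon recedes instead of shrinking. The growth rates in this toy model are hypothetical, chosen only to illustrate the threshold.

```python
# Toy model of longevity escape velocity: remaining life expectancy
# loses 1 year per calendar year but gains `annual_gain` from medicine.
# Above a gain of 1.0 per year the horizon recedes indefinitely.

def simulate(initial_remaining: float, annual_gain: float, horizon: int = 50) -> str:
    remaining = initial_remaining
    for year in range(1, horizon + 1):
        remaining += annual_gain - 1.0  # one year passes, medicine adds some back
        if remaining <= 0:
            return f"runs out after {year} years"
    return f"still {remaining:.0f} years left after {horizon} years"

print("gain 0.3 yr/yr:", simulate(20, 0.3))  # below escape velocity
print("gain 1.2 yr/yr:", simulate(20, 1.2))  # beyond escape velocity
```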
link |
01:27:47.800
How do you think we'd do it?
link |
01:27:49.440
So there's practical things in Transcend,
link |
01:27:53.640
the nine steps to living well forever, your book.
link |
01:27:56.200
You describe just that.
link |
01:27:58.360
There's practical things like health,
link |
01:28:00.520
exercise, all those things.
link |
01:28:02.280
Yeah, I mean, we live in a body
link |
01:28:03.920
that doesn't last forever.
link |
01:28:08.280
There's no reason why it can't, though,
link |
01:28:11.160
and we're discovering things, I think, that will extend it.
link |
01:28:17.640
But you do have to deal with,
link |
01:28:19.400
I mean, I've got various issues.
link |
01:28:23.240
Went to Mexico 40 years ago, developed salmonella.
link |
01:28:28.240
It created pancreatitis, which gave me
link |
01:28:33.060
a strange form of diabetes.
link |
01:28:37.380
It's not type one diabetes, because that's an autoimmune
link |
01:28:42.900
disorder that destroys your pancreas.
link |
01:28:44.860
I don't have that.
link |
01:28:46.780
But it's also not type two diabetes,
link |
01:28:48.700
because with type two diabetes your pancreas works fine,
link |
01:28:51.740
but your cells don't absorb the insulin well.
link |
01:28:55.740
I don't have that either.
link |
01:28:58.460
The pancreatitis I had partially damaged my pancreas,
link |
01:29:04.560
but it was a one time thing.
link |
01:29:06.100
It didn't continue, and I've learned now how to control it.
link |
01:29:11.620
But so that's just something that I had to do
link |
01:29:15.300
in order to continue to exist.
link |
01:29:18.540
Given your particular biological system,
link |
01:29:20.460
you had to figure out a few hacks,
link |
01:29:22.540
and the idea is that science would be able
link |
01:29:24.900
to do that much better, actually.
link |
01:29:26.420
Yeah, so I mean, I do spend a lot of time
link |
01:29:29.340
just tinkering with my own body to keep it going.
link |
01:29:34.240
So I do think I'll last till the end of this decade,
link |
01:29:37.740
and I think we'll achieve longevity escape velocity.
link |
01:29:41.660
I think that we'll start with people
link |
01:29:43.540
who are very diligent about this.
link |
01:29:46.180
Eventually, it'll become sort of routine
link |
01:29:48.860
that people will be able to do it.
link |
01:29:51.400
So if you're talking about kids today,
link |
01:29:54.300
or even people in their 20s or 30s,
link |
01:29:56.700
that's really not a very serious problem.
link |
01:30:01.300
I have had some discussions with relatives
link |
01:30:05.100
who are like almost 100, and saying,
link |
01:30:10.340
well, we're working on it as quickly as possible.
link |
01:30:13.380
I don't know if that's gonna work.
link |
01:30:16.460
Is there a case, this is a difficult question,
link |
01:30:18.400
but is there a case to be made against living forever
link |
01:30:23.400
that a finite life, that mortality is a feature, not a bug,
link |
01:30:29.620
that living a shorter life, that dying, makes ice cream
link |
01:30:36.020
taste delicious, makes life intensely beautiful
link |
01:30:40.260
more than it otherwise might be?
link |
01:30:42.220
Most people believe that way, except if you present
link |
01:30:46.820
the death of anybody they care about or love,
link |
01:30:51.500
they find that extremely depressing.
link |
01:30:55.300
And I know people who feel that way
link |
01:30:58.420
20, 30, 40 years later, they still want them back.
link |
01:31:06.200
So I mean, death is not something to celebrate,
link |
01:31:11.820
but we've lived in a world where people just accept this.
link |
01:31:16.260
Life is short, you see it all the time on TV,
link |
01:31:18.340
oh, life's short, you have to take advantage of it
link |
01:31:21.340
and nobody accepts the fact that you could actually
link |
01:31:23.860
go beyond normal lifetimes.
link |
01:31:27.940
But anytime we talk about death or a death of a person,
link |
01:31:31.580
even one death is a terrible tragedy.
link |
01:31:35.420
If you have somebody that lives to 100 years old,
link |
01:31:39.000
we still love them, and they love us in return.
link |
01:31:43.820
And there's no limitation to that.
link |
01:31:47.660
In fact, these kinds of trends are gonna provide
link |
01:31:52.000
greater and greater opportunity for everybody,
link |
01:31:54.700
even if we have more people.
link |
01:31:57.100
So let me ask about an alien species
link |
01:32:00.320
or a super intelligent AI 500 years from now
link |
01:32:03.060
that will look back and remember Ray Kurzweil version zero.
link |
01:32:11.140
Before the replicants spread,
link |
01:32:13.140
how do you hope they remember you
link |
01:32:17.020
in a Hitchhiker's Guide to the Galaxy-style summary of Ray Kurzweil?
link |
01:32:21.420
What do you hope your legacy is?
link |
01:32:24.120
Well, I mean, I do hope to be around, so that's.
link |
01:32:26.740
Some version of you, yes.
link |
01:32:27.900
So.
link |
01:32:29.740
Do you think you'll be the same person around?
link |
01:32:32.140
I mean, am I the same person I was when I was 20 or 10?
link |
01:32:37.020
You would be the same person in that same way,
link |
01:32:39.780
but yes, we're different, we're different.
link |
01:32:44.420
All we have of that, all you have of that person
link |
01:32:46.900
is your memories, which are probably distorted in some way.
link |
01:32:53.700
Maybe you just remember the good parts,
link |
01:32:55.860
depending on your psyche.
link |
01:32:57.860
You might focus on the bad parts,
link |
01:32:59.740
might focus on the good parts.
link |
01:33:02.820
Right, but I mean, I still have a relationship
link |
01:33:06.500
to the way I was when I was earlier, when I was younger.
link |
01:33:11.860
How will you and the other super intelligent AIs
link |
01:33:14.220
remember you of today from 500 years ago?
link |
01:33:18.940
What do you hope to be remembered by this version of you
link |
01:33:22.860
before the singularity?
link |
01:33:25.640
Well, I think it's expressed well in my books,
link |
01:33:28.220
trying to create some new realities that people will accept.
link |
01:33:32.620
I mean, that's something that gives me great pleasure,
link |
01:33:40.320
and greater insight into what makes humans valuable.
link |
01:33:49.720
I'm not the only person who's tempted to comment on that.
link |
01:33:57.220
And the optimism that permeates your work.
link |
01:34:00.700
Optimism about the future, because ultimately that optimism
link |
01:34:04.700
paves the way for building a better future.
link |
01:34:06.780
Yeah, I agree with that.
link |
01:34:10.100
So you asked your dad about the meaning of life,
link |
01:34:15.360
and he said, love, let me ask you the same question.
link |
01:34:19.260
What's the meaning of life?
link |
01:34:21.200
Why are we here?
link |
01:34:22.900
This beautiful journey that we're on in phase four,
link |
01:34:26.900
reaching for phase five of this evolution
link |
01:34:32.540
in information processing, why?
link |
01:34:35.500
Well, I think I'd give the same answers as my father.
link |
01:34:42.020
Because if there were no love,
link |
01:34:43.860
and we didn't care about anybody,
link |
01:34:46.580
there'd be no point existing.
link |
01:34:49.540
Love is the meaning of life.
link |
01:34:51.460
The AI version of your dad had a good point.
link |
01:34:54.260
Well, I think that's a beautiful way to end it.
link |
01:34:57.820
Ray, thank you for your work.
link |
01:34:59.260
Thank you for being who you are.
link |
01:35:01.100
Thank you for dreaming about a beautiful future
link |
01:35:03.420
and creating it along the way.
link |
01:35:06.460
And thank you so much for spending
link |
01:35:09.340
your really valuable time with me today.
link |
01:35:10.900
This was awesome.
link |
01:35:12.260
It was my pleasure, and you have some great insights,
link |
01:35:16.300
both into me and into humanity as well, so I appreciate that.
link |
01:35:21.440
Thanks for listening to this conversation
link |
01:35:22.980
with Ray Kurzweil.
link |
01:35:24.340
To support this podcast,
link |
01:35:25.580
please check out our sponsors in the description.
link |
01:35:28.380
And now, let me leave you with some words
link |
01:35:30.420
from Isaac Asimov.
link |
01:35:32.820
It is change, continuous change, inevitable change
link |
01:35:37.660
that is the dominant factor in society today.
link |
01:35:41.060
No sensible decision can be made any longer
link |
01:35:43.860
without taking into account not only the world as it is,
link |
01:35:47.320
but the world as it will be.
link |
01:35:49.560
This, in turn, means that our statesmen,
link |
01:35:52.540
our businessmen, our everyman,
link |
01:35:55.340
must take on a science fictional way of thinking.
link |
01:35:58.460
Thank you for listening, and hope to see you next time.