
Ben Goertzel: Artificial General Intelligence | Lex Fridman Podcast #103



link |
00:00:00.000
The following is a conversation with Ben Goertzel,
link |
00:00:03.000
one of the most interesting minds
link |
00:00:04.560
in the artificial intelligence community.
link |
00:00:06.680
He's the founder of SingularityNet,
link |
00:00:08.920
designer of OpenCog AI Framework,
link |
00:00:11.520
formerly a director of research
link |
00:00:13.220
at the Machine Intelligence Research Institute,
link |
00:00:15.720
and chief scientist of Hanson Robotics,
link |
00:00:18.440
the company that created the Sophia robot.
link |
00:00:21.000
He has been a central figure in the AGI community
link |
00:00:23.680
for many years, including in his organizing
link |
00:00:26.960
and contributing to the conference
link |
00:00:28.720
on artificial general intelligence,
link |
00:00:30.920
the 2020 version of which is actually happening this week,
link |
00:00:34.440
Wednesday, Thursday, and Friday.
link |
00:00:36.480
It's virtual and free.
link |
00:00:38.480
I encourage you to check out the talks,
link |
00:00:40.040
including by Joscha Bach from episode 101 of this podcast.
link |
00:00:45.160
Quick summary of the ads.
link |
00:00:46.600
Two sponsors, The Jordan Harbinger Show and Masterclass.
link |
00:00:51.040
Please consider supporting this podcast
link |
00:00:52.800
by going to jordanharbinger.com slash lex
link |
00:00:56.500
and signing up at masterclass.com slash lex.
link |
00:01:00.380
Click the links, buy all the stuff.
link |
00:01:02.840
It's the best way to support this podcast
link |
00:01:04.640
and the journey I'm on in my research and startup.
link |
00:01:08.840
This is the Artificial Intelligence Podcast.
link |
00:01:11.480
If you enjoy it, subscribe on YouTube,
link |
00:01:13.680
review it with five stars on Apple Podcast,
link |
00:01:15.940
support it on Patreon, or connect with me on Twitter
link |
00:01:18.920
at lexfriedman, spelled without the E, just F R I D M A N.
link |
00:01:23.920
As usual, I'll do a few minutes of ads now
link |
00:01:25.980
and never any ads in the middle
link |
00:01:27.340
that can break the flow of the conversation.
link |
00:01:29.980
This episode is supported by The Jordan Harbinger Show.
link |
00:01:33.340
Go to jordanharbinger.com slash lex.
link |
00:01:35.900
It's how he knows I sent you.
link |
00:01:37.740
On that page, there's links to subscribe to it
link |
00:01:40.140
on Apple Podcast, Spotify, and everywhere else.
link |
00:01:43.300
I've been binging on his podcast.
link |
00:01:45.100
Jordan is great.
link |
00:01:46.220
He gets the best out of his guests,
link |
00:01:47.900
dives deep, calls them out when it's needed,
link |
00:01:50.100
and makes the whole thing fun to listen to.
link |
00:01:52.260
He's interviewed Kobe Bryant, Mark Cuban,
link |
00:01:55.340
Neil deGrasse Tyson, Garry Kasparov, and many more.
link |
00:01:59.060
His conversation with Kobe is a reminder
link |
00:02:01.540
how much focus and hard work is required for greatness
link |
00:02:06.060
in sport, business, and life.
link |
00:02:09.540
I highly recommend the episode if you want to be inspired.
link |
00:02:12.420
Again, go to jordanharbinger.com slash lex.
link |
00:02:15.940
It's how Jordan knows I sent you.
link |
00:02:18.900
This show is sponsored by Master Class.
link |
00:02:21.300
Sign up at masterclass.com slash lex
link |
00:02:24.300
to get a discount and to support this podcast.
link |
00:02:27.660
When I first heard about Master Class,
link |
00:02:29.220
I thought it was too good to be true.
link |
00:02:31.060
For 180 bucks a year, you get an all access pass
link |
00:02:34.220
to watch courses from, to list some of my favorites,
link |
00:02:37.540
Chris Hadfield on Space Exploration,
link |
00:02:39.780
Neil deGrasse Tyson on Scientific Thinking
link |
00:02:41.780
and Communication, Will Wright, creator of
link |
00:02:44.980
the greatest city building game ever, Sim City,
link |
00:02:47.940
and Sims, on game design,
link |
00:02:50.700
Carlos Santana on Guitar,
link |
00:02:54.860
Garry Kasparov, the greatest chess player ever, on chess,
link |
00:02:59.460
Daniel Negreanu on Poker, and many more.
link |
00:03:01.980
Chris Hadfield explaining how rockets work
link |
00:03:04.660
and the experience of being launched into space alone
link |
00:03:07.300
is worth the money.
link |
00:03:08.700
Once again, sign up at masterclass.com slash lex
link |
00:03:12.020
to get a discount and to support this podcast.
link |
00:03:15.820
Now, here's my conversation with Ben Goertzel.
link |
00:03:20.780
What books, authors, ideas had a lot of impact on you
link |
00:03:25.100
in your life in the early days?
link |
00:03:27.900
You know, what got me into AI and science fiction
link |
00:03:32.180
and such in the first place wasn't a book,
link |
00:03:34.580
but the original Star Trek TV show,
link |
00:03:37.020
which my dad watched with me like in its first run.
link |
00:03:39.860
It would have been 1968, 69 or something,
link |
00:03:42.700
and that was incredible because every show
link |
00:03:45.340
they visited a different alien civilization
link |
00:03:49.140
with different culture and weird mechanisms.
link |
00:03:51.300
But that got me into science fiction,
link |
00:03:55.020
and there wasn't that much science fiction
link |
00:03:57.180
to watch on TV at that stage,
link |
00:03:58.660
so that got me into reading the whole literature
link |
00:04:01.420
of science fiction, you know,
link |
00:04:03.340
from the beginning of the previous century until that time.
link |
00:04:07.500
And I mean, there was so many science fiction writers
link |
00:04:10.860
who were inspirational to me.
link |
00:04:12.420
I'd say if I had to pick two,
link |
00:04:14.820
it would have been Stanisław Lem, the Polish writer.
link |
00:04:18.820
Yeah, Solaris, and then he had a bunch
link |
00:04:22.020
of more obscure writings on superhuman AIs
link |
00:04:25.780
that were engineered.
link |
00:04:26.620
Solaris was sort of a superhuman,
link |
00:04:28.660
naturally occurring intelligence.
link |
00:04:31.540
Then Philip K. Dick, who, you know,
link |
00:04:34.780
ultimately my fandom for Philip K. Dick
link |
00:04:37.340
is one of the things that brought me together
link |
00:04:39.100
with David Hansen, my collaborator on robotics projects.
link |
00:04:43.740
So, you know, Stanisław Lem was very much an intellectual,
link |
00:04:47.620
right, so he had a very broad view of intelligence
link |
00:04:51.020
going beyond the human and into what I would call,
link |
00:04:54.420
you know, open ended superintelligence.
link |
00:04:56.900
The Solaris superintelligent ocean was intelligent,
link |
00:05:01.900
in some ways more generally intelligent than people,
link |
00:05:04.420
but in a complex and confusing way
link |
00:05:07.340
so that human beings could never quite connect to it,
link |
00:05:10.180
but it was still probably very, very smart.
link |
00:05:13.260
And then the Golem XIV supercomputer
link |
00:05:16.620
in one of Lem's books, this was engineered by people,
link |
00:05:20.420
but eventually it became very intelligent
link |
00:05:24.420
in a different direction than humans
link |
00:05:26.020
and decided that humans were kind of trivial,
link |
00:05:29.260
not that interesting.
link |
00:05:30.260
So it put some impenetrable shield around itself,
link |
00:05:35.300
shut itself off from humanity,
link |
00:05:36.700
and then issued some philosophical screed
link |
00:05:40.060
about the pathetic and hopeless nature of humanity
link |
00:05:44.540
and all human thought, and then disappeared.
link |
00:05:48.380
Now, Philip K. Dick, he was a bit different.
link |
00:05:51.140
He was human focused, right?
link |
00:05:52.460
His main thing was, you know, human compassion
link |
00:05:55.860
and the human heart and soul are going to be the constant
link |
00:05:59.540
that will keep us going through whatever aliens we discover
link |
00:06:03.620
or telepathy machines or super AIs or whatever it might be.
link |
00:06:08.620
So he didn't believe in reality,
link |
00:06:10.660
like the reality that we see may be a simulation
link |
00:06:13.740
or a dream or something else we can't even comprehend,
link |
00:06:17.100
but he believed in love and compassion
link |
00:06:19.100
as something persistent
link |
00:06:20.660
through the various simulated realities.
link |
00:06:22.460
So those two science fiction writers had a huge impact on me.
link |
00:06:26.740
Then a little older than that, I got into Dostoevsky
link |
00:06:30.300
and Friedrich Nietzsche and Rimbaud
link |
00:06:33.620
and a bunch of more literary type writing.
link |
00:06:36.900
Can we talk about some of those things?
link |
00:06:38.620
So on the Solaris side, Stanislaw Lem,
link |
00:06:43.180
this kind of idea of there being intelligences out there
link |
00:06:47.020
that are different than our own,
link |
00:06:49.540
do you think there are intelligences maybe all around us
link |
00:06:53.020
that we're not able to even detect?
link |
00:06:56.420
So this kind of idea of,
link |
00:06:58.700
maybe you can comment also on Stephen Wolfram
link |
00:07:01.580
thinking that there's computations all around us
link |
00:07:04.340
and we're just not smart enough to kind of detect
link |
00:07:07.460
their intelligence or appreciate their intelligence.
link |
00:07:10.380
Yeah, so my friend Hugo de Garis,
link |
00:07:13.540
who I've been talking to about these things
link |
00:07:15.780
for many decades, since the early 90s,
link |
00:07:19.300
he had an idea he called SIPI,
link |
00:07:21.740
the Search for Intraparticulate Intelligence.
link |
00:07:25.100
So the concept there was as AIs get smarter
link |
00:07:28.140
and smarter and smarter,
link |
00:07:30.820
assuming the laws of physics as we know them now
link |
00:07:33.660
are still what these super intelligences
link |
00:07:37.420
perceive to hold and are bound by,
link |
00:07:39.220
as they get smarter and smarter,
link |
00:07:40.420
they're gonna shrink themselves littler and littler
link |
00:07:42.980
because special relativity limits how fast
link |
00:07:45.380
they can communicate
link |
00:07:47.220
between two spatially distant points.
link |
00:07:49.300
So they're gonna get smaller and smaller,
link |
00:07:50.780
but then ultimately, what does that mean?
link |
00:07:53.220
The minds of the super, super, super intelligences,
link |
00:07:56.500
they're gonna be packed into the interaction
link |
00:07:59.020
of elementary particles or quarks
link |
00:08:01.940
or the partons inside quarks or whatever it is.
link |
00:08:04.580
So what we perceive as random fluctuations
link |
00:08:07.620
on the quantum or sub quantum level
link |
00:08:09.740
may actually be the thoughts
link |
00:08:11.500
of the micro, micro, micro miniaturized super intelligences
link |
00:08:16.300
because there's no way we can tell random
link |
00:08:19.140
from structured but with algorithmic information
link |
00:08:21.620
more complex than our brains, right?
link |
00:08:23.100
We can't tell the difference.
link |
00:08:24.300
So what we think is random could be the thought processes
link |
00:08:27.020
of some really tiny super minds.
link |
00:08:29.980
And if so, there is not a damn thing we can do about it,
link |
00:08:34.020
except try to upgrade our intelligences
link |
00:08:37.180
and expand our minds so that we can perceive
link |
00:08:40.060
more of what's around us.
link |
00:08:41.300
But if those random fluctuations,
link |
00:08:43.980
like even if we go to like quantum mechanics,
link |
00:08:46.540
if that's actually super intelligent systems,
link |
00:08:51.220
aren't we then part of the super intelligence?
link |
00:08:54.620
Aren't we just like a finger of the entirety
link |
00:08:58.340
of the body of the super intelligent system?
link |
00:09:01.300
It could be, I mean, a finger is a strange metaphor.
link |
00:09:05.940
I mean, we...
link |
00:09:08.060
A finger is dumb is what I mean.
link |
00:09:10.700
But the finger is also useful
link |
00:09:12.260
and is controlled with intent by the brain
link |
00:09:14.780
whereas we may be much less than that, right?
link |
00:09:16.700
I mean, yeah, we may be just some random epiphenomenon
link |
00:09:21.340
that they don't care about too much.
link |
00:09:23.300
Like think about the shape of the crowd emanating
link |
00:09:26.380
from a sports stadium or something, right?
link |
00:09:28.260
There's some emergent shape to the crowd, it's there.
link |
00:09:31.580
You could take a picture of it, it's kind of cool.
link |
00:09:33.700
It's irrelevant to the main point of the sports event
link |
00:09:36.300
or where the people are going
link |
00:09:37.860
or what's on the minds of the people
link |
00:09:40.220
making that shape in the crowd, right?
link |
00:09:41.860
So we may just be some semi arbitrary higher level pattern
link |
00:09:47.660
popping out of a lower level
link |
00:09:49.700
hyper intelligent self organization.
link |
00:09:52.260
And I mean, so be it, right?
link |
00:09:55.860
I mean, that's one thing that...
link |
00:09:57.060
Yeah, I mean, the older I've gotten,
link |
00:09:59.500
the more respect I've achieved for our fundamental ignorance.
link |
00:10:04.220
I mean, mine and everybody else's.
link |
00:10:06.260
I mean, I look at my two dogs,
link |
00:10:08.820
two beautiful little toy poodles
link |
00:10:10.940
and they watch me sitting at the computer typing.
link |
00:10:14.780
They just think I'm sitting there wiggling my fingers
link |
00:10:16.980
to exercise them maybe, or guarding the monitor on the desk.
link |
00:10:19.980
They have no idea that I'm communicating
link |
00:10:22.340
with other people halfway around the world,
link |
00:10:24.420
let alone creating complex algorithms
link |
00:10:27.660
running in RAM on some computer server
link |
00:10:30.220
in St. Petersburg or something, right?
link |
00:10:32.540
Although they're right there in the room with me.
link |
00:10:35.100
So what things are there right around us
link |
00:10:37.780
that we're just too stupid or close minded to comprehend?
link |
00:10:40.780
Probably quite a lot.
link |
00:10:42.140
Your very poodle could also be communicating
link |
00:10:46.220
across multiple dimensions with other beings
link |
00:10:49.980
and you're too unintelligent to understand
link |
00:10:53.180
the kind of communication mechanism they're going through.
link |
00:10:55.700
There have been various TV shows and science fiction novels,
link |
00:10:59.820
positing cats, dolphins, mice and whatnot
link |
00:11:03.220
are actually super intelligences here to observe us.
link |
00:11:07.220
I would guess, as one or another of the quantum physics founders
link |
00:11:12.580
said, those theories are not crazy enough to be true.
link |
00:11:15.500
The reality is probably crazier than that.
link |
00:11:17.660
Beautifully put.
link |
00:11:18.500
So on the human side, with Philip K. Dick
link |
00:11:22.020
and in general, where do you fall on this idea
link |
00:11:27.260
that love and just the basic spirit of human nature
link |
00:11:30.580
persists throughout these multiple realities?
link |
00:11:34.980
Are you on the side, like the thing that inspires you
link |
00:11:38.420
about artificial intelligence,
link |
00:11:40.980
is it the human side of somehow persisting
link |
00:11:46.740
through all of the different systems we engineer
link |
00:11:49.820
or is AI inspire you to create something
link |
00:11:53.340
that's greater than human, that's beyond human,
link |
00:11:55.500
that's almost nonhuman?
link |
00:11:59.140
I would say my motivation to create AGI
link |
00:12:02.820
comes from both of those directions actually.
link |
00:12:05.220
So when I first became passionate about AGI
link |
00:12:08.620
when I was, it would have been two or three years old
link |
00:12:11.420
after watching robots on Star Trek.
link |
00:12:14.700
I mean, then it was really a combination
link |
00:12:18.180
of intellectual curiosity, like can a machine really think,
link |
00:12:21.460
how would you do that?
link |
00:12:22.860
And yeah, just ambition to create something much better
link |
00:12:27.180
than all the clearly limited
link |
00:12:28.660
and fundamentally defective humans I saw around me.
link |
00:12:31.900
Then as I got older and got more enmeshed
link |
00:12:35.340
in the human world and got married, had children,
link |
00:12:38.780
saw my parents begin to age, I started to realize,
link |
00:12:41.900
well, not only will AGI let you go far beyond
link |
00:12:45.300
the limitations of the human,
link |
00:12:46.860
but it could also stop us from dying and suffering
link |
00:12:50.860
and feeling pain and tormenting ourselves mentally.
link |
00:12:54.980
So you can see AGI has amazing capability
link |
00:12:58.060
to do good for humans, as humans,
link |
00:13:01.380
alongside with its capability
link |
00:13:03.420
to go far, far beyond the human level.
link |
00:13:06.620
So I mean, both aspects are there,
link |
00:13:09.980
which makes it even more exciting and important.
link |
00:13:13.220
So you mentioned Dostoevsky and Nietzsche.
link |
00:13:15.500
Where did you pick up from those guys?
link |
00:13:17.060
I mean.
link |
00:13:18.980
That would probably go beyond the scope
link |
00:13:21.500
of a brief interview, certainly.
link |
00:13:24.340
I mean, both of those are amazing thinkers
link |
00:13:26.780
who one will necessarily have
link |
00:13:29.020
a complex relationship with, right?
link |
00:13:32.060
So, I mean, Dostoevsky on the minus side,
link |
00:13:36.460
he's kind of a religious fanatic
link |
00:13:38.460
and he sort of helped squash the Russian nihilist movement,
link |
00:13:42.020
which was very interesting.
link |
00:13:43.140
Because what nihilism meant originally
link |
00:13:45.820
in that period of the mid, late 1800s in Russia
link |
00:13:48.660
was not taking anything fully 100% for granted.
link |
00:13:52.180
It was really more like what we'd call Bayesianism now,
link |
00:13:54.420
where you don't wanna adopt anything
link |
00:13:56.900
as a dogmatic certitude and always leave your mind open.
link |
00:14:01.060
And how Dostoevsky parodied nihilism
link |
00:14:04.420
was a bit different, right?
link |
00:14:06.660
He parodied them as people who believe absolutely nothing.
link |
00:14:10.340
So they must assign an equal probability weight
link |
00:14:13.020
to every proposition, which doesn't really work.
link |
00:14:17.780
So on the one hand, I didn't really agree with Dostoevsky
link |
00:14:22.540
on his sort of religious point of view.
link |
00:14:26.140
On the other hand, if you look at his understanding
link |
00:14:29.660
of human nature and sort of the human mind
link |
00:14:32.660
and heart and soul, it's really unparalleled.
link |
00:14:37.100
He had an amazing view of how human beings construct a world
link |
00:14:42.100
for themselves based on their own understanding
link |
00:14:45.380
and their own mental predisposition.
link |
00:14:47.500
And I think if you look in The Brothers Karamazov
link |
00:14:50.100
in particular, the Russian literary theorist Mikhail Bakhtin
link |
00:14:56.140
wrote about this as a polyphonic mode of fiction,
link |
00:14:59.580
which means it's not third person,
link |
00:15:02.300
but it's not first person from any one person really.
link |
00:15:05.020
There are many different characters in the novel
link |
00:15:07.020
and each of them is sort of telling part of the story
link |
00:15:10.020
from their own point of view.
link |
00:15:11.580
So the reality of the whole story is an intersection
link |
00:15:15.900
like synergetically of the many different characters'
link |
00:15:19.020
world views.
link |
00:15:19.860
And that really, it's a beautiful metaphor
link |
00:15:23.220
and even a reflection I think of how all of us
link |
00:15:26.100
socially create our reality.
link |
00:15:27.700
Like each of us sees the world in a certain way.
link |
00:15:31.060
Each of us in a sense is making the world as we see it
link |
00:15:34.780
based on our own minds and understanding,
link |
00:15:37.620
but it's polyphony like in music
link |
00:15:40.980
where multiple instruments are coming together
link |
00:15:43.300
to create the sound.
link |
00:15:44.620
The ultimate reality that's created
link |
00:15:46.700
comes out of each of our subjective understandings,
link |
00:15:50.220
intersecting with each other.
link |
00:15:51.340
And that was one of the many beautiful things in Dostoevsky.
link |
00:15:55.660
So maybe a little bit to mention,
link |
00:15:57.980
you have a connection to Russia and the Soviet culture.
link |
00:16:02.260
I mean, I'm not sure exactly what the nature
link |
00:16:03.860
of the connection is, but at least the spirit
link |
00:16:06.180
of your thinking is in there.
link |
00:16:07.380
Well, my ancestry is three quarters Eastern European Jewish.
link |
00:16:12.740
So I mean, three of my great grandparents
link |
00:16:16.740
emigrated to New York from Lithuania
link |
00:16:20.340
and sort of border regions of Poland,
link |
00:16:23.060
which were in and out of Poland
link |
00:16:24.980
in around the time of World War I.
link |
00:16:28.020
And they were socialists and communists as well as Jews,
link |
00:16:33.700
mostly Menshevik, not Bolshevik.
link |
00:16:35.940
And they sort of, they fled at just the right time
link |
00:16:39.260
to the US for their own personal reasons.
link |
00:16:41.260
And then almost all, or maybe all of my extended family
link |
00:16:45.580
that remained in Eastern Europe was killed
link |
00:16:47.220
either by Hitler's or Stalin's minions at some point.
link |
00:16:50.380
So the branch of the family that emigrated to the US
link |
00:16:53.580
was pretty much the only one.
link |
00:16:56.740
So how much of the spirit of the people
link |
00:16:58.700
is in your blood still?
link |
00:16:59.900
Like, when you look in the mirror, do you see,
link |
00:17:03.900
what do you see?
link |
00:17:04.860
Meat, I see a bag of meat that I want to transcend
link |
00:17:08.460
by uploading into some sort of superior reality.
link |
00:17:12.180
But very, I mean, yeah, very clearly,
link |
00:17:18.340
I mean, I'm not religious in a traditional sense,
link |
00:17:22.260
but clearly the Eastern European Jewish tradition
link |
00:17:27.260
was what I was raised in.
link |
00:17:28.780
I mean, there was, my grandfather, Leo Zwell,
link |
00:17:32.700
was a physical chemist who worked with Linus Pauling
link |
00:17:35.380
and a bunch of the other early greats in quantum mechanics.
link |
00:17:38.100
I mean, he was into X ray diffraction.
link |
00:17:41.220
He was on the material science side,
link |
00:17:42.940
an experimentalist rather than a theorist.
link |
00:17:45.420
His sister was also a physicist.
link |
00:17:47.700
And my father's father, Victor Goertzel,
link |
00:17:51.100
was a PhD in psychology who had the unenviable job
link |
00:17:57.100
of giving psychotherapy to the Japanese
link |
00:17:59.260
in internment camps in the US in World War II,
link |
00:18:03.100
like to counsel them why they shouldn't kill themselves,
link |
00:18:05.820
even though they'd had all their stuff taken away
link |
00:18:08.420
and been imprisoned for no good reason.
link |
00:18:10.300
So, I mean, yeah, there's a lot of Eastern European
link |
00:18:15.780
Jewishness in my background.
link |
00:18:18.060
One of my great uncles was, I guess,
link |
00:18:20.180
conductor of the San Francisco Orchestra,
link |
00:18:22.420
Mickey Salkind. So there's a
link |
00:18:25.620
bunch of music in there also.
link |
00:18:27.660
And clearly this culture was all about learning
link |
00:18:31.540
and understanding the world,
link |
00:18:34.860
and also not quite taking yourself too seriously
link |
00:18:38.820
while you do it, right?
link |
00:18:39.900
There's a lot of Yiddish humor in there.
link |
00:18:42.060
So I do appreciate that culture,
link |
00:18:45.220
although the whole idea that like the Jews
link |
00:18:47.580
are the chosen people of God
link |
00:18:49.020
never resonated with me too much.
link |
00:18:51.740
The graph of the Goertzel family,
link |
00:18:55.100
I mean, just the people I've encountered
link |
00:18:56.940
just doing some research and just knowing your work
link |
00:18:59.540
through the decades, it's kind of fascinating.
link |
00:19:03.580
Just the number of PhDs.
link |
00:19:06.380
Yeah, yeah, I mean, my dad is a sociology professor
link |
00:19:10.740
who recently retired from Rutgers University,
link |
00:19:15.060
but clearly that gave me a head start in life.
link |
00:19:18.540
I mean, my grandfather gave me
link |
00:19:20.260
all those quantum mechanics books
link |
00:19:21.620
when I was like seven or eight years old.
link |
00:19:24.220
I remember going through them,
link |
00:19:26.060
and it was all the old quantum mechanics
link |
00:19:28.020
like the Rutherford atom and stuff.
link |
00:19:30.420
So I got to the part of wave functions,
link |
00:19:32.860
which I didn't understand, although I was a very bright kid.
link |
00:19:36.140
And I realized he didn't quite understand it either,
link |
00:19:38.660
but at least like he pointed me to some professor
link |
00:19:41.980
he knew at UPenn nearby who understood these things, right?
link |
00:19:45.340
So that's an unusual opportunity for a kid to have, right?
link |
00:19:49.620
My dad, he was programming Fortran
link |
00:19:52.380
when I was 10 or 11 years old
link |
00:19:53.900
on like HP 3000 mainframes at Rutgers University.
link |
00:19:57.660
So I got to do linear regression in Fortran
link |
00:20:00.900
on punch cards when I was in middle school, right?
link |
00:20:04.220
Because he was doing, I guess, analysis of demographic
link |
00:20:07.460
and sociology data.
link |
00:20:09.580
So yes, certainly that gave me a head start
link |
00:20:14.780
and a push towards science beyond what would have been
link |
00:20:17.220
the case with many, many different situations.
link |
00:20:19.700
When did you first fall in love with AI?
link |
00:20:22.220
Is it the programming side of Fortran?
link |
00:20:24.700
Is it maybe the sociology psychology
link |
00:20:27.260
that you picked up from your dad?
link |
00:20:28.300
Or is it the quantum mechanics?
link |
00:20:29.140
I fell in love with AI when I was probably three years old
link |
00:20:30.660
when I saw a robot on Star Trek.
link |
00:20:32.580
It was turning around in a circle going,
link |
00:20:34.620
error, error, error, error,
link |
00:20:36.660
because Spock and Kirk had tricked it
link |
00:20:39.540
into a mechanical breakdown by presenting it
link |
00:20:41.300
with a logical paradox.
link |
00:20:42.900
And I was just like, well, this makes no sense.
link |
00:20:45.660
This AI is very, very smart.
link |
00:20:47.540
It's been traveling all around the universe,
link |
00:20:49.620
but these people could trick it
link |
00:20:50.980
with a simple logical paradox.
link |
00:20:52.660
Like why, if the human brain can get beyond that paradox,
link |
00:20:57.020
why can't this AI?
link |
00:20:59.460
So I felt the screenwriters of Star Trek
link |
00:21:03.140
had misunderstood the nature of intelligence.
link |
00:21:06.060
And I complained to my dad about it,
link |
00:21:07.580
and he wasn't gonna say anything one way or the other.
link |
00:21:12.220
But before I was born, when my dad was at Antioch College
link |
00:21:18.460
in the middle of the US,
link |
00:21:20.860
he led a protest movement called SLAM,
link |
00:21:25.860
Student League Against Mortality.
link |
00:21:27.460
They were protesting against death,
link |
00:21:28.980
wandering across the campus.
link |
00:21:31.500
So he was into some futuristic things even back then,
link |
00:21:35.900
but whether AI could confront logical paradoxes or not,
link |
00:21:40.220
he didn't know.
link |
00:21:41.220
But when I, 10 years after that or something,
link |
00:21:44.780
I discovered Douglas Hofstadter's book,
link |
00:21:46.980
Gödel, Escher, Bach, and that was sort of to the same point of AI
link |
00:21:51.100
and paradox and logic, right?
link |
00:21:52.620
Because he went over and over
link |
00:21:54.460
with Gödel's incompleteness theorem,
link |
00:21:56.180
and can an AI really fully model itself reflexively
link |
00:22:00.500
or does that lead you into some paradox?
link |
00:22:02.820
Can the human mind truly model itself reflexively
link |
00:22:05.260
or does that lead you into some paradox?
link |
00:22:07.500
So I think that book, Gordalesh or Bach,
link |
00:22:10.660
which I think I read when it first came out,
link |
00:22:13.460
I would have been 12 years old or something.
link |
00:22:14.980
I remember it was like a 16 hour day.
link |
00:22:17.100
I read it cover to cover and then reread it.
link |
00:22:19.780
I reread it after that,
link |
00:22:21.260
because there was a lot of weird things
link |
00:22:22.380
with little formal systems in there
link |
00:22:24.380
that were hard for me at the time.
link |
00:22:25.660
But that was the first book I read
link |
00:22:27.980
that gave me a feeling for AI as like a practical academic
link |
00:22:34.420
or engineering discipline that people were working in.
link |
00:22:37.380
Because before I read Gödel, Escher, Bach,
link |
00:22:40.060
I was into AI from the point of view of a science fiction fan.
link |
00:22:43.980
And I had the idea, well, it may be a long time
link |
00:22:47.460
before we can achieve immortality in superhuman AGI.
link |
00:22:50.420
So I should figure out how to build a spacecraft
link |
00:22:54.380
traveling close to the speed of light, go far away,
link |
00:22:57.060
then come back to the earth in a million years
link |
00:22:58.780
when technology is more advanced
link |
00:23:00.220
and we can build these things.
link |
00:23:01.700
Reading Gödel, Escher, Bach,
link |
00:23:03.580
while it didn't all ring true to me, a lot of it did,
link |
00:23:06.580
but I could see like there are smart people right now
link |
00:23:09.860
at various universities around me
link |
00:23:11.580
who are actually trying to work on building
link |
00:23:15.420
what I would now call AGI,
link |
00:23:16.980
although Hofstadter didn't call it that.
link |
00:23:19.020
So really it was when I read that book,
link |
00:23:21.100
which would have been probably middle school,
link |
00:23:23.540
that then I started to think,
link |
00:23:24.820
well, this is something that I could practically work on.
link |
00:23:29.020
Yeah, as opposed to flying away and waiting it out,
link |
00:23:31.660
you can actually be one of the people
link |
00:23:33.500
that actually builds the system.
link |
00:23:34.580
Yeah, exactly.
link |
00:23:35.420
And if you think about, I mean,
link |
00:23:36.740
I was interested in what we'd now call nanotechnology
link |
00:23:40.700
and in human immortality and time travel,
link |
00:23:44.820
all the same cool things as every other,
link |
00:23:46.940
like science fiction loving kid.
link |
00:23:49.260
But AI seemed like if Hofstadter was right,
link |
00:23:52.700
you just figure out the right program,
link |
00:23:54.180
sit there and type it.
link |
00:23:55.060
Like you don't need to spin stars into weird configurations
link |
00:23:59.620
or get government approval to cut people up
link |
00:24:02.620
and fiddle with their DNA or something, right?
link |
00:24:05.020
It's just programming.
link |
00:24:06.180
And then of course that can achieve anything else.
link |
00:24:10.700
There's another book from back then,
link |
00:24:12.220
which was by Gerald Feinberg,
link |
00:24:17.060
who was a physicist at Princeton.
link |
00:24:21.580
And that was the Prometheus Project.
link |
00:24:24.580
And this book was written in the late 1960s,
link |
00:24:26.700
though I encountered it in the mid 70s.
link |
00:24:28.780
But what this book said is in the next few decades,
link |
00:24:30.940
humanity is gonna create superhuman thinking machines,
link |
00:24:34.500
molecular nanotechnology and human immortality.
link |
00:24:37.460
And then the challenge we'll have is what to do with it.
link |
00:24:41.140
Do we use it to expand human consciousness
link |
00:24:43.020
in a positive direction?
link |
00:24:44.500
Or do we use it just to further vapid consumerism?
link |
00:24:49.860
And what he proposed was that the UN
link |
00:24:51.820
should do a survey on this.
link |
00:24:53.460
And the UN should send people out to every little village
link |
00:24:56.460
in remotest Africa or South America
link |
00:24:58.940
and explain to everyone what technology
link |
00:25:01.300
was gonna bring the next few decades
link |
00:25:03.020
and the choice that we had about how to use it.
link |
00:25:05.020
And let everyone on the whole planet vote
link |
00:25:07.780
about whether we should develop super AI nanotechnology
link |
00:25:11.740
and immortality for expanded consciousness
link |
00:25:15.900
or for rampant consumerism.
link |
00:25:18.220
And needless to say, that didn't quite happen.
link |
00:25:22.060
And I think this guy died in the mid 80s,
link |
00:25:24.180
so we didn't even see his ideas start
link |
00:25:25.900
to become more mainstream.
link |
00:25:28.220
But it's interesting, many of the themes I'm engaged with now
link |
00:25:31.620
from AGI and immortality,
link |
00:25:33.340
even to trying to democratize technology
link |
00:25:36.140
as I've been pushing forward with SingularityNet,
link |
00:25:38.100
my work in the blockchain world,
link |
00:25:40.020
many of these themes were there in Feinberg's book
link |
00:25:43.620
in the late 60s even.
link |
00:25:47.940
And of course, Valentin Turchin, a Russian writer
link |
00:25:52.220
and a great Russian physicist who I got to know
link |
00:25:55.860
when we both lived in New York in the late 90s
link |
00:25:59.060
and early aughts.
link |
00:25:59.900
I mean, he had a book in the late 60s in Russia,
link |
00:26:03.380
which was The Phenomenon of Science,
link |
00:26:05.780
which laid out all these same things as well.
link |
00:26:10.220
And Val died in, I don't remember,
link |
00:26:12.740
2004 or five or something, of Parkinsonism.
link |
00:26:15.420
So yeah, it's easy for people to lose track now
link |
00:26:20.780
of the fact that the futurist and Singularitarian
link |
00:26:25.940
advanced technology ideas that are now almost mainstream
link |
00:26:29.740
are on TV all the time.
link |
00:26:30.900
I mean, these are not that new, right?
link |
00:26:34.100
They're sort of new in the history of the human species,
link |
00:26:37.100
but I mean, these were all around in fairly mature form
link |
00:26:41.100
in the middle of the last century,
link |
00:26:43.660
were written about quite articulately
link |
00:26:45.500
by fairly mainstream people
link |
00:26:47.340
who were professors at top universities.
link |
00:26:50.140
It's just until the enabling technologies
link |
00:26:52.940
got to a certain point, then you couldn't make it real.
link |
00:26:57.940
And even in the 70s, I was sort of seeing that
link |
00:27:02.820
and living through it, right?
link |
00:27:04.740
From Star Trek to Douglas Hofstadter,
link |
00:27:07.900
things were getting very, very practical
link |
00:27:09.660
from the late 60s to the late 70s.
link |
00:27:11.980
And the first computer I bought,
link |
00:27:15.020
you could only program with hexadecimal machine code
link |
00:27:17.580
and you had to solder it together.
link |
00:27:19.380
And then like a few years later, there's punch cards.
link |
00:27:23.420
And a few years later, you could get like Atari 400
link |
00:27:27.220
and Commodore VIC 20, and you could type on the keyboard
link |
00:27:30.300
and program in higher level languages
link |
00:27:32.820
alongside the assembly language.
link |
00:27:34.660
So these ideas have been building up a while.
link |
00:27:38.700
And I guess my generation got to feel them build up,
link |
00:27:42.980
which is different than people coming into the field now
link |
00:27:46.380
for whom these things have just been part of the ambience
link |
00:27:50.300
of culture for their whole career
link |
00:27:52.180
or even their whole life.
link |
00:27:54.140
Well, it's fascinating to think about there being all
link |
00:27:57.260
of these ideas kind of swimming, almost with the noise
link |
00:28:01.540
all around the world, all the different generations,
link |
00:28:04.380
and then some kind of nonlinear thing happens
link |
00:28:07.900
where they percolate up
link |
00:28:09.380
and capture the imagination of the mainstream.
link |
00:28:12.420
And that seems to be what's happening with AI now.
link |
00:28:14.780
I mean, Nietzsche, who you mentioned had the idea
link |
00:28:16.580
of the Superman, right?
link |
00:28:18.260
But he didn't understand enough about technology
link |
00:28:21.580
to think you could physically engineer a Superman
link |
00:28:24.860
by piecing together molecules in a certain way.
link |
00:28:28.180
He was a bit vague about how the Superman would appear,
link |
00:28:33.620
but he was quite deep at thinking
link |
00:28:35.820
about what the state of consciousness
link |
00:28:37.780
and the mode of cognition of a Superman would be.
link |
00:28:42.420
He was a very astute analyst of how the human mind
link |
00:28:47.820
constructs the illusion of a self,
link |
00:28:49.420
how it constructs the illusion of free will,
link |
00:28:52.140
how it constructs values like good and evil
link |
00:28:56.660
out of its own desire to maintain
link |
00:28:59.780
and advance its own organism.
link |
00:29:01.420
He understood a lot about how human minds work.
link |
00:29:04.020
Then he understood a lot
link |
00:29:05.660
about how post human minds would work.
link |
00:29:07.620
I mean, the Superman was supposed to be a mind
link |
00:29:10.260
that would basically have complete root access
link |
00:29:13.300
to its own brain and consciousness
link |
00:29:16.060
and be able to architect its own value system
link |
00:29:19.620
and inspect and fine tune all of its own biases.
link |
00:29:24.300
So that's a lot of powerful thinking there,
link |
00:29:27.340
which then fed in and sort of seeded
link |
00:29:29.340
all of postmodern continental philosophy
link |
00:29:32.180
and all sorts of things have been very valuable
link |
00:29:35.540
in development of culture and indirectly even of technology.
link |
00:29:39.740
But of course, without the technology there,
link |
00:29:42.140
it was all some quite abstract thinking.
link |
00:29:44.860
So now we're at a time in history
link |
00:29:46.940
when a lot of these ideas can be made real,
link |
00:29:51.740
which is amazing and scary, right?
link |
00:29:54.300
It's kind of interesting to think,
link |
00:29:56.020
what do you think Nietzsche would do
link |
00:29:57.180
if he was born a century later or transported through time?
link |
00:30:00.900
What do you think he would say about AI?
link |
00:30:02.980
I mean. Well, those are quite different.
link |
00:30:04.180
If he's born a century later or transported through time.
link |
00:30:07.260
Well, he'd be on like TikTok and Instagram
link |
00:30:09.580
and he would never write the great works he's written.
link |
00:30:11.940
So let's transport him through time.
link |
00:30:13.540
Maybe Also Sprach Zarathustra would be a music video,
link |
00:30:16.460
right? I mean, who knows?
link |
00:30:19.660
Yeah, but if he was transported through time,
link |
00:30:21.700
do you think, that'd be interesting actually to go back.
link |
00:30:26.260
You just made me realize that it's possible to go back
link |
00:30:29.380
and read Nietzsche with an eye of,
link |
00:30:31.220
is there some thinking about artificial beings?
link |
00:30:34.700
I'm sure there he had inklings.
link |
00:30:37.780
I mean, with Frankenstein before him,
link |
00:30:40.500
I'm sure he had inklings of artificial beings
link |
00:30:42.900
somewhere in the text.
link |
00:30:44.060
It'd be interesting to try to read his work
link |
00:30:46.900
to see if Superman was actually an AGI system.
link |
00:30:55.820
Like if he had inklings of that kind of thinking.
link |
00:30:57.940
He didn't.
link |
00:30:58.780
He didn't.
link |
00:30:59.620
No, I would say not.
link |
00:31:01.100
I mean, he had a lot of inklings of modern cognitive science,
link |
00:31:06.460
which are very interesting.
link |
00:31:07.420
If you look in like the third part of the collection
link |
00:31:11.820
that's been titled The Will to Power.
link |
00:31:13.540
I mean, in book three there,
link |
00:31:15.660
there's very deep analysis of thinking processes,
link |
00:31:20.620
but he wasn't so much of a physical tinkerer type guy,
link |
00:31:27.140
right? He was very abstract.
link |
00:31:29.620
Do you think, what do you think about the will to power?
link |
00:31:32.780
Do you think human, what do you think drives humans?
link |
00:31:36.100
Is it?
link |
00:31:37.460
Oh, an unholy mix of things.
link |
00:31:39.500
I don't think there's one pure, simple,
link |
00:31:42.380
and elegant objective function driving humans by any means.
link |
00:31:47.380
What do you think, if we look at,
link |
00:31:50.700
I know it's hard to look at humans in an aggregate,
link |
00:31:53.260
but do you think overall humans are good?
link |
00:31:57.540
Or do we have both good and evil within us
link |
00:32:01.580
that depending on the circumstances,
link |
00:32:03.540
depending on whatever can percolate to the top?
link |
00:32:08.220
Good and evil are very ambiguous, complicated
link |
00:32:13.900
and in some ways silly concepts.
link |
00:32:15.900
But if we could dig into your question
link |
00:32:18.540
from a couple of directions.
link |
00:32:19.700
So I think if you look in evolution,
link |
00:32:23.420
humanity is shaped both by individual selection
link |
00:32:28.220
and what biologists would call group selection,
link |
00:32:30.940
like tribe level selection, right?
link |
00:32:32.740
So individual selection has driven us
link |
00:32:36.500
in a selfish DNA sort of way.
link |
00:32:38.780
So that each of us does to a certain approximation
link |
00:32:43.260
what will help us propagate our DNA to future generations.
link |
00:32:47.420
I mean, that's why I've got four kids so far
link |
00:32:50.700
and probably that's not the last one.
link |
00:32:53.900
On the other hand.
link |
00:32:55.020
I like the ambition.
link |
00:32:56.780
Tribal, like group selection means humans in a way
link |
00:33:00.740
will do what will advocate for the persistence of the DNA
link |
00:33:04.380
of their whole tribe or their social group.
link |
00:33:08.100
And in biology, you have both of these, right?
link |
00:33:11.740
And you can see, say an ant colony or a beehive,
link |
00:33:14.420
there's a lot of group selection
link |
00:33:15.940
in the evolution of those social animals.
link |
00:33:18.940
On the other hand, say a big cat
link |
00:33:21.460
or some very solitary animal,
link |
00:33:23.260
it's a lot more biased toward individual selection.
link |
00:33:26.540
Humans are an interesting balance.
link |
00:33:28.660
And I think this reflects itself
link |
00:33:31.540
in what we would view as selfishness versus altruism
link |
00:33:35.060
to some extent.
link |
00:33:36.780
So we just have both of those objective functions
link |
00:33:40.580
contributing to the makeup of our brains.
link |
00:33:43.780
And then as Nietzsche analyzed in his own way
link |
00:33:47.300
and others have analyzed in different ways,
link |
00:33:49.060
I mean, we abstract this as well,
link |
00:33:51.500
we have both good and evil within us, right?
link |
00:33:55.380
Because a lot of what we view as evil
link |
00:33:57.820
is really just selfishness.
link |
00:34:00.460
A lot of what we view as good is altruism,
link |
00:34:03.740
which means doing what's good for the tribe.
link |
00:34:07.220
And on that level,
link |
00:34:08.060
we have both of those just baked into us
link |
00:34:11.380
and that's how it is.
link |
00:34:13.180
Of course, there are psychopaths and sociopaths
link |
00:34:17.020
and people who get gratified by the suffering of others.
link |
00:34:21.340
And that's a different thing.
link |
00:34:25.260
Yeah, those are exceptions on the whole.
link |
00:34:27.500
But I think at core, we're not purely selfish,
link |
00:34:31.540
we're not purely altruistic, we are a mix
link |
00:34:35.180
and that's the nature of it.
link |
00:34:38.020
And we also have a complex constellation of values
link |
00:34:43.380
that are just very specific to our evolutionary history.
link |
00:34:49.180
Like we love waterways and mountains
link |
00:34:52.500
and the ideal place to put a house
link |
00:34:54.460
is in a mountain overlooking the water, right?
link |
00:34:56.340
And we care a lot about our kids
link |
00:35:00.580
and we care a little less about our cousins
link |
00:35:02.820
and even less about our fifth cousins.
link |
00:35:04.420
I mean, there are many particularities to human values,
link |
00:35:09.460
which whether they're good or evil
link |
00:35:11.900
depends on your perspective.
link |
00:35:15.820
Say, I spent a lot of time in Ethiopia in Addis Ababa
link |
00:35:19.660
where we have one of our AI development offices
link |
00:35:22.460
for my SingularityNet project.
link |
00:35:24.420
And when I walk through the streets in Addis,
link |
00:35:27.540
you know, there's people lying by the side of the road,
link |
00:35:31.460
like just living there by the side of the road,
link |
00:35:33.940
dying probably of curable diseases
link |
00:35:35.820
without enough food or medicine.
link |
00:35:37.940
And when I walk by them, you know, I feel terrible,
link |
00:35:39.980
I give them money.
link |
00:35:41.460
When I come back home to the developed world,
link |
00:35:45.100
they're not on my mind that much.
link |
00:35:46.620
I do donate some, but I mean,
link |
00:35:48.620
I also spend some of the limited money I have
link |
00:35:52.860
enjoying myself in frivolous ways
link |
00:35:54.700
rather than donating it to those people who are right now,
link |
00:35:58.100
like starving, dying and suffering on the roadside.
link |
00:36:01.020
So does that make me evil?
link |
00:36:03.180
I mean, it makes me somewhat selfish
link |
00:36:05.500
and somewhat altruistic.
link |
00:36:06.740
And we each balance that in our own way, right?
link |
00:36:10.940
So whether that will be true of all possible AGI's
link |
00:36:17.060
is a subtler question.
link |
00:36:19.300
So that's how humans are.
link |
00:36:21.340
So you have a sense, you kind of mentioned
link |
00:36:23.100
that there's a selfish,
link |
00:36:25.500
I'm not gonna bring up the whole Ayn Rand idea
link |
00:36:28.300
of selfishness being the core virtue.
link |
00:36:31.140
That's a whole interesting kind of tangent
link |
00:36:33.980
that I think we'll just distract ourselves on.
link |
00:36:36.420
I have to make one amusing comment.
link |
00:36:38.460
Sure.
link |
00:36:39.300
A comment that has amused me anyway.
link |
00:36:41.260
So the, yeah, I have extraordinary negative respect
link |
00:36:46.340
for Ayn Rand.
link |
00:36:47.820
Negative, what's a negative respect?
link |
00:36:50.220
But when I worked with a company called Genescient,
link |
00:36:54.740
which was evolving flies to have extraordinary long lives
link |
00:36:59.180
in Southern California.
link |
00:37:01.220
So we had flies that were evolved by artificial selection
link |
00:37:04.980
to have five times the lifespan of normal fruit flies.
link |
00:37:07.660
But the population of super long lived flies
link |
00:37:11.780
was physically sitting in a spare room
link |
00:37:14.060
at an Ayn Rand elementary school in Southern California.
link |
00:37:18.100
So that was just like,
link |
00:37:19.460
well, if I saw this in a movie, I wouldn't believe it.
link |
00:37:23.980
Well, yeah, the universe has a sense of humor
link |
00:37:26.020
in that kind of way.
link |
00:37:26.860
That fits in, humor fits in somehow
link |
00:37:28.900
into this whole absurd existence.
link |
00:37:30.620
But you mentioned the balance between selfishness
link |
00:37:33.820
and altruism as kind of being innate.
link |
00:37:37.220
Do you think it's possible
link |
00:37:38.140
that's kind of an emergent phenomena,
link |
00:37:42.380
those peculiarities of our value system?
link |
00:37:45.420
How much of it is innate?
link |
00:37:47.180
How much of it is something we collectively
link |
00:37:49.780
kind of like a Dostoevsky novel
link |
00:37:52.300
bring to life together as a civilization?
link |
00:37:54.540
I mean, the answer to nature versus nurture
link |
00:37:57.740
is usually both.
link |
00:37:58.860
And of course it's nature versus nurture
link |
00:38:01.820
versus self organization, as you mentioned.
link |
00:38:04.780
So clearly there are evolutionary roots
link |
00:38:08.460
to individual and group selection
link |
00:38:11.460
leading to a mix of selfishness and altruism.
link |
00:38:13.900
On the other hand,
link |
00:38:15.380
different cultures manifest that in different ways.
link |
00:38:19.780
Well, we all have basically the same biology.
link |
00:38:22.540
And if you look at sort of precivilized cultures,
link |
00:38:26.660
you have tribes like the Yanomamo in Venezuela,
link |
00:38:29.340
which their culture is focused on killing other tribes.
link |
00:38:35.340
And you have other Stone Age tribes
link |
00:38:37.620
that are mostly peaceful and have big taboos
link |
00:38:40.460
against violence.
link |
00:38:41.420
So you can certainly have a big difference
link |
00:38:43.900
in how culture manifests
link |
00:38:46.860
these innate biological characteristics,
link |
00:38:50.820
but still, there's probably limits
link |
00:38:54.740
that are given by our biology.
link |
00:38:56.740
I used to argue this with my great grandparents
link |
00:39:00.060
who were Marxists actually,
link |
00:39:01.500
because they believed in the withering away of the state.
link |
00:39:04.540
Like they believe that,
link |
00:39:06.900
as you move from capitalism to socialism to communism,
link |
00:39:10.660
people would just become more social minded
link |
00:39:13.420
so that a state would be unnecessary
link |
00:39:15.940
and everyone would give everyone else what they needed.
link |
00:39:20.940
Now, setting aside that
link |
00:39:23.140
that's not what the various Marxist experiments
link |
00:39:25.740
on the planet seem to be heading toward in practice.
link |
00:39:29.900
Just as a theoretical point,
link |
00:39:32.740
I was very dubious that human nature could go there.
link |
00:39:37.540
Like at that time when my great grandparents were alive,
link |
00:39:39.900
I was just like, you know, I'm a cynical teenager.
link |
00:39:43.300
I think humans are just jerks.
link |
00:39:45.980
The state is not gonna wither away.
link |
00:39:48.020
If you don't have some structure
link |
00:39:49.980
keeping people from screwing each other over,
link |
00:39:51.980
they're gonna do it.
link |
00:39:52.900
So now I actually don't quite see things that way.
link |
00:39:56.220
I mean, I think my feeling now subjectively
link |
00:39:59.900
is the culture aspect is more significant
link |
00:40:02.580
than I thought it was when I was a teenager.
link |
00:40:04.620
And I think you could have a human society
link |
00:40:08.260
that was dialed dramatically further toward,
link |
00:40:11.420
you know, self awareness, other awareness,
link |
00:40:13.700
compassion and sharing than our current society.
link |
00:40:16.980
And of course, greater material abundance helps,
link |
00:40:20.580
but to some extent material abundance
link |
00:40:23.480
is a subjective perception also
link |
00:40:25.380
because many Stone Age cultures perceived themselves
link |
00:40:28.260
as living in great material abundance
link |
00:40:30.540
that they had all the food and water they wanted,
link |
00:40:32.100
they lived in a beautiful place,
link |
00:40:33.500
that they had sex lives, that they had children.
link |
00:40:37.460
I mean, they had abundance without any factories, right?
link |
00:40:42.940
So I think humanity probably would be capable
link |
00:40:46.460
of fundamentally more positive and joy filled mode
link |
00:40:51.140
of social existence than what we have now.
link |
00:40:57.320
Clearly Marx didn't quite have the right idea
link |
00:40:59.500
about how to get there.
link |
00:41:01.800
I mean, he missed a number of key aspects
link |
00:41:05.660
of human society and its evolution.
link |
00:41:09.500
And if we look at where we are in society now,
link |
00:41:13.140
how to get there is a quite different question
link |
00:41:15.760
because there are very powerful forces
link |
00:41:18.100
pushing people in different directions
link |
00:41:21.080
than a positive, joyous, compassionate existence, right?
link |
00:41:26.380
So if we were to try to, you know,
link |
00:41:28.820
Elon Musk dreams of colonizing Mars at the moment,
link |
00:41:32.820
so we maybe will have a chance to start a new civilization
link |
00:41:36.880
with a new governmental system.
link |
00:41:38.400
And certainly there's quite a bit of chaos.
link |
00:41:41.580
We're sitting now, I don't know what the date is,
link |
00:41:44.320
but this is June.
link |
00:41:46.860
There's quite a bit of chaos in all different forms
link |
00:41:49.260
going on in the United States and all over the world.
link |
00:41:52.060
So there's a hunger for new types of governments,
link |
00:41:55.560
new types of leadership, new types of systems.
link |
00:41:59.860
And so what are the forces at play
link |
00:42:01.980
and how do we move forward?
link |
00:42:04.140
Yeah, I mean, colonizing Mars, first of all,
link |
00:42:06.780
it's a super cool thing to do.
link |
00:42:08.980
We should be doing it.
link |
00:42:10.060
So you love the idea.
link |
00:42:11.540
Yeah, I mean, it's more important than making
link |
00:42:14.780
chocolatier chocolates and sexier lingerie
link |
00:42:18.540
and many of the things that we spend
link |
00:42:21.020
a lot more resources on as a species, right?
link |
00:42:24.120
So I mean, we certainly should do it.
link |
00:42:26.480
I think the possible futures in which a Mars colony
link |
00:42:33.180
makes a critical difference for humanity are very few.
link |
00:42:38.040
I mean, I think, I mean, assuming we make a Mars colony
link |
00:42:42.220
and people go live there in a couple of decades,
link |
00:42:44.000
I mean, their supplies are gonna come from Earth.
link |
00:42:46.380
The money to make the colony came from Earth
link |
00:42:48.820
and whatever powers are supplying the goods there
link |
00:42:53.740
from Earth are gonna, in effect, be in control
link |
00:42:56.820
of that Mars colony.
link |
00:42:58.700
Of course, there are outlier situations
link |
00:43:02.060
where Earth gets nuked into oblivion
link |
00:43:06.460
and somehow Mars has been made self sustaining by that point
link |
00:43:10.780
and then Mars is what allows humanity to persist.
link |
00:43:14.220
But I think that those are very, very, very unlikely.
link |
00:43:19.740
You don't think it could be a first step on a long journey?
link |
00:43:23.020
Of course it's a first step on a long journey,
link |
00:43:24.740
which is awesome.
link |
00:43:27.140
I'm guessing the colonization of the rest
link |
00:43:30.980
of the physical universe will probably be done
link |
00:43:33.260
by AGI's that are better designed to live in space
link |
00:43:38.140
than by the meat machines that we are.
link |
00:43:41.840
But I mean, who knows?
link |
00:43:43.020
We may cryopreserve ourselves in some superior way
link |
00:43:45.860
to what we know now and like shoot ourselves out
link |
00:43:48.700
to Alpha Centauri and beyond.
link |
00:43:50.720
I mean, that's all cool.
link |
00:43:52.660
It's very interesting and it's much more valuable
link |
00:43:55.140
than most things that humanity is spending its resources on.
link |
00:43:58.860
On the other hand, with AGI, we can get to a singularity
link |
00:44:03.540
before the Mars colony becomes sustaining for sure,
link |
00:44:07.780
possibly before it's even operational.
link |
00:44:10.100
So your intuition is that that's the problem
link |
00:44:12.400
if we really invest resources, then we can get to it faster
link |
00:44:14.940
than a legitimate full self sustaining colonization of Mars.
link |
00:44:19.700
Yeah, and it's very clear to me that we will,
link |
00:44:23.160
because there's so much economic value
link |
00:44:26.020
in getting from narrow AI toward AGI,
link |
00:44:29.460
whereas the Mars colony, there's less economic value
link |
00:44:33.380
until you get quite far out into the future.
link |
00:44:37.380
So I think that's very interesting.
link |
00:44:40.260
I just think it's somewhat off to the side.
link |
00:44:44.380
I mean, just as I think, say, art and music
link |
00:44:48.020
are very, very interesting and I wanna see resources
link |
00:44:51.860
go into amazing art and music being created.
link |
00:44:55.460
And I'd rather see that than a lot of the garbage
link |
00:44:59.580
that the society spends their money on.
link |
00:45:01.760
On the other hand, I don't think Mars colonization
link |
00:45:04.620
or inventing amazing new genres of music
link |
00:45:07.780
is one of the things that is most likely
link |
00:45:11.000
to make a critical difference in the evolution
link |
00:45:13.900
of human or nonhuman life in this part of the universe
link |
00:45:18.340
over the next decade.
link |
00:45:19.820
Do you think AGI is really?
link |
00:45:21.620
AGI is by far the most important thing
link |
00:45:25.820
that's on the horizon.
link |
00:45:27.500
And then technologies that have direct ability
link |
00:45:31.620
to enable AGI or to accelerate AGI are also very important.
link |
00:45:37.260
For example, say, quantum computing.
link |
00:45:40.540
I don't think that's critical to achieve AGI,
link |
00:45:42.740
but certainly you could see how
link |
00:45:44.360
the right quantum computing architecture
link |
00:45:46.700
could massively accelerate AGI,
link |
00:45:49.280
similarly, other types of nanotechnology.
link |
00:45:52.260
Right now, the quest to cure aging and end disease
link |
00:45:57.860
while not in the big picture as important as AGI,
link |
00:46:02.100
of course, it's important to all of us as individual humans.
link |
00:46:07.380
And if someone made a super longevity pill
link |
00:46:11.600
and distributed it tomorrow, I mean,
link |
00:46:14.260
that would be huge and a much larger impact
link |
00:46:17.220
than a Mars colony is gonna have for quite some time.
link |
00:46:20.460
But perhaps not as much as an AGI system.
link |
00:46:23.300
No, because if you can make a benevolent AGI,
link |
00:46:27.060
then all the other problems are solved.
link |
00:46:28.700
I mean, if then the AGI can be,
link |
00:46:31.940
once it's as generally intelligent as humans,
link |
00:46:34.260
it can rapidly become massively more generally intelligent
link |
00:46:37.420
than humans.
link |
00:46:38.620
And then that AGI should be able to solve science
link |
00:46:42.540
and engineering problems much better than human beings,
link |
00:46:46.840
as long as it is in fact motivated to do so.
link |
00:46:49.700
That's why I said a benevolent AGI.
link |
00:46:52.740
There could be other kinds.
link |
00:46:54.020
Maybe it's good to step back a little bit.
link |
00:46:56.020
I mean, we've been using the term AGI.
link |
00:46:58.860
People often cite you as the creator,
link |
00:47:00.860
or at least the popularizer of the term AGI,
link |
00:47:03.060
artificial general intelligence.
link |
00:47:05.700
Can you tell the origin story of the term maybe?
link |
00:47:09.100
So yeah, I would say I launched the term AGI upon the world
link |
00:47:14.860
for what it's worth without ever fully being in love
link |
00:47:19.940
with the term.
link |
00:47:21.660
What happened is I was editing a book,
link |
00:47:25.380
and this process started around 2001 or 2002.
link |
00:47:27.860
I think the book came out 2005, finally.
link |
00:47:30.500
I was editing a book which I provisionally
link |
00:47:33.140
was titling Real AI.
link |
00:47:35.860
And I mean, the goal was to gather together
link |
00:47:38.840
fairly serious academicish papers
link |
00:47:41.700
on the topic of making thinking machines
link |
00:47:43.940
that could really think in the sense like people can,
link |
00:47:46.780
or even more broadly than people can, right?
link |
00:47:49.240
So then I was reaching out to other folks
link |
00:47:52.740
that I had encountered here or there
link |
00:47:54.060
who were interested in that,
link |
00:47:57.380
which included some other folks who I knew
link |
00:48:01.700
from the transhumanist and singularitarian world,
link |
00:48:04.340
like Peter Voss, who has a company, AGI Incorporated,
link |
00:48:07.660
still in California, and included Shane Legg,
link |
00:48:13.100
who had worked for me at my company, WebMind,
link |
00:48:15.700
in New York in the late 90s,
link |
00:48:17.580
who by now has become rich and famous.
link |
00:48:20.500
He was one of the cofounders of Google DeepMind.
link |
00:48:22.780
But at that time, Shane was,
link |
00:48:25.320
I think he may have just started doing his PhD
link |
00:48:31.800
with Marcus Hutter, who at that time
link |
00:48:35.900
hadn't yet published his book, Universal AI,
link |
00:48:38.680
which sort of gives a mathematical foundation
link |
00:48:41.040
for artificial general intelligence.
link |
00:48:43.400
So I reached out to Shane and Marcus and Peter Voss
link |
00:48:46.140
and Pei Wang, who was another former employee of mine
link |
00:48:49.480
who had been Douglas Hofstadter's PhD student
link |
00:48:51.880
who had his own approach to AGI,
link |
00:48:53.280
and a bunch of Russian folks. I reached out to these guys
link |
00:48:58.040
and they contributed papers for the book.
link |
00:49:01.360
But that was my provisional title, but I never loved it
link |
00:49:04.440
because in the end, I was doing some,
link |
00:49:09.320
what we would now call narrow AI as well,
link |
00:49:12.120
like applying machine learning to genomics data
link |
00:49:14.640
or chat data for sentiment analysis.
link |
00:49:17.920
I mean, that work is real.
link |
00:49:19.240
And in a sense, it's really AI.
link |
00:49:22.760
It's just a different kind of AI.
link |
00:49:26.000
Ray Kurzweil wrote about narrow AI versus strong AI,
link |
00:49:31.160
but that seemed weird to me because first of all,
link |
00:49:35.040
narrow and strong are not antonyms.
link |
00:49:36.680
That's right.
link |
00:49:38.720
But secondly, strong AI was used
link |
00:49:41.940
in the cognitive science literature
link |
00:49:43.360
to mean the hypothesis that digital computer AIs
link |
00:49:46.640
could have true consciousness like human beings.
link |
00:49:50.140
So there was already a meaning to strong AI,
link |
00:49:52.540
which was complexly different, but related, right?
link |
00:49:56.440
So we were tossing around on an email list
link |
00:50:00.520
about what the title should be.
link |
00:50:03.200
And so we talked about narrow AI, broad AI, wide AI,
link |
00:50:07.560
narrow AI, general AI.
link |
00:50:09.760
And I think it was either Shane Legg or Peter Voss
link |
00:50:15.880
on the private email discussion we had.
link |
00:50:18.120
He said, but why don't we go
link |
00:50:18.960
with AGI, artificial general intelligence?
link |
00:50:21.800
And Pei Wang wanted to do GAI,
link |
00:50:24.280
general artificial intelligence,
link |
00:50:25.760
because in Chinese it goes in that order.
link |
00:50:27.880
But we figured gay wouldn't work
link |
00:50:30.200
in US culture at that time, right?
link |
00:50:33.240
So we went with the AGI.
link |
00:50:37.360
We used it for the title of that book.
link |
00:50:39.520
And part of Peter and Shane's reasoning
link |
00:50:43.460
was you have the G factor in psychology,
link |
00:50:45.460
which is IQ, general intelligence, right?
link |
00:50:47.480
So you have a meaning of GI, general intelligence,
link |
00:50:51.160
in psychology, so then you're looking like artificial GI.
link |
00:50:55.360
So then we use that for the title of the book.
link |
00:51:00.400
And so I think maybe both Shane and Peter
link |
00:51:04.040
think they invented the term,
link |
00:51:05.200
but then later after the book was published,
link |
00:51:08.320
this guy, Mark Gubrud, came up to me and he's like,
link |
00:51:11.640
well, I published an essay with the term AGI
link |
00:51:14.800
in like 1997 or something.
link |
00:51:17.120
And so I'm just waiting for some Russian to come out
link |
00:51:20.520
and say they published that in 1953, right?
link |
00:51:23.400
I mean, that term is not dramatically innovative
link |
00:51:27.800
or anything.
link |
00:51:28.640
It's one of these obvious in hindsight things,
link |
00:51:31.560
which is also annoying in a way,
link |
00:51:34.880
because Joscha Bach, who you interviewed,
link |
00:51:39.500
is a close friend of mine.
link |
00:51:40.400
He likes the term synthetic intelligence,
link |
00:51:43.240
which I like much better,
link |
00:51:44.300
but it hasn't actually caught on, right?
link |
00:51:47.080
Because I mean, artificial is a bit off to me
link |
00:51:51.800
because artifice is like a tool or something,
link |
00:51:54.640
but not all AGI's are gonna be tools.
link |
00:51:57.760
I mean, they may be now,
link |
00:51:58.700
but we're aiming toward making them agents
link |
00:52:00.600
rather than tools.
link |
00:52:02.800
And in a way, I don't like the distinction
link |
00:52:04.840
between artificial and natural,
link |
00:52:07.200
because I mean, we're part of nature also
link |
00:52:09.360
and machines are part of nature.
link |
00:52:12.160
I mean, you can look at evolved versus engineered,
link |
00:52:14.840
but that's a different distinction.
link |
00:52:17.160
Then it should be engineered general intelligence, right?
link |
00:52:20.000
And then general, well,
link |
00:52:21.920
if you look at Marcus Hutter's book,
link |
00:52:24.600
Universal AI, what he argues there is,
link |
00:52:28.240
within the domain of computation theory,
link |
00:52:30.520
which is limited, but interesting.
link |
00:52:31.920
So if you assume computable environments
link |
00:52:33.680
or computable reward functions,
link |
00:52:35.600
then he articulates what would be
link |
00:52:37.560
a truly general intelligence,
link |
00:52:40.040
a system called AIXI, which is quite beautiful.
link |
00:52:43.160
AIXI, and that's the middle name
link |
00:52:46.280
of my latest child, actually, is it?
link |
00:52:49.360
What's the first name?
link |
00:52:50.200
First name is QORXI, Q O R X I,
link |
00:52:52.400
which my wife came up with,
link |
00:52:53.780
but that's an acronym for quantum organized rational
link |
00:52:57.320
expanding intelligence, and his middle name is Xiphonies,
link |
00:53:03.120
actually, which means the formal principle underlying AIXI.
link |
00:53:08.340
But in any case.
link |
00:53:09.480
You're giving Elon Musk's new child a run for his money.
link |
00:53:12.160
Well, I did it first.
link |
00:53:13.800
He copied me with this new freakish name,
link |
00:53:17.320
but now if I have another baby,
link |
00:53:18.600
I'm gonna have to outdo him.
link |
00:53:20.600
It's becoming an arms race of weird, geeky baby names.
link |
00:53:24.560
We'll see what the babies think about it, right?
link |
00:53:26.840
But I mean, my oldest son, Zarathustra, loves his name,
link |
00:53:30.220
and my daughter, Sharazad, loves her name.
link |
00:53:33.800
So far, basically, if you give your kids weird names.
link |
00:53:36.960
They live up to it.
link |
00:53:37.840
Well, you're obliged to make the kids weird enough
link |
00:53:39.800
that they like the names, right?
link |
00:53:42.000
It directs their upbringing in a certain way.
link |
00:53:43.920
But yeah, anyway, I mean, what Marcus showed in that book
link |
00:53:47.680
is that a truly general intelligence
link |
00:53:50.560
theoretically is possible,
link |
00:53:51.800
but would take infinite computing power.
link |
00:53:53.840
So then the artificial is a little off.
link |
00:53:56.360
The general is not really achievable within physics
link |
00:53:59.800
as we know it.
link |
00:54:01.280
And I mean, physics as we know it may be limited,
link |
00:54:03.520
but that's what we have to work with now.
link |
00:54:05.300
Intelligence.
link |
00:54:06.140
Infinitely general, you mean,
link |
00:54:07.360
like information processing perspective, yeah.
link |
00:54:10.440
Yeah, intelligence is not very well defined either, right?
link |
00:54:14.760
I mean, what does it mean?
link |
00:54:16.760
I mean, in AI now, it's fashionable to look at it
link |
00:54:19.560
as maximizing an expected reward over the future.
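[For reference, the fashionable formulation alluded to here, picking the policy \(\pi\) that maximizes expected discounted future reward, is roughly

\[
\pi^{*} \;=\; \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=1}^{\infty} \gamma^{\,t-1} r_t\right],
\]

where \(r_t\) is the reward at time \(t\) and \(\gamma \in (0,1)\) is a discount factor. This is a generic textbook sketch, not a formula from the conversation.]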
link |
00:54:23.320
But that sort of definition is pathological in various ways.
link |
00:54:27.800
And my friend David Weinbaum, AKA Weaver,
link |
00:54:31.320
he had a beautiful PhD thesis on open ended intelligence,
link |
00:54:34.840
trying to conceive intelligence in a...
link |
00:54:36.880
Without a reward.
link |
00:54:38.240
Yeah, he's just looking at it differently.
link |
00:54:40.120
He's looking at complex self organizing systems
link |
00:54:42.680
and looking at an intelligent system
link |
00:54:44.640
as being one that revises and grows
link |
00:54:47.600
and improves itself in conjunction with its environment
link |
00:54:51.740
without necessarily there being one objective function
link |
00:54:54.880
it's trying to maximize.
link |
00:54:56.080
Although over certain intervals of time,
link |
00:54:58.520
it may act as if it's optimizing
link |
00:54:59.960
a certain objective function.
link |
00:55:01.360
Very much Solaris from Stanislaw Lem's novels, right?
link |
00:55:04.580
So yeah, the point is artificial, general and intelligence.
link |
00:55:07.880
Don't work.
link |
00:55:08.720
They're all bad.
link |
00:55:09.540
On the other hand, everyone knows what AI is.
link |
00:55:12.040
And AGI seems immediately comprehensible
link |
00:55:15.880
to people with a technical background.
link |
00:55:17.520
So I think that the term has served
link |
00:55:19.360
a sociological function.
link |
00:55:20.720
And now it's out there everywhere, which baffles me.
link |
00:55:24.720
It's like KFC.
link |
00:55:25.800
I mean, that's it.
link |
00:55:27.080
We're stuck with AGI probably for a very long time
link |
00:55:30.200
until AGI systems take over and rename themselves.
link |
00:55:33.640
Yeah.
link |
00:55:34.480
And then we'll be biological.
link |
00:55:36.160
We're stuck with GPUs too,
link |
00:55:37.560
which mostly have nothing to do with graphics.
link |
00:55:39.320
Any more, right?
link |
00:55:40.520
I wonder what the AGI system will call us humans.
link |
00:55:43.260
That was maybe.
link |
00:55:44.280
Grandpa.
link |
00:55:45.120
Yeah.
link |
00:55:45.960
Yeah.
link |
00:55:46.800
GPs.
link |
00:55:47.620
Yeah.
link |
00:55:48.460
Grandpa processing unit, yeah.
link |
00:55:50.320
Biological grandpa processing units.
link |
00:55:52.120
Yeah.
link |
00:55:54.280
Okay, so maybe also just a comment on AGI representing
link |
00:56:00.580
before even the term existed,
link |
00:56:02.160
representing a kind of community.
link |
00:56:04.640
You've talked about this in the past,
link |
00:56:06.240
sort of AI is coming in waves,
link |
00:56:08.340
but there's always been this community of people
link |
00:56:10.440
who dream about creating general human level
link |
00:56:15.160
super intelligence systems.
link |
00:56:19.000
Can you maybe give your sense of the history
link |
00:56:21.880
of this community as it exists today,
link |
00:56:24.280
as it existed before this deep learning revolution
link |
00:56:26.720
all throughout the winters and the summers of AI?
link |
00:56:29.520
Sure.
link |
00:56:30.340
First, I would say as a side point,
link |
00:56:33.500
the winters and summers of AI are greatly exaggerated
link |
00:56:37.840
by Americans, in that,
link |
00:56:40.960
if you look at the publication record
link |
00:56:43.600
of the artificial intelligence community
link |
00:56:46.400
since say the 1950s,
link |
00:56:48.480
you would find a pretty steady growth
link |
00:56:51.360
and advance of ideas and papers.
link |
00:56:53.980
And what's thought of as an AI winter or summer
link |
00:56:57.720
was sort of how much money is the US military
link |
00:57:00.480
pumping into AI, which was meaningful.
link |
00:57:04.640
On the other hand, there was AI going on in Germany,
link |
00:57:06.960
UK and in Japan and in Russia, all over the place,
link |
00:57:10.960
while US military got more and less enthused about AI.
link |
00:57:16.300
So, I mean.
link |
00:57:17.560
That happened to be, just for people who don't know,
link |
00:57:20.200
the US military happened to be the main source
link |
00:57:22.840
of funding for AI research.
link |
00:57:24.500
So another way to phrase that is it's up and down
link |
00:57:27.480
of funding for artificial intelligence research.
link |
00:57:31.080
And I would say the correlation between funding
link |
00:57:34.600
and intellectual advance was not 100%, right?
link |
00:57:38.120
Because I mean, in Russia, as an example, or in Germany,
link |
00:57:42.120
there was less dollar funding than in the US,
link |
00:57:44.840
but many foundational ideas were laid out,
link |
00:57:48.160
but it was more theory than implementation, right?
link |
00:57:50.880
And US really excelled at sort of breaking through
link |
00:57:54.600
from theoretical papers to working implementations,
link |
00:58:00.200
which did go up and down somewhat
link |
00:58:03.020
with US military funding,
link |
00:58:04.320
but still, I mean, you can look in the 1980s,
link |
00:58:07.440
Dietrich Derner in Germany had self driving cars
link |
00:58:10.400
on the Autobahn, right?
link |
00:58:11.440
And I mean, it was a little early
link |
00:58:15.600
with regard to the car industry,
link |
00:58:16.920
so it didn't catch on the way it has now.
link |
00:58:20.200
But I mean, that whole advancement
link |
00:58:22.960
of self driving car technology in Germany
link |
00:58:25.900
was pretty much independent of AI military summers
link |
00:58:29.720
and winters in the US.
link |
00:58:31.040
So there's been more going on in AI globally
link |
00:58:34.480
than not only most people on the planet realize,
link |
00:58:37.120
but also than most new AI PhDs realize,
link |
00:58:40.080
because they've come up within a certain sub field of AI
link |
00:58:44.600
and haven't had to look so much beyond that.
link |
00:58:47.680
But I would say when I got my PhD in 1989 in mathematics,
link |
00:58:54.300
I was interested in AI already.
link |
00:58:56.000
In Philadelphia.
link |
00:58:56.840
Yeah, I started at NYU, then I transferred to Philadelphia
link |
00:59:00.920
to Temple University, good old North Philly.
link |
00:59:03.960
North Philly.
link |
00:59:04.800
Yeah, yeah, yeah, the pearl of the US.
link |
00:59:09.280
You never stopped at a red light then
link |
00:59:10.920
because you were afraid if you stopped at a red light,
link |
00:59:12.760
someone will carjack you.
link |
00:59:13.760
So you just drive through every red light.
link |
00:59:15.960
Yeah.
link |
00:59:18.200
Every day driving or bicycling to Temple from my house
link |
00:59:20.940
was like a new adventure.
link |
00:59:24.280
But yeah, the reason I didn't do a PhD in AI
link |
00:59:27.520
was what people were doing in the academic AI field then,
link |
00:59:30.860
was just astoundingly boring and seemed wrong headed to me.
link |
00:59:34.880
It was really like rule based expert systems
link |
00:59:38.060
and production systems.
link |
00:59:39.360
And actually I loved mathematical logic.
link |
00:59:42.080
I had nothing against logic as the cognitive engine for an AI,
link |
00:59:45.840
but the idea that you could type in the knowledge
link |
00:59:48.920
that AI would need to think seemed just completely stupid
link |
00:59:52.720
and wrong headed to me.
link |
00:59:55.380
I mean, you can use logic if you want,
link |
00:59:57.400
but somehow the system has got to be...
link |
01:00:00.160
Automated.
link |
01:00:01.000
Learning, right?
link |
01:00:01.840
It should be learning from experience.
link |
01:00:03.800
And the AI field then was not interested
link |
01:00:06.120
in learning from experience.
link |
01:00:08.320
I mean, some researchers certainly were.
link |
01:00:11.020
I mean, I remember in mid eighties,
link |
01:00:13.960
I discovered a book by John Andreae,
link |
01:00:17.160
which was, it was about a reinforcement learning system
link |
01:00:21.920
called PURR-PUSS, which was an acronym
link |
01:00:27.080
that I can't even remember what it was for,
link |
01:00:28.640
but purpose anyway.
link |
01:00:30.400
But he, I mean, that was a system
link |
01:00:32.000
that was supposed to be an AGI
link |
01:00:34.360
and basically by some sort of fancy
link |
01:00:38.120
like Markov decision process learning,
link |
01:00:41.000
it was supposed to learn everything
link |
01:00:43.440
just from the bits coming into it
link |
01:00:44.880
and learn to maximize its reward
link |
01:00:46.720
and become intelligent, right?
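[A minimal illustrative sketch of the kind of reward-maximizing learning being described: generic tabular Q-learning over a stream of bit observations. This is not Andreae's actual PURR-PUSS design, only the flavor of "learn to maximize reward from the bits coming in"; env_step is a hypothetical environment function supplied by the caller.]

```python
import random
from collections import defaultdict

def q_learning_over_bits(env_step, n_steps=10_000, alpha=0.1, gamma=0.9, eps=0.1):
    """Learn action values from a stream of bits, trying to maximize reward.
    env_step(state, action) -> (next_bit, reward) is a hypothetical environment."""
    Q = defaultdict(float)          # Q[(state, action)] -> estimated value
    actions = (0, 1)
    state = 0                       # the last observed bit
    for _ in range(n_steps):
        # Epsilon-greedy choice: mostly exploit the best-known action, sometimes explore.
        if random.random() < eps:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_bit, reward = env_step(state, action)
        # Standard Q-learning update toward reward plus discounted best next value.
        best_next = max(Q[(next_bit, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_bit
    return Q
```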
link |
01:00:49.080
So that was there in academia back then,
link |
01:00:51.800
but it was like isolated, scattered, weird people.
link |
01:00:55.240
But all these isolated, scattered, weird people
link |
01:00:57.440
in that period, I mean, they laid the intellectual grounds
link |
01:01:01.280
for what happened later.
link |
01:01:02.120
So you look at John Andreae at University of Canterbury
link |
01:01:05.300
with his PURR-PUSS reinforcement learning Markov system.
link |
01:01:09.720
He was the PhD supervisor for John Cleary in New Zealand.
link |
01:01:14.080
Now, John Cleary worked with me
link |
01:01:17.080
when I was at Waikato University in 1993 in New Zealand.
link |
01:01:21.680
And he worked with Ian Witten there
link |
01:01:23.900
and they launched WEKA,
link |
01:01:25.940
which was the first open source machine learning toolkit,
link |
01:01:29.840
which was launched in, I guess, 93 or 94
link |
01:01:33.520
when I was at Waikato University.
link |
01:01:35.160
Written in Java, unfortunately.
link |
01:01:36.480
Written in Java, which was a cool language back then.
link |
01:01:39.620
I guess it's still, well, it's not cool anymore,
link |
01:01:41.720
but it's powerful.
link |
01:01:43.280
I find, like most programmers now,
link |
01:01:45.760
I find Java unnecessarily bloated,
link |
01:01:48.820
but back then it was like Java or C++ basically.
link |
01:01:52.020
And Java was easier for students.
link |
01:01:55.760
Amusingly, a lot of the work on WEKA
link |
01:01:57.760
when we were in New Zealand was funded by a US,
link |
01:02:01.200
sorry, a New Zealand government grant
link |
01:02:03.880
to use machine learning
link |
01:02:05.440
to predict the menstrual cycles of cows.
link |
01:02:08.240
So in the US, all the grant funding for AI
link |
01:02:10.440
was about how to kill people or spy on people.
link |
01:02:13.600
In New Zealand, it's all about cows or kiwi fruits, right?
link |
01:02:16.400
Yeah.
link |
01:02:17.560
So yeah, anyway, I mean, John Andreae
link |
01:02:20.560
had his probability theory based reinforcement learning,
link |
01:02:24.320
proto AGI.
link |
01:02:25.780
John Cleary was trying to do much more ambitious,
link |
01:02:29.400
probabilistic AGI systems.
link |
01:02:31.820
Now, John Cleary helped do WEKA,
link |
01:02:36.160
which is the first open source machine learning toolkit.
link |
01:02:39.360
So the predecessor for TensorFlow and Torch
link |
01:02:41.520
and all these things.
link |
01:02:43.040
Also, Shane Legg was at Waikato
link |
01:02:46.800
working with John Cleary and Ian Witten
link |
01:02:50.240
and this whole group.
link |
01:02:51.500
And then working with my own companies,
link |
01:02:55.800
my company, WebMind, an AI company I had in the late 90s
link |
01:02:59.840
with a team there at Waikato University,
link |
01:03:02.320
which is how Shane got his head full of AGI,
link |
01:03:05.360
which led him to go on
link |
01:03:06.440
and with Demis Hassabis found DeepMind.
link |
01:03:08.660
So what you can see through that lineage is,
link |
01:03:11.060
you know, in the 80s and 70s,
link |
01:03:12.580
John Andreae was trying to build probabilistic
link |
01:03:14.800
reinforcement learning AGI systems.
link |
01:03:17.200
The technology, the computers just weren't there to support
link |
01:03:19.680
his ideas, which were very similar to what people are doing now.
link |
01:03:23.920
But, you know, although he's long since passed away
link |
01:03:27.720
and didn't become that famous outside of Canterbury,
link |
01:03:30.940
I mean, the lineage of ideas passed on from him
link |
01:03:33.720
to his students, to their students,
link |
01:03:35.140
you can go trace directly from there to me
link |
01:03:37.920
and to DeepMind, right?
link |
01:03:39.480
So that there was a lot going on in AGI
link |
01:03:42.180
that did ultimately lay the groundwork
link |
01:03:46.460
for what we have today, but there wasn't a community, right?
link |
01:03:48.560
And so when I started trying to pull together
link |
01:03:53.520
an AGI community, it was in the, I guess,
link |
01:03:56.920
the early aughts when I was living in Washington, D.C.
link |
01:04:00.400
and making a living doing AI consulting
link |
01:04:03.440
for various U.S. government agencies.
link |
01:04:07.080
And I organized the first AGI workshop in 2006.
link |
01:04:13.200
And I mean, it wasn't like it was literally
link |
01:04:15.780
in my basement or something.
link |
01:04:17.000
I mean, it was in the conference room at the Marriott
link |
01:04:19.320
in Bethesda, it's not that edgy or underground,
link |
01:04:23.200
unfortunately, but still.
link |
01:04:25.000
How many people attended?
link |
01:04:25.840
About 60 or something.
link |
01:04:27.600
That's not bad.
link |
01:04:28.480
I mean, D.C. has a lot of AI going on,
link |
01:04:30.780
probably until the last five or 10 years,
link |
01:04:34.200
much more than Silicon Valley, although it's just quiet
link |
01:04:37.800
because of the nature of what happens in D.C.
link |
01:04:41.280
Their business isn't driven by PR.
link |
01:04:43.600
Mostly when something starts to work really well,
link |
01:04:46.140
it's taken black and becomes even more quiet, right?
link |
01:04:49.640
But yeah, the thing is that really had the feeling
link |
01:04:52.880
of a group of starry eyed mavericks huddled in a basement,
link |
01:04:58.400
like plotting how to overthrow the narrow AI establishment.
link |
01:05:02.520
And for the first time, in some cases,
link |
01:05:05.760
coming together with others who shared their passion
link |
01:05:08.680
for AGI and the technical seriousness about working on it.
link |
01:05:13.200
And that's very, very different than what we have today.
link |
01:05:19.160
I mean, now it's a little bit different.
link |
01:05:22.320
We have AGI conference every year
link |
01:05:24.640
and there's several hundred people rather than 50.
link |
01:05:29.300
Now it's more like this is the main gathering
link |
01:05:32.760
of people who want to achieve AGI
link |
01:05:35.020
and think that large scale nonlinear regression
link |
01:05:39.220
is not the golden path to AGI.
link |
01:05:42.480
So I mean it's...
link |
01:05:43.320
AKA neural networks.
link |
01:05:44.160
Yeah, yeah, yeah.
link |
01:05:44.980
Well, certain architectures for learning using neural networks.
link |
01:05:51.840
So yeah, the AGI conferences are sort of now
link |
01:05:54.440
the main concentration of people not obsessed
link |
01:05:57.960
with deep neural nets and deep reinforcement learning,
link |
01:06:00.880
but still interested in AGI, not the only ones.
link |
01:06:06.460
I mean, there's other little conferences and groupings
link |
01:06:10.200
interested in human level AI
link |
01:06:13.280
and cognitive architectures and so forth.
link |
01:06:16.040
But yeah, it's been a big shift.
link |
01:06:17.880
Like back then, you couldn't really...
link |
01:06:21.960
It would have been very, very edgy then
link |
01:06:23.540
to give a university department seminar
link |
01:06:26.220
that mentioned AGI or human level AI.
link |
01:06:28.440
It was more like you had to talk about
link |
01:06:30.640
something more short term and immediately practical
link |
01:06:34.360
and then in the bar after the seminar,
link |
01:06:36.600
you could bullshit about AGI in the same breath
link |
01:06:39.540
as time travel or the simulation hypothesis or something.
link |
01:06:44.200
Whereas now, AGI is not only in the academic seminar room,
link |
01:06:48.360
like you have Vladimir Putin knows what AGI is.
link |
01:06:51.960
And he's like, Russia needs to become the leader in AGI.
link |
01:06:55.480
So national leaders and CEOs of large corporations.
link |
01:07:01.080
I mean, the CTO of Intel, Justin Rattner,
link |
01:07:04.240
this was years ago, Singularity Summit Conference,
link |
01:07:06.840
2008 or something.
link |
01:07:07.780
He's like, we believe Ray Kurzweil,
link |
01:07:10.080
the singularity will happen in 2045
link |
01:07:12.000
and it will have Intel inside.
link |
01:07:13.640
So, I mean, it's gone from being something
link |
01:07:18.840
which is the pursuit of like crazed mavericks,
link |
01:07:21.700
crackpots and science fiction fanatics
link |
01:07:24.540
to being a marketing term for large corporations
link |
01:07:30.120
and the national leaders,
link |
01:07:31.480
which is an astounding transition.
link |
01:07:35.160
But yeah, in the course of this transition,
link |
01:07:40.160
I think a bunch of sub communities have formed
link |
01:07:42.260
and the community around the AGI conference series
link |
01:07:45.800
is certainly one of them.
link |
01:07:47.640
It hasn't grown as big as I might've liked it to.
link |
01:07:51.940
On the other hand, sometimes a modest size community
link |
01:07:56.320
can be better for making intellectual progress also.
link |
01:07:59.080
Like you go to a society for neuroscience conference,
link |
01:08:02.160
you have 35 or 40,000 neuroscientists.
link |
01:08:05.400
On the one hand, it's amazing.
link |
01:08:07.480
On the other hand, you're not gonna talk to the leaders
link |
01:08:10.920
of the field there if you're an outsider.
link |
01:08:14.160
Yeah, in the same sense, the AAAI,
link |
01:08:17.920
the artificial intelligence,
link |
01:08:20.160
the main kind of generic artificial intelligence
link |
01:08:23.640
conference is too big.
link |
01:08:26.920
It's too amorphous.
link |
01:08:28.280
Like it doesn't make sense.
link |
01:08:30.240
Well, yeah, and NIPS has become a company advertising outlet
link |
01:08:35.240
on the whole.
link |
01:08:37.000
So, I mean, to comment on the role of AGI
link |
01:08:40.240
in the research community, I'd still,
link |
01:08:42.680
if you look at NeurIPS, if you look at CVPR,
link |
01:08:45.200
if you look at ICLR,
link |
01:08:49.240
AGI is still seen as the outcast.
link |
01:08:51.860
I would say in these main machine learning,
link |
01:08:55.020
in these main artificial intelligence conferences
link |
01:08:59.040
amongst the researchers,
link |
01:09:00.880
I don't know if it's an accepted term yet.
link |
01:09:03.880
What I've seen bravely, you mentioned Shane Legg's
link |
01:09:08.280
DeepMind and then OpenAI are the two places that are,
link |
01:09:13.000
I would say unapologetically so far,
link |
01:09:15.580
I think it's actually changing unfortunately,
link |
01:09:17.440
but so far they've been pushing the idea
link |
01:09:19.640
that the goal is to create an AGI.
link |
01:09:22.760
Well, they have billions of dollars behind them.
link |
01:09:24.360
So, I mean, they're in the public mind
link |
01:09:27.220
that certainly carries some oomph, right?
link |
01:09:30.120
I mean, I mean.
link |
01:09:30.960
But they also have really strong researchers, right?
link |
01:09:33.160
They do, they're great teams.
link |
01:09:34.260
I mean, DeepMind in particular, yeah.
link |
01:09:36.660
And they have, I mean, DeepMind has Marcus Hutter
link |
01:09:39.280
walking around.
link |
01:09:40.120
I mean, there's all these folks who basically
link |
01:09:43.480
their full time position involves dreaming
link |
01:09:46.400
about creating AGI.
link |
01:09:47.800
I mean, Google Brain has a lot of amazing
link |
01:09:51.320
AGI oriented people also.
link |
01:09:53.240
And I mean, so I'd say from a public marketing view,
link |
01:09:59.840
DeepMind and OpenAI are the two large well funded
link |
01:10:03.820
organizations that have put the term and concept AGI
link |
01:10:08.360
out there sort of as part of their public image.
link |
01:10:12.720
But I mean, they're certainly not,
link |
01:10:15.200
there are other groups that are doing research
link |
01:10:17.160
that seems just as AGIish to me.
link |
01:10:20.660
I mean, including a bunch of groups in Google's
link |
01:10:23.320
main Mountain View office.
link |
01:10:26.000
So yeah, it's true.
link |
01:10:27.960
AGI is somewhat away from the mainstream now.
link |
01:10:33.880
But if you compare it to where it was 15 years ago,
link |
01:10:38.040
there's been an amazing mainstreaming.
link |
01:10:41.960
You could say the same thing about super longevity research,
link |
01:10:45.520
which is one of my application areas that I'm excited about.
link |
01:10:49.120
I mean, I've been talking about this since the 90s,
link |
01:10:52.880
but working on this since 2001.
link |
01:10:54.560
And back then, really to say,
link |
01:10:57.280
you're trying to create therapies to allow people
link |
01:10:59.440
to live hundreds or thousands of years,
link |
01:11:02.360
you were way, way, way, way out of the industry,
link |
01:11:05.520
academic mainstream.
link |
01:11:06.720
But now, Google had Project Calico,
link |
01:11:11.540
Craig Venter had Human Longevity Incorporated.
link |
01:11:14.080
And then once the suits come marching in, right?
link |
01:11:17.160
I mean, once there's big money in it,
link |
01:11:20.200
then people are forced to take it seriously
link |
01:11:22.720
because that's the way modern society works.
link |
01:11:24.880
So it's still not as mainstream as cancer research,
link |
01:11:28.400
just as AGI is not as mainstream
link |
01:11:31.060
as automated driving or something.
link |
01:11:32.960
But the degree of mainstreaming that's happened
link |
01:11:36.020
in the last 10 to 15 years is astounding
link |
01:11:40.120
to those of us who've been at it for a while.
link |
01:11:42.080
Yeah, but there's a marketing aspect to the term,
link |
01:11:45.360
but in terms of actual full force research
link |
01:11:48.800
that's going on under the header of AGI,
link |
01:11:51.280
it's currently, I would say dominated,
link |
01:11:54.280
maybe you can disagree,
link |
01:11:55.960
dominated by neural networks research,
link |
01:11:57.740
the nonlinear regression, as you mentioned.
link |
01:12:02.740
Like what's your sense with OpenCog, with your work,
link |
01:12:06.520
but in general, also, logic based systems
link |
01:12:10.920
and expert systems,
link |
01:12:12.000
for me, always seemed to capture a deep element
link |
01:12:18.440
of intelligence that needs to be there.
link |
01:12:21.400
Like you said, it needs to learn,
link |
01:12:23.020
it needs to be automated somehow,
link |
01:12:24.900
but that seems to be missing from a lot of research currently.
link |
01:12:31.360
So what's your sense?
link |
01:12:34.360
I guess one way to ask this question,
link |
01:12:36.280
what's your sense of what kind of things
link |
01:12:39.200
will an AGI system need to have?
link |
01:12:43.480
Yeah, that's a very interesting topic
link |
01:12:45.960
that I've thought about for a long time.
link |
01:12:47.900
And I think there are many, many different approaches
link |
01:12:53.840
that can work for getting to human level AI.
link |
01:12:56.920
So I don't think there's like one golden algorithm,
link |
01:13:02.600
or one golden design that can work.
link |
01:13:05.840
And I mean, flying machines is the much worn
link |
01:13:10.720
analogy here, right?
link |
01:13:11.680
Like, I mean, you have airplanes, you have helicopters,
link |
01:13:13.760
you have balloons, you have stealth bombers
link |
01:13:17.160
that don't look like regular airplanes.
link |
01:13:18.760
You've got blimps.
link |
01:13:21.040
Birds too.
link |
01:13:21.880
Birds, yeah, and bugs, right?
link |
01:13:24.280
Yeah.
link |
01:13:25.120
And there are certainly many kinds of flying machines that.
link |
01:13:29.920
And there's a catapult that you can just launch.
link |
01:13:32.360
And there's bicycle powered like flying machines, right?
link |
01:13:36.160
Nice, yeah.
link |
01:13:37.000
Yeah, so now these are all analyzable
link |
01:13:40.920
by a basic theory of aerodynamics, right?
link |
01:13:43.800
Now, so one issue with AGI is we don't yet have the analog
link |
01:13:48.920
of the theory of aerodynamics.
link |
01:13:50.800
And that's what Marcus Hutter was trying to make
link |
01:13:54.640
with AIXI and his general theory of general intelligence.
link |
01:13:58.820
But that theory in its most clearly articulated parts
link |
01:14:03.360
really only works for either infinitely powerful machines
link |
01:14:07.120
or almost, or insanely impractically powerful machines.
link |
01:14:11.840
So I mean, if you were gonna take a theory based approach
link |
01:14:14.880
to AGI, what you would do is say, well, let's take
link |
01:14:20.040
what's called, say, AIXItl, which is Hutter's AIXI machine
link |
01:14:25.040
that can work on merely insanely much processing power
link |
01:14:29.000
rather than infinitely much.
link |
01:14:30.200
What does TL stand for?
link |
01:14:32.240
Time and length.
link |
01:14:33.560
Okay.
link |
01:14:34.400
So you're basically how it.
link |
01:14:35.600
Like constrained somehow.
link |
01:14:36.480
Yeah, yeah, yeah.
link |
01:14:37.320
So how AIXI works basically is each action
link |
01:14:42.420
that it wants to take, before taking that action,
link |
01:14:45.040
it looks at all its history.
link |
01:14:47.080
And then it looks at all possible programs
link |
01:14:49.880
that it could use to make a decision.
link |
01:14:51.760
And it decides like which decision program
link |
01:14:54.320
would have let it make the best decisions
link |
01:14:56.120
according to its reward function over its history.
link |
01:14:58.400
And it uses that decision program
link |
01:15:00.000
to make the next decision, right?
link |
01:15:02.080
It's not afraid of infinite resources.
link |
01:15:04.760
It's searching through the space
link |
01:15:06.360
of all possible computer programs
link |
01:15:08.440
in between each action and each next action.
link |
01:15:10.720
Now, AIXItl searches through all possible computer programs
link |
01:15:15.320
that have runtime less than T and length less than L.
link |
01:15:18.160
So it's still an impractically humongous space,
link |
01:15:22.680
right?
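[A loose pseudocode sketch of the brute-force search being described: enumerate every decision program shorter than L, score each by how well it would have done on the agent's history under the reward function, and let the winner pick the next action. run_program and reward_fn are hypothetical helpers, and the real AIXItl construction also involves proof search and a universal prior, so this only conveys the flavor of the idea.]

```python
from itertools import product

def all_programs(max_length, alphabet):
    """Enumerate every candidate decision program of length <= max_length."""
    for n in range(1, max_length + 1):
        yield from product(alphabet, repeat=n)

def aixitl_style_action(history, reward_fn, run_program, T, L, alphabet):
    """Pick the next action by testing every short, time-bounded program
    against the whole history and keeping the one that would have scored best.
    run_program(prog, history, T) returns the program's proposed action,
    or None if it fails to halt within T steps (hypothetical interpreter)."""
    best_score, best_program = float("-inf"), None
    for prog in all_programs(L, alphabet):
        score = 0.0
        for t in range(len(history)):
            action = run_program(prog, history[:t], T)
            if action is None:          # exceeded the runtime bound T
                score = float("-inf")
                break
            score += reward_fn(history[:t], action)
        if score > best_score:
            best_score, best_program = score, prog
    # Use the best decision program found to choose the actual next action.
    return run_program(best_program, history, T) if best_program else None
```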
link |
01:15:23.520
So what you would like to do to make an AGI
link |
01:15:27.960
and what will probably be done 50 years from now
link |
01:15:29.840
to make an AGI is say, okay, well, we have some constraints.
link |
01:15:34.840
We have these processing power constraints
link |
01:15:37.480
and we have the space and time constraints on the program.
link |
01:15:42.700
We have energy utilization constraints
link |
01:15:45.360
and we have this particular class environments,
link |
01:15:48.160
class of environments that we care about,
link |
01:15:50.320
which may be say, you know, manipulating physical objects
link |
01:15:54.400
on the surface of the earth,
link |
01:15:55.400
communicating in human language.
link |
01:15:57.360
I mean, whatever our particular, not annihilating humanity,
link |
01:16:02.240
whatever our particular requirements happen to be.
link |
01:16:05.440
If you formalize those requirements
link |
01:16:07.280
in some formal specification language,
link |
01:16:10.300
you should then be able to run
link |
01:16:13.320
an automated program specializer on AIXItl,
link |
01:16:17.040
specialize it to the computing resource constraints
link |
01:16:21.400
and the particular environment and goal.
link |
01:16:23.600
And then it will spit out like the specialized version
link |
01:16:27.600
of AIXItl to your resource restrictions
link |
01:16:30.620
and your environment, which will be your AGI, right?
link |
01:16:32.700
And that I think is how our super AGI
link |
01:16:36.160
will create new AGI systems, right?
link |
01:16:38.560
But that's a very...
link |
01:16:40.600
It seems really inefficient.
link |
01:16:41.600
It's a very Russian approach by the way,
link |
01:16:43.160
like the whole field of program specialization
link |
01:16:45.240
came out of Russia.
link |
01:16:47.280
Can you backtrack?
link |
01:16:48.120
So what is program specialization?
link |
01:16:49.680
So it's basically...
link |
01:16:51.120
Well, take sorting, for example.
link |
01:16:53.640
You can have a generic program for sorting lists,
link |
01:16:56.640
but what if all your lists you care about
link |
01:16:58.280
are length 10,000 or less?
link |
01:16:59.920
Got it.
link |
01:17:00.760
You can run an automated program specializer
link |
01:17:02.560
on your sorting algorithm,
link |
01:17:04.080
and it will come up with the algorithm
link |
01:17:05.400
that's optimal for sorting lists of length 1,000 or less,
link |
01:17:08.400
or 10,000 or less, right?
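[A toy illustration of program specialization under one simple assumption: the only thing the specializer exploits here is a promised bound on input length. Real partial evaluation is far more general than this sketch.]

```python
def generic_sort(xs):
    """The fully general routine: must handle lists of any size."""
    return sorted(xs)

def specialize_sort(max_len):
    """Toy 'specializer': given the promise that inputs never exceed max_len,
    emit a routine tuned for that case (insertion sort is cheap for small n)."""
    def specialized_sort(xs):
        assert len(xs) <= max_len, "specialized routine only accepts short lists"
        out = list(xs)
        for i in range(1, len(out)):
            key, j = out[i], i - 1
            while j >= 0 and out[j] > key:   # shift larger items to the right
                out[j + 1] = out[j]
                j -= 1
            out[j + 1] = key
        return out
    return specialized_sort

sort_short = specialize_sort(10_000)
print(sort_short([3, 1, 2]))   # -> [1, 2, 3]
```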
link |
01:17:09.800
That's kind of like, isn't that the kind of the process
link |
01:17:12.200
of evolution as a program specializer to the environment?
link |
01:17:17.440
So you're kind of evolving human beings,
link |
01:17:20.000
or you're living creatures.
link |
01:17:21.840
Your Russian heritage is showing there.
link |
01:17:24.320
So with Alexander Vityaev and Peter Anokhin and so on,
link |
01:17:28.480
I mean, there's a long history
link |
01:17:31.800
of thinking about evolution that way also, right?
link |
01:17:36.760
So, well, my point is that what we're thinking of
link |
01:17:40.120
as a human level general intelligence,
link |
01:17:44.160
if you start from narrow AIs,
link |
01:17:46.680
like are being used in the commercial AI field now,
link |
01:17:50.320
then you're thinking,
link |
01:17:51.440
okay, how do we make it more and more general?
link |
01:17:53.400
On the other hand,
link |
01:17:54.400
if you start from AIXI or Schmidhuber's Gödel machine,
link |
01:17:58.080
or these infinitely powerful,
link |
01:18:01.120
but practically infeasible AIs,
link |
01:18:04.000
then getting to a human level AGI
link |
01:18:06.440
is a matter of specialization.
link |
01:18:08.240
It's like, how do you take these
link |
01:18:10.200
maximally general learning processes
link |
01:18:12.880
and how do you specialize them
link |
01:18:15.760
so that they can operate
link |
01:18:17.600
within the resource constraints that you have,
link |
01:18:20.520
but will achieve the particular things that you care about?
link |
01:18:24.360
Because we humans are not maximally general intelligence.
link |
01:18:28.200
If I ask you to run a maze in 750 dimensions,
link |
01:18:31.400
you'd probably be very slow.
link |
01:18:33.040
Whereas at two dimensions,
link |
01:18:34.600
you're probably way better, right?
link |
01:18:37.080
So, I mean, we're special because our hippocampus
link |
01:18:40.800
has a two dimensional map in it, right?
link |
01:18:43.080
And it does not have a 750 dimensional map in it.
link |
01:18:46.000
So, I mean, we're a peculiar mix
link |
01:18:51.440
of generality and specialization, right?
link |
01:18:56.000
We probably start quite general at birth.
link |
01:18:59.200
Now, obviously, still narrow,
link |
01:19:00.760
but like more general than we are
link |
01:19:03.200
at age 20 and 30 and 40 and 50 and 60.
link |
01:19:07.520
I don't think that, I think it's more complex than that
link |
01:19:10.240
because I mean, in some sense,
link |
01:19:13.800
a young child is less biased
link |
01:19:17.520
and the brain has yet to sort of crystallize
link |
01:19:20.000
into appropriate structures
link |
01:19:22.360
for processing aspects of the physical and social world.
link |
01:19:25.360
On the other hand,
link |
01:19:26.560
the young child is very tied to their sensorium.
link |
01:19:30.120
Whereas we can deal with abstract mathematics,
link |
01:19:33.880
like 750 dimensions and the young child cannot
link |
01:19:37.600
because they haven't grown what Piaget
link |
01:19:40.920
called the formal capabilities.
link |
01:19:44.000
They haven't learned to abstract yet, right?
link |
01:19:46.240
And the ability to abstract
link |
01:19:48.120
gives you a different kind of generality
link |
01:19:49.720
than what the baby has.
link |
01:19:51.680
So, there's both more specialization
link |
01:19:55.400
and more generalization that comes
link |
01:19:57.240
with the development process actually.
link |
01:19:59.760
I mean, I guess just the trajectories
link |
01:20:02.320
of the specialization are most controllable
link |
01:20:06.320
at the young age, I guess is one way to put it.
link |
01:20:09.720
Do you have kids?
link |
01:20:10.720
No.
link |
01:20:11.680
They're not as controllable as you think.
link |
01:20:13.600
So, you think it's interesting.
link |
01:20:15.880
I think, honestly, I think a human adult
link |
01:20:19.040
is much more generally intelligent than a human baby.
link |
01:20:23.240
Babies are very stupid, you know what I mean?
link |
01:20:25.800
I mean, they're cute, which is why we put up
link |
01:20:29.480
with their repetitiveness and stupidity.
link |
01:20:33.080
And they have what the Zen guys would call
link |
01:20:35.040
a beginner's mind, which is a beautiful thing,
link |
01:20:38.200
but that doesn't necessarily correlate
link |
01:20:40.760
with a high level of intelligence.
link |
01:20:43.320
On the plot of cuteness and stupidity,
link |
01:20:46.120
there's a process that allows us to put up
link |
01:20:48.720
with their stupidity as they become more intelligent.
link |
01:20:50.880
So, by the time you're an ugly old man like me,
link |
01:20:52.400
you gotta get really, really smart to compensate.
link |
01:20:54.720
To compensate, okay, cool.
link |
01:20:56.160
But yeah, going back to your original question,
link |
01:20:59.160
so the way I look at human level AGI
link |
01:21:05.280
is how do you specialize, you know,
link |
01:21:08.640
unrealistically inefficient, superhuman,
link |
01:21:12.160
brute force learning processes
link |
01:21:14.600
to the specific goals that humans need to achieve
link |
01:21:18.320
and the specific resources that we have.
link |
01:21:21.920
And both of these, the goals and the resources
link |
01:21:24.600
and the environments, I mean, all this is important.
link |
01:21:27.120
And on the resources side, it's important
link |
01:21:31.320
that the hardware resources we're bringing to bear
link |
01:21:35.600
are very different than the human brain.
link |
01:21:38.240
So the way I would want to implement AGI
link |
01:21:42.680
on a bunch of neurons in a vat
link |
01:21:45.960
that I could rewire arbitrarily is quite different
link |
01:21:48.880
than the way I would want to create AGI
link |
01:21:51.760
on say a modern server farm of CPUs and GPUs,
link |
01:21:55.760
which in turn may be quite different
link |
01:21:57.440
than the way I would want to implement AGI
link |
01:22:00.200
on whatever quantum computer we'll have in 10 years,
link |
01:22:03.760
supposing someone makes a robust quantum Turing machine
link |
01:22:06.680
or something, right?
link |
01:22:08.240
So I think there's been coevolution
link |
01:22:12.640
of the patterns of organization in the human brain
link |
01:22:16.960
and the physiological particulars
link |
01:22:19.960
of the human brain over time.
link |
01:22:23.240
And when you look at neural networks,
link |
01:22:25.240
that is one powerful class of learning algorithms,
link |
01:22:28.040
but it's also a class of learning algorithms
link |
01:22:30.040
that evolve to exploit the particulars of the human brain
link |
01:22:33.400
as a computational substrate.
link |
01:22:36.320
If you're looking at the computational substrate
link |
01:22:38.880
of a modern server farm,
link |
01:22:41.040
you won't necessarily want the same algorithms
link |
01:22:43.200
that you want on the human brain.
link |
01:22:45.760
And from the right level of abstraction,
link |
01:22:48.920
you could look at maybe the best algorithms on the brain
link |
01:22:51.760
and the best algorithms on a modern computer network
link |
01:22:54.480
as implementing the same abstract learning
link |
01:22:56.480
and representation processes,
link |
01:22:59.080
but finding that level of abstraction
link |
01:23:01.680
is its own AGI research project then, right?
link |
01:23:04.960
So that's about the hardware side
link |
01:23:07.800
and the software side, which follows from that.
link |
01:23:10.880
Then regarding what are the requirements,
link |
01:23:14.200
I wrote the paper years ago
link |
01:23:16.440
on what I called the embodied communication prior,
link |
01:23:20.360
which was quite similar in intent
link |
01:23:22.960
to Yoshua Bengio's recent paper on the consciousness prior,
link |
01:23:26.760
except I didn't wanna wrap up consciousness in it
link |
01:23:30.440
because to me, the qualia problem and subjective experience
link |
01:23:34.240
is a very interesting issue also,
link |
01:23:35.880
which we can chat about,
link |
01:23:37.880
but I would rather keep that philosophical debate distinct
link |
01:23:43.200
from the debate of what kind of biases
link |
01:23:45.240
do you wanna put in a general intelligence
link |
01:23:47.040
to give it human like general intelligence.
link |
01:23:49.800
And I'm not sure Yoshua Bengio is really addressing
link |
01:23:53.320
that kind of consciousness.
link |
01:23:55.080
He's just using the term.
link |
01:23:56.560
I love Yoshua to pieces.
link |
01:23:58.600
Like he's by far my favorite of the lions of deep learning.
link |
01:24:02.960
Yeah.
link |
01:24:03.800
He's such a good hearted guy.
link |
01:24:05.800
He's a good human being.
link |
01:24:07.000
Yeah, for sure.
link |
01:24:07.840
I am not sure he has plumbed the depths
link |
01:24:11.200
of the philosophy of consciousness.
link |
01:24:13.520
No, he's using it as a sexy term.
link |
01:24:15.040
Yeah, yeah, yeah.
link |
01:24:15.880
So what I called it was the embodied communication prior.
link |
01:24:21.160
Can you maybe explain it a little bit?
link |
01:24:22.520
Yeah, yeah.
link |
01:24:23.360
What I meant was, what are we humans evolved for?
link |
01:24:26.640
You can say being human, but that's very abstract, right?
link |
01:24:29.720
I mean, our minds control individual bodies,
link |
01:24:32.960
which are autonomous agents moving around in a world
link |
01:24:36.920
that's composed largely of solid objects, right?
link |
01:24:41.280
And we've also evolved to communicate via language
link |
01:24:46.240
with other solid object agents that are going around
link |
01:24:49.960
doing things collectively with us
link |
01:24:52.200
in a world of solid objects.
link |
01:24:54.400
And these things are very obvious,
link |
01:24:56.920
but if you compare them to the scope
link |
01:24:58.400
of all possible intelligences
link |
01:25:01.400
or even all possible intelligences
link |
01:25:03.120
that are physically realizable,
link |
01:25:05.400
that actually constrains things a lot.
link |
01:25:07.400
So if you start to look at how would you realize
link |
01:25:13.000
some specialized or constrained version
link |
01:25:15.880
of universal general intelligence
link |
01:25:18.360
in a system that has limited memory
link |
01:25:21.160
and limited speed of processing,
link |
01:25:23.160
but whose general intelligence will be biased
link |
01:25:26.200
toward controlling a solid object agent,
link |
01:25:28.840
which is mobile in a solid object world
link |
01:25:31.360
for manipulating solid objects
link |
01:25:33.480
and communicating via language with other similar agents
link |
01:25:38.560
in that same world, right?
link |
01:25:39.920
Then starting from that,
link |
01:25:41.560
you're starting to get a requirements analysis
link |
01:25:43.640
for human level general intelligence.
link |
01:25:48.120
And then that leads you into cognitive science
link |
01:25:50.920
and you can look at, say, what are the different types
link |
01:25:53.080
of memory that the human mind and brain has?
link |
01:25:56.960
And this has matured over the last decades
link |
01:26:00.840
and I got into this a lot.
link |
01:26:02.920
So after getting my PhD in math,
link |
01:26:04.600
I was an academic for eight years.
link |
01:26:06.080
I was in departments of mathematics,
link |
01:26:08.720
computer science, and psychology.
link |
01:26:11.320
When I was in the psychology department
link |
01:26:12.760
at the University of Western Australia,
link |
01:26:14.240
I was focused on cognitive science of memory and perception.
link |
01:26:18.720
Actually, I was teaching neural nets and deep neural nets
link |
01:26:21.280
and it was multi layer perceptrons, right?
link |
01:26:23.600
Psychology?
link |
01:26:24.640
Yeah.
link |
01:26:25.800
Cognitive science, it was cross disciplinary
link |
01:26:27.880
among engineering, math, psychology, philosophy,
link |
01:26:31.280
linguistics, computer science.
link |
01:26:33.280
But yeah, we were teaching psychology students
link |
01:26:35.960
to try to model the data from human cognition experiments
link |
01:26:40.040
using multi layer perceptrons,
link |
01:26:42.080
which was the early version of a deep neural network.
link |
01:26:45.040
Very, very, yeah, recurrent back prop
link |
01:26:47.880
was very, very slow to train back then, right?
link |
01:26:51.200
So this is the study of these constrained systems
link |
01:26:53.920
that are supposed to deal with physical objects.
link |
01:26:55.640
So if you look at cognitive psychology,
link |
01:27:01.480
you can see there's multiple types of memory,
link |
01:27:04.520
which are to some extent represented
link |
01:27:06.560
by different subsystems in the human brain.
link |
01:27:08.480
So we have episodic memory,
link |
01:27:10.360
which takes into account our life history
link |
01:27:13.520
and everything that's happened to us.
link |
01:27:15.240
We have declarative or semantic memory,
link |
01:27:17.320
which is like facts and beliefs abstracted
link |
01:27:20.080
from the particular situations that they occurred in.
link |
01:27:22.840
There's sensory memory, which to some extent
link |
01:27:26.120
is sense modality specific,
link |
01:27:27.600
and then to some extent is unified across sense modalities.
link |
01:27:33.360
There's procedural memory, memory of how to do stuff,
link |
01:27:36.120
like how to swing the tennis racket, right?
link |
01:27:38.160
Which is, there's motor memory,
link |
01:27:39.920
but it's also a little more abstract than motor memory.
link |
01:27:43.640
It involves cerebellum and cortex working together.
link |
01:27:47.520
Then there's memory linkage with emotion
link |
01:27:51.600
which has to do with linkages of cortex and limbic system.
link |
01:27:55.920
There's specifics of spatial and temporal modeling
link |
01:27:59.160
connected with memory, which has to do with hippocampus
link |
01:28:02.760
and thalamus connecting to cortex.
link |
01:28:05.360
And the basal ganglia, which influences goals.
link |
01:28:08.160
So we have specific memory of what goals,
link |
01:28:10.960
subgoals and sub subgoals we wanted to pursue
link |
01:28:13.160
in which context in the past.
link |
01:28:15.040
Human brain has substantially different subsystems
link |
01:28:18.240
for these different types of memory
link |
01:28:21.040
and substantially differently tuned learning,
link |
01:28:24.240
like differently tuned modes of longterm potentiation
link |
01:28:27.280
to do with the types of neurons and neurotransmitters
link |
01:28:29.720
in the different parts of the brain
link |
01:28:31.280
corresponding to these different types of knowledge.
link |
01:28:33.040
And these different types of memory and learning
link |
01:28:35.880
in the human brain, I mean, you can trace these all
link |
01:28:38.520
back to embodied communication for controlling agents
link |
01:28:41.920
in worlds of solid objects.
link |
01:28:44.680
Now, so if you look at building an AGI system,
link |
01:28:47.720
one way to do it, which starts more from cognitive science
link |
01:28:50.440
than neuroscience is to say,
link |
01:28:52.680
okay, what are the types of memory
link |
01:28:55.240
that are necessary for this kind of world?
link |
01:28:57.360
Yeah, yeah, necessary for this sort of intelligence.
link |
01:29:00.720
What types of learning work well
link |
01:29:02.760
with these different types of memory?
link |
01:29:04.600
And then how do you connect all these things together, right?
link |
01:29:07.800
And of course the human brain did it incrementally
link |
01:29:10.800
through evolution because each of the sub networks
link |
01:29:14.360
of the brain, I mean, it's not really the lobes
link |
01:29:16.680
of the brain, it's the sub networks,
link |
01:29:18.240
each of which is widely distributed,
link |
01:29:20.800
each of the sub networks of the brain
link |
01:29:23.680
co evolves with the other sub networks of the brain,
link |
01:29:27.160
both in terms of its patterns of organization
link |
01:29:29.480
and the particulars of the neurophysiology.
link |
01:29:31.840
So they all grew up communicating
link |
01:29:33.440
and adapting to each other.
link |
01:29:34.440
It's not like they were separate black boxes
link |
01:29:36.720
that were then glommed together, right?
link |
01:29:40.200
Whereas as engineers, we would tend to say,
link |
01:29:43.320
let's make the declarative memory box here
link |
01:29:46.680
and the procedural memory box here
link |
01:29:48.440
and the perception box here and wire them together.
link |
01:29:51.400
And when you can do that, it's interesting.
link |
01:29:54.120
I mean, that's how a car is built, right?
link |
01:29:55.680
But on the other hand, that's clearly not
link |
01:29:58.560
how biological systems are made.
link |
01:30:01.400
The parts co evolve so as to adapt and work together.
link |
01:30:05.360
That's by the way, how every human engineered system
link |
01:30:09.240
that flies, which we were using as an analogy
link |
01:30:11.640
before, is built as well.
link |
01:30:13.000
So do you find this at all appealing?
link |
01:30:14.440
Like there's been a lot of really exciting work,
link |
01:30:16.680
which I find strange that it's ignored,
link |
01:30:20.160
in cognitive architectures, for example,
link |
01:30:21.880
throughout the last few decades.
link |
01:30:23.320
Do you find that?
link |
01:30:24.320
Yeah, I mean, I had a lot to do with that community
link |
01:30:27.960
and you know, Paul Rosenbloom, who was one of the creators,
link |
01:30:31.000
and John Laird, who built the SOAR architecture,
link |
01:30:33.480
are friends of mine.
link |
01:30:34.640
And I learned SOAR quite well
link |
01:30:37.160
and ACT-R and these different cognitive architectures.
link |
01:30:39.440
And how I was looking at the AI world about 10 years ago
link |
01:30:44.520
before this whole commercial deep learning explosion was,
link |
01:30:47.840
on the one hand, you had these cognitive architecture guys
link |
01:30:51.560
who were working closely with psychologists
link |
01:30:53.480
and cognitive scientists who had thought a lot
link |
01:30:55.760
about how the different parts of a human like mind
link |
01:30:58.840
should work together.
link |
01:31:00.400
On the other hand, you had these learning theory guys
link |
01:31:03.600
who didn't care at all about the architecture,
link |
01:31:06.040
but were just thinking about, like,
link |
01:31:07.360
how do you recognize patterns in large amounts of data?
link |
01:31:10.280
And in some sense, what you needed to do
link |
01:31:14.560
was to get the learning that the learning theory guys
link |
01:31:18.440
were doing and put it together with the architecture
link |
01:31:21.440
that the cognitive architecture guys were doing.
link |
01:31:24.240
And then you would have what you needed.
link |
01:31:25.960
Now, you can't, unfortunately, when you look at the details,
link |
01:31:31.600
you can't just do that without totally rebuilding
link |
01:31:34.960
what is happening on both the cognitive architecture
link |
01:31:37.840
and the learning side.
link |
01:31:38.760
So, I mean, they tried to do that in SOAR,
link |
01:31:41.760
but what they ultimately did is like,
link |
01:31:43.960
take a deep neural net or something for perception
link |
01:31:46.560
and you include it as one of the black boxes.
link |
01:31:50.800
It becomes one of the boxes.
link |
01:31:51.960
The learning mechanism becomes one of the boxes
link |
01:31:53.800
as opposed to fundamental part of the system.
link |
01:31:57.440
You could look at some of the stuff DeepMind has done,
link |
01:32:00.400
like the differentiable neural computer or something
link |
01:32:03.240
that sort of has a neural net for deep learning perception.
link |
01:32:07.080
It has another neural net, which is like a memory matrix
link |
01:32:10.640
that stores, say, the map of the London subway or something.
link |
01:32:13.080
So probably Demis Hassabis was thinking about this
link |
01:32:16.440
like part of cortex and part of hippocampus
link |
01:32:18.520
because hippocampus has a spatial map.
link |
01:32:20.440
And when he was a neuroscientist,
link |
01:32:21.720
he was doing a bunch on cortex hippocampus interconnection.
link |
01:32:24.600
So there, the DNC would be an example of folks
link |
01:32:27.320
from the deep neural net world trying to take a step
link |
01:32:30.160
in the cognitive architecture direction
link |
01:32:32.200
by having two neural modules that correspond roughly
link |
01:32:35.000
to two different parts of the human brain
link |
01:32:36.720
that deal with different kinds of memory and learning.
link |
01:32:38.920
But on the other hand, it's super, super, super crude
link |
01:32:42.000
from the cognitive architecture view, right?
link |
01:32:44.360
Just as what John Laird and Soar did with neural nets
link |
01:32:48.080
was super, super crude from a learning point of view
link |
01:32:51.200
because the learning was like off to the side,
link |
01:32:53.360
not affecting the core representations, right?
link |
01:32:55.880
I mean, you weren't learning the representation.
link |
01:32:57.880
You were learning the data that feeds into the...
link |
01:33:00.080
You were learning abstractions of perceptual data
link |
01:33:02.600
to feed into the representation that was not learned, right?
link |
01:33:06.560
So yeah, this was clear to me a while ago.
link |
01:33:11.000
And one of my hopes with the AGI community
link |
01:33:14.240
was to sort of bring people
link |
01:33:15.960
from those two directions together.
link |
01:33:19.320
That didn't happen much in terms of...
link |
01:33:21.920
Not yet.
link |
01:33:22.760
And what I was gonna say is it didn't happen
link |
01:33:24.520
in terms of bringing like the lions
link |
01:33:26.360
of cognitive architecture together
link |
01:33:28.560
with the lions of deep learning.
link |
01:33:30.480
It did work in the sense that a bunch of younger researchers
link |
01:33:33.760
have had their heads filled with both of those ideas.
link |
01:33:35.760
This comes back to a saying my dad,
link |
01:33:38.840
who was a university professor, often quoted to me,
link |
01:33:41.360
which was, science advances one funeral at a time,
link |
01:33:45.840
which I'm trying to avoid.
link |
01:33:47.840
Like I'm 53 years old and I'm trying to invent
link |
01:33:51.320
amazing, weird ass new things
link |
01:33:53.480
that nobody ever thought about,
link |
01:33:56.160
which we'll talk about in a few minutes.
link |
01:33:59.240
But there is that aspect, right?
link |
01:34:02.280
Like the people who've been at AI a long time
link |
01:34:05.680
and have made their career developing one aspect,
link |
01:34:08.760
like a cognitive architecture or a deep learning approach,
link |
01:34:12.880
it can be hard once you're old
link |
01:34:14.760
and have made your career doing one thing,
link |
01:34:17.280
it can be hard to mentally shift gears.
link |
01:34:19.640
I mean, I try quite hard to remain flexible minded.
link |
01:34:23.640
Have you been successful somewhat in changing,
link |
01:34:26.480
maybe, have you changed your mind on some aspects
link |
01:34:29.640
of what it takes to build an AGI, like technical things?
link |
01:34:32.920
The hard part is that the world doesn't want you to.
link |
01:34:36.040
The world or your own brain?
link |
01:34:37.360
The world, well, that one point
link |
01:34:39.560
is that your brain doesn't want to.
link |
01:34:41.040
The other part is that the world doesn't want you to.
link |
01:34:43.480
Like the people who have followed your ideas
link |
01:34:46.520
get mad at you if you change your mind.
link |
01:34:49.280
And the media wants to pigeonhole you as an avatar
link |
01:34:54.560
of a certain idea.
link |
01:34:57.160
But yeah, I've changed my mind on a bunch of things.
link |
01:35:01.480
I mean, when I started my career,
link |
01:35:03.800
I really thought quantum computing
link |
01:35:05.240
would be necessary for AGI.
link |
01:35:07.920
And I doubt it's necessary now,
link |
01:35:10.800
although I think it will be a super major enhancement.
link |
01:35:14.680
But I mean, I'm now in the middle of embarking
link |
01:35:19.360
on the complete rethink and rewrite from scratch
link |
01:35:23.400
of our OpenCog AGI system together with Alexey Potapov
link |
01:35:28.480
and his team in St. Petersburg,
link |
01:35:29.840
who's working with me in SingularityNet.
link |
01:35:31.600
So now we're trying to like go back to basics,
link |
01:35:35.680
take everything we learned from working
link |
01:35:37.800
with the current OpenCog system,
link |
01:35:39.600
take everything everybody else has learned
link |
01:35:41.880
from working with their proto AGI systems
link |
01:35:45.680
and design the best framework for the next stage.
link |
01:35:50.000
And I do think there's a lot to be learned
link |
01:35:53.320
from the recent successes with deep neural nets
link |
01:35:56.800
and deep reinforcement systems.
link |
01:35:59.000
I mean, people made these essentially trivial systems
link |
01:36:02.680
work much better than I thought they would.
link |
01:36:04.840
And there's a lot to be learned from that.
link |
01:36:07.080
And I wanna incorporate that knowledge appropriately
link |
01:36:10.720
in our OpenCog 2.0 system.
link |
01:36:13.520
On the other hand, I also think current deep neural net
link |
01:36:18.520
architectures as such will never get you anywhere near AGI.
link |
01:36:22.240
So I think you wanna avoid the pathology
link |
01:36:25.080
of throwing the baby out with the bathwater
link |
01:36:28.360
and like saying, well, these things are garbage
link |
01:36:30.880
because foolish journalists overblow them
link |
01:36:33.840
as being the path to AGI
link |
01:36:37.040
and a few researchers overblow them as well.
link |
01:36:41.600
There's a lot of interesting stuff to be learned there
link |
01:36:45.440
even though those are not the golden path.
link |
01:36:48.000
So maybe this is a good chance to step back.
link |
01:36:50.160
You mentioned OpenCog 2.0, but...
link |
01:36:52.920
Go back to OpenCog 0.0, which exists now.
link |
01:36:56.040
Alpha, yeah.
link |
01:36:58.440
Yeah, maybe talk through the history of OpenCog
link |
01:37:01.920
and your thinking about these ideas.
link |
01:37:03.920
I would say OpenCog 2.0 is a term we're throwing around
link |
01:37:10.120
sort of tongue in cheek because the existing OpenCog system
link |
01:37:14.560
that we're working on now is not remotely close
link |
01:37:17.200
to what we'd consider a 1.0, right?
link |
01:37:20.000
I mean, it's an early...
link |
01:37:23.360
It's been around, what, 13 years or something,
link |
01:37:27.400
but it's still an early stage research system, right?
link |
01:37:29.800
And actually, we are going back to the beginning
link |
01:37:37.360
in terms of theory and implementation
link |
01:37:40.680
because we feel like that's the right thing to do,
link |
01:37:42.840
but I'm sure what we end up with is gonna have
link |
01:37:45.560
a huge amount in common with the current system.
link |
01:37:48.560
I mean, we all still like the general approach.
link |
01:37:51.640
So first of all, what is OpenCog?
link |
01:37:54.400
Sure, OpenCog is an open source software project
link |
01:37:59.800
that I launched together with several others in 2008
link |
01:38:04.400
and probably the first code written toward that
link |
01:38:08.280
was written in 2001 or two or something
link |
01:38:11.160
that was developed as a proprietary code base
link |
01:38:15.320
within my AI company, Novamente LLC.
link |
01:38:18.280
Then we decided to open source it in 2008,
link |
01:38:22.000
cleaned up the code, threw out some things
link |
01:38:23.840
and added some new things and...
link |
01:38:26.920
What language is it written in?
link |
01:38:28.080
It's C++.
link |
01:38:29.440
Primarily, there's a bunch of Scheme as well,
link |
01:38:31.400
but most of it's C++.
link |
01:38:33.040
And it's separate from something we'll also talk about,
link |
01:38:36.520
the SingularityNet.
link |
01:38:37.480
So it was born as a non networked thing.
link |
01:38:41.360
Correct, correct.
link |
01:38:42.400
Well, there are many levels of networks involved here.
link |
01:38:47.000
No connectivity to the internet, or no, at birth.
link |
01:38:52.000
Yeah, I mean, SingularityNet is a separate project
link |
01:38:57.240
and a separate body of code.
link |
01:38:59.440
And you can use SingularityNet as part of the infrastructure
link |
01:39:02.600
for a distributed OpenCog system,
link |
01:39:04.480
but there are different layers.
link |
01:39:07.520
Yeah, got it.
link |
01:39:08.360
So OpenCog on the one hand as a software framework
link |
01:39:14.840
could be used to implement a variety
link |
01:39:17.000
of different AI architectures and algorithms,
link |
01:39:21.840
but in practice, there's been a group of developers
link |
01:39:26.440
which I've been leading together with Linas Vepstas,
link |
01:39:29.440
Nil Geisweiler, and a few others,
link |
01:39:31.680
which have been using the OpenCog platform
link |
01:39:35.080
and infrastructure to implement certain ideas
link |
01:39:39.440
about how to make an AGI.
link |
01:39:41.280
So there's been a little bit of ambiguity
link |
01:39:43.480
about OpenCog, the software platform
link |
01:39:46.120
versus OpenCog, the AGI design,
link |
01:39:49.360
because in theory, you could use that software to do,
link |
01:39:52.160
you could use it to make a neural net.
link |
01:39:53.440
You could use it to make a lot of different AGI.
link |
01:39:55.880
What kind of stuff does the software platform provide,
link |
01:39:58.640
like in terms of utilities, tools, like what?
link |
01:40:00.760
Yeah, let me first tell about OpenCog
link |
01:40:03.840
as a software platform,
link |
01:40:05.520
and then I'll tell you the specific AGI R&D
link |
01:40:08.680
we've been building on top of it.
link |
01:40:12.240
So the core component of OpenCog as a software platform
link |
01:40:16.200
is what we call the atom space,
link |
01:40:17.920
which is a weighted labeled hypergraph.
link |
01:40:21.240
ATOM, atom space.
link |
01:40:22.880
Atom space, yeah, yeah, not Adam, like Adam and Eve,
link |
01:40:25.880
although that would be cool too.
link |
01:40:28.080
Yeah, so you have a hypergraph, which is like,
link |
01:40:32.120
so a graph in this sense is a bunch of nodes
link |
01:40:35.360
with links between them.
link |
01:40:37.120
A hypergraph is like a graph,
link |
01:40:40.960
but links can go between more than two nodes.
link |
01:40:43.960
So you can have a link between three nodes.
link |
01:40:45.520
And in fact, OpenCog's atom space
link |
01:40:49.560
would properly be called a metagraph
link |
01:40:51.760
because you can have links pointing to links,
link |
01:40:54.080
or you could have links pointing to whole subgraphs, right?
link |
01:40:56.840
So it's an extended hypergraph or a metagraph.
link |
01:41:00.920
Is metagraph a technical term?
link |
01:41:02.280
It is now a technical term.
link |
01:41:03.640
Interesting.
link |
01:41:04.480
But I don't think it was yet a technical term
link |
01:41:06.360
when we started calling this a generalized hypergraph.
link |
01:41:10.080
But in any case, it's a weighted labeled
link |
01:41:13.400
generalized hypergraph or weighted labeled metagraph.
link |
01:41:16.920
The weights and labels mean that the nodes and links
link |
01:41:19.200
can have numbers and symbols attached to them.
link |
01:41:22.360
So they can have types on them.
link |
01:41:24.920
They can have numbers on them that represent,
link |
01:41:27.440
say, a truth value or an importance value
link |
01:41:30.120
for a certain purpose.
link |
01:41:32.000
And of course, like with all things,
link |
01:41:33.240
you can reduce that to a hypergraph,
link |
01:41:35.080
and then the hypergraph can be reduced to a graph.
link |
01:41:35.920
You can reduce hypergraph to a graph,
link |
01:41:37.680
and you could reduce a graph to an adjacency matrix.
link |
01:41:39.880
So, I mean, there's always multiple representations.
link |
01:41:42.720
But there's a layer of representation
link |
01:41:44.000
that seems to work well here.
link |
01:41:45.120
Got it.
link |
01:41:45.960
Right, right, right.
link |
01:41:46.800
And so similarly, you could have a link to a whole graph
link |
01:41:52.080
because a whole graph could represent,
link |
01:41:53.440
say, a body of information.
link |
01:41:54.920
And I could say, I reject this body of information.
link |
01:41:58.640
Then one way to do that is make that link
link |
01:42:00.320
go to that whole subgraph representing
link |
01:42:02.000
the body of information, right?
link |
01:42:04.040
I mean, there are many alternate representations,
link |
01:42:07.200
but that's, anyway, what we have in OpenCog,
link |
01:42:10.720
we have an atom space, which is this weighted, labeled,
link |
01:42:13.160
generalized hypergraph.
link |
01:42:15.080
Knowledge store, it lives in RAM.
link |
01:42:17.840
There's also a way to back it up to disk.
link |
01:42:20.120
There are ways to spread it among
link |
01:42:22.320
multiple different machines.
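To make that concrete, here is a minimal sketch in Python of a weighted, labeled metagraph store of this general kind. It is purely illustrative: the class and field names are assumptions for the example, not OpenCog's actual Atomese API.

```python
# Minimal sketch of a weighted, labeled metagraph ("atom space"-like store).
# Illustrative only; names and fields are assumptions, not OpenCog's real API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    type: str   # e.g. "ConceptNode", "VariableNode"
    name: str   # label, e.g. "cat"

@dataclass(frozen=True)
class Link:
    type: str       # e.g. "InheritanceLink", "HebbianLink"
    targets: tuple  # may contain Nodes OR other Links (that's the "meta" part)

class AtomSpace:
    """In-RAM store mapping each atom to numeric values (truth, importance)."""
    def __init__(self):
        self._values = {}  # atom -> dict of named numeric values

    def add(self, atom, **values):
        self._values.setdefault(atom, {}).update(values)
        return atom

    def remove(self, atom):
        self._values.pop(atom, None)

    def get(self, atom):
        return self._values.get(atom, {})

    def atoms(self):
        return list(self._values)

# Usage: an inheritance link carrying a truth value, plus a link whose
# target is another link (so statements about statements are representable).
space = AtomSpace()
cat = space.add(Node("ConceptNode", "cat"))
animal = space.add(Node("ConceptNode", "animal"))
inh = space.add(Link("InheritanceLink", (cat, animal)), strength=0.95, confidence=0.9)
space.add(Link("RejectedLink", (inh,)))
```

Adding and removing atoms is as central an operation in such a store as updating the numeric values, which is the self-modifying aspect that comes up again below.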
link |
01:42:24.120
Then there are various utilities for dealing with that.
link |
01:42:27.960
So there's a pattern matcher,
link |
01:42:29.800
which lets you specify a sort of abstract pattern
link |
01:42:33.880
and then search through a whole atom space
link |
01:42:36.200
with labeled hypergraph to see what subhypergraphs
link |
01:42:39.800
may match that pattern, for example.
link |
01:42:42.880
So that's, then there's something called
link |
01:42:45.920
the CogServer in OpenCog,
link |
01:42:48.760
which lets you run a bunch of different agents
link |
01:42:52.560
or processes in a scheduler.
link |
01:42:55.880
And each of these agents, basically it reads stuff
link |
01:42:59.160
from the atom space and it writes stuff to the atom space.
link |
01:43:01.880
So this is sort of the basic operational model.
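A rough sketch of that operational model, building on the toy AtomSpace above (the agent and scheduler names are assumptions, not the real CogServer interfaces):

```python
# Sketch of agents cooperating by reading and writing a shared atom space.
# Builds on the toy AtomSpace/Node/Link classes from the previous sketch.

def match_links(space, link_type):
    """Toy pattern matcher: yield every link of the requested type."""
    for atom in space.atoms():
        if isinstance(atom, Link) and atom.type == link_type:
            yield atom

class DeductionAgent:
    """If A->B and B->C are present, write A->C back into the atom space."""
    def step(self, space):
        links = list(match_links(space, "InheritanceLink"))
        for ab in links:
            for bc in links:
                if ab.targets[1] == bc.targets[0] and ab.targets[0] != bc.targets[1]:
                    space.add(Link("InheritanceLink", (ab.targets[0], bc.targets[1])))

class Scheduler:
    """CogServer-like loop: each agent gets a turn against the shared store."""
    def __init__(self, space, agents):
        self.space, self.agents = space, agents
    def run(self, cycles):
        for _ in range(cycles):
            for agent in self.agents:
                agent.step(self.space)

demo = AtomSpace()
a, b, c = (demo.add(Node("ConceptNode", n)) for n in "ABC")
demo.add(Link("InheritanceLink", (a, b)))
demo.add(Link("InheritanceLink", (b, c)))
Scheduler(demo, [DeductionAgent()]).run(cycles=2)
# demo now also contains InheritanceLink(A, C), produced by the agent.
```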
link |
01:43:05.640
That's the software framework.
link |
01:43:07.760
And of course that's, there's a lot there
link |
01:43:10.360
just from a scalable software engineering standpoint.
link |
01:43:13.200
So you could use this, I don't know if you've,
link |
01:43:15.080
have you looked into the Stephen Wolfram's physics project
link |
01:43:18.000
recently with the hypergraphs and stuff?
link |
01:43:20.160
Could you theoretically use like the software framework
link |
01:43:22.840
to play with it? You certainly could,
link |
01:43:23.800
although Wolfram would rather die
link |
01:43:26.160
than use anything but Mathematica for his work.
link |
01:43:29.080
Well that's, yeah, but there's a big community of people
link |
01:43:32.120
who are, you know, would love integration.
link |
01:43:36.080
Like you said, the young minds love the idea
link |
01:43:38.400
of integrating, of connecting things.
link |
01:43:40.440
Yeah, that's right.
link |
01:43:41.280
And I would add on that note,
link |
01:43:42.840
the idea of using hypergraph type models in physics
link |
01:43:46.600
is not very new.
link |
01:43:47.680
Like if you look at...
link |
01:43:49.120
The Russians did it first.
link |
01:43:50.360
Well, I'm sure they did.
link |
01:43:52.200
And a guy named Ben Dribus, who's a mathematician,
link |
01:43:55.880
a professor in Louisiana or somewhere,
link |
01:43:58.200
had a beautiful book on quantum sets and hypergraphs
link |
01:44:01.960
and algebraic topology for discrete models of physics.
link |
01:44:05.520
And carried it much farther than Wolfram has,
link |
01:44:09.080
but he's not rich and famous,
link |
01:44:10.920
so it didn't get in the headlines.
link |
01:44:13.280
But yeah, Wolfram aside, yeah,
link |
01:44:15.280
certainly that's a good way to put it.
link |
01:44:17.120
The whole OpenCog framework,
link |
01:44:19.280
you could use it to model biological networks
link |
01:44:22.200
and simulate biology processes.
link |
01:44:24.200
You could use it to model physics
link |
01:44:26.480
on discrete graph models of physics.
link |
01:44:30.160
So you could use it to do, say, biologically realistic
link |
01:44:36.840
neural networks, for example.
link |
01:44:39.280
And that's a framework.
link |
01:44:42.360
What do agents and processes do?
link |
01:44:44.240
Do they grow the graph?
link |
01:44:45.880
What kind of computations, just to get a sense,
link |
01:44:48.200
are they supposed to do?
link |
01:44:49.040
So in theory, they could do anything they want to do.
link |
01:44:51.200
They're just C++ processes.
link |
01:44:53.320
On the other hand, the computation framework
link |
01:44:56.880
is sort of designed for agents
link |
01:44:59.160
where most of their processing time
link |
01:45:02.000
is taken up with reads and writes to the atom space.
link |
01:45:05.400
And so that's a very different processing model
link |
01:45:09.000
than, say, the matrix multiplication based model
link |
01:45:12.440
that underlies most deep learning systems, right?
link |
01:45:15.080
So you could create an agent
link |
01:45:19.560
that just factored numbers for a billion years.
link |
01:45:22.720
It would run within the OpenCog platform,
link |
01:45:25.000
but it would be pointless, right?
link |
01:45:26.600
I mean, the point of doing OpenCog
link |
01:45:28.880
is because you want to make agents
link |
01:45:30.520
that are cooperating via reading and writing
link |
01:45:33.160
into this weighted labeled hypergraph, right?
link |
01:45:36.400
And that has both cognitive architecture importance
link |
01:45:41.560
because then this hypergraph is being used
link |
01:45:43.400
as a sort of shared memory
link |
01:45:46.040
among different cognitive processes,
link |
01:45:48.240
but it also has software and hardware
link |
01:45:51.000
implementation implications
link |
01:45:52.840
because current GPU architectures
link |
01:45:54.840
are not so useful for OpenCog,
link |
01:45:57.120
whereas a graph chip would be incredibly useful, right?
link |
01:46:01.200
And I think Graphcore has those now,
link |
01:46:03.640
but they're not ideally suited for this.
link |
01:46:05.240
But I think in the next, let's say, three to five years,
link |
01:46:10.640
we're gonna see new chips
link |
01:46:12.000
where like a graph is put on the chip
link |
01:46:14.680
and the back and forth between multiple processes
link |
01:46:19.320
acting SIMD and MIMD on that graph is gonna be fast.
link |
01:46:23.600
And then that may do for OpenCog type architectures
link |
01:46:26.480
what GPUs did for deep neural architecture.
link |
01:46:29.840
It's a small tangent.
link |
01:46:31.320
Can you comment on thoughts about neuromorphic computing?
link |
01:46:34.600
So like hardware implementations
link |
01:46:36.400
of all these different kind of, are you interested?
link |
01:46:39.360
Are you excited by that possibility?
link |
01:46:41.000
I'm excited by graph processors
link |
01:46:42.680
because I think they can massively speed up OpenCog,
link |
01:46:46.440
which is a class of architectures that I'm working on.
link |
01:46:50.680
I think if, you know, in principle, neuromorphic computing
link |
01:46:57.240
should be amazing.
link |
01:46:58.760
I haven't yet been fully sold
link |
01:47:00.480
on any of the systems that are out.
link |
01:47:03.320
They're like, memristors should be amazing too, right?
link |
01:47:06.400
So a lot of these things have obvious potential,
link |
01:47:09.400
but I haven't yet put my hands on a system
link |
01:47:11.360
that seemed to manifest that.
link |
01:47:13.280
Memristors should be amazing,
link |
01:47:14.880
but the current systems have not been great.
link |
01:47:17.880
Yeah, I mean, look, for example,
link |
01:47:19.640
if you wanted to make a biologically realistic
link |
01:47:23.960
hardware neural network,
link |
01:47:25.680
like making a circuit in hardware
link |
01:47:31.520
that emulated like the Hodgkin–Huxley equation
link |
01:47:34.360
or the Izhikevich equation,
link |
01:47:35.640
like differential equations
link |
01:47:38.240
for a biologically realistic neuron
link |
01:47:40.680
and putting that in hardware on the chip,
link |
01:47:43.800
that would seem that it would make more feasible
link |
01:47:46.360
to make a large scale, truly biologically realistic
link |
01:47:50.320
neural network.
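For reference, the Hodgkin–Huxley membrane equation he is alluding to has the standard form

$$C_m \frac{dV}{dt} = I_{\text{ext}} - \bar{g}_{\text{Na}}\, m^3 h\,(V - E_{\text{Na}}) - \bar{g}_{\text{K}}\, n^4 (V - E_{\text{K}}) - \bar{g}_{\text{L}}\,(V - E_{\text{L}}),$$

with each gating variable $x \in \{m, h, n\}$ evolving as $\dot{x} = \alpha_x(V)(1 - x) - \beta_x(V)\,x$. Putting those coupled nonlinear differential equations directly into circuitry, rather than a simplified point-neuron abstraction, is what "biologically realistic" hardware would mean here.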
link |
01:47:51.160
Now, what's been done so far is not like that.
link |
01:47:54.320
So I guess personally, as a researcher,
link |
01:47:57.120
I mean, I've done a bunch of work in computational neuroscience
link |
01:48:02.480
where I did some work with IARPA in DC,
link |
01:48:05.600
the Intelligence Advanced Research Projects Activity.
link |
01:48:08.240
We were looking at how do you make
link |
01:48:10.880
a biologically realistic simulation
link |
01:48:13.000
of seven different parts of the brain
link |
01:48:15.720
cooperating with each other,
link |
01:48:17.080
using like realistic nonlinear dynamical models of neurons,
link |
01:48:20.440
and how do you get that to simulate
link |
01:48:21.920
what's going on in the mind of a geo intelligence analyst
link |
01:48:24.800
while they're trying to find terrorists on a map, right?
link |
01:48:27.160
So if you want to do something like that,
link |
01:48:29.880
having neuromorphic hardware that really let you simulate
link |
01:48:34.080
like a realistic model of the neuron would be amazing.
link |
01:48:38.720
But that's sort of with my computational neuroscience
link |
01:48:42.280
hat on, right?
link |
01:48:43.120
With an AGI hat on, I'm just more interested
link |
01:48:47.160
in these hypergraph knowledge representation
link |
01:48:50.200
based architectures, which would benefit more
link |
01:48:54.480
from various types of graph processors
link |
01:48:57.720
because the main processing bottleneck
link |
01:49:00.480
is reading writing to RAM.
link |
01:49:02.000
It's reading writing to the graph in RAM.
link |
01:49:03.960
The main processing bottleneck for this kind of
link |
01:49:06.120
proto AGI architecture is not multiplying matrices.
link |
01:49:09.840
And for that reason, GPUs, which are really good
link |
01:49:13.280
at multiplying matrices, don't apply as well.
link |
01:49:17.520
There are frameworks like Gunrock and others
link |
01:49:20.240
that try to boil down graph processing
link |
01:49:22.160
to matrix operations, and they're cool,
link |
01:49:24.640
but you're still putting a square peg
link |
01:49:26.160
into a round hole in a certain way.
link |
01:49:28.800
The same is true, I mean, current quantum machine learning,
link |
01:49:32.760
which is very cool.
link |
01:49:34.240
It's also all about how to get matrix and vector operations
link |
01:49:37.320
in quantum mechanics, and I see why that's natural to do.
link |
01:49:41.280
I mean, quantum mechanics is all unitary matrices
link |
01:49:44.240
and vectors, right?
link |
01:49:45.800
On the other hand, you could also try
link |
01:49:48.040
to make graph centric quantum computers,
link |
01:49:50.760
which I think is where things will go.
link |
01:49:54.400
And then we can have, then we can make,
link |
01:49:57.080
like take the open cog implementation layer,
link |
01:50:00.120
implement it in a collapsed state inside a quantum computer.
link |
01:50:04.000
But that may be the singularity squared, right?
link |
01:50:06.480
I'm not sure we need that to get to human level.
link |
01:50:12.360
That's already beyond the first singularity.
link |
01:50:14.680
But can we just go back to open cog?
link |
01:50:17.640
Yeah, and the hypergraph and open cog.
link |
01:50:20.040
That's the software framework, right?
link |
01:50:21.640
So the next thing is our cognitive architecture
link |
01:50:25.440
tells us particular algorithms to put there.
link |
01:50:27.960
Got it.
link |
01:50:28.800
Can we backtrack on the kind of, is this graph designed,
link |
01:50:33.720
is it in general supposed to be sparse
link |
01:50:37.680
and the operations constantly grow and change the graph?
link |
01:50:40.640
Yeah, the graph is sparse.
link |
01:50:42.320
But is it constantly adding links and so on?
link |
01:50:45.040
It is a self modifying hypergraph.
link |
01:50:47.200
So it's not, so the write and read operations
link |
01:50:49.800
you're referring to, this isn't just a fixed graph
link |
01:50:53.040
to which you change the way, it's a constantly growing graph.
link |
01:50:55.840
Yeah, that's true.
link |
01:50:58.000
So it is a different model than,
link |
01:51:03.000
say current deep neural nets
link |
01:51:04.680
which have a fixed neural architecture
link |
01:51:06.840
where you're updating the weights.
link |
01:51:08.600
Although there have been like cascade correlation
link |
01:51:10.880
neural net architectures that grow new nodes and links,
link |
01:51:13.920
but the most common neural architectures now
link |
01:51:16.640
have a fixed neural architecture,
link |
01:51:17.960
you're updating the weights.
link |
01:51:19.080
And then open cog, you can update the weights
link |
01:51:22.520
and that certainly happens a lot,
link |
01:51:24.760
but adding new nodes, adding new links,
link |
01:51:28.200
removing nodes and links is an equally critical part
link |
01:51:30.720
of the system's operations.
link |
01:51:32.160
Got it.
link |
01:51:33.000
So now when you start to add these cognitive algorithms
link |
01:51:37.040
on top of this open cog architecture,
link |
01:51:39.840
what does that look like?
link |
01:51:41.280
Yeah, so within this framework then,
link |
01:51:44.800
creating a cognitive architecture is basically two things.
link |
01:51:48.040
It's choosing what type system you wanna put
link |
01:51:52.080
on the nodes and links in the hypergraph,
link |
01:51:53.800
what types of nodes and links you want.
link |
01:51:56.120
And then it's choosing what collection of agents,
link |
01:52:01.000
what collection of AI algorithms or processes
link |
01:52:04.640
are gonna run to operate on this hypergraph.
link |
01:52:08.040
And of course those two decisions
link |
01:52:10.520
are closely connected to each other.
link |
01:52:13.920
So in terms of the type system,
link |
01:52:17.480
there are some links that are more neural net like,
link |
01:52:19.920
they just have weights that get updated
link |
01:52:22.360
by Hebbian learning, and activation spreads along them.
link |
01:52:26.000
There are other links that are more logic like
link |
01:52:29.080
and nodes that are more logic like.
link |
01:52:30.520
So you could have a variable node
link |
01:52:32.240
and you can have a node representing a universal
link |
01:52:34.240
or existential quantifier as in predicate logic
link |
01:52:37.680
or term logic.
link |
01:52:39.160
So you can have logic like nodes and links,
link |
01:52:42.080
or you can have neural like nodes and links.
link |
01:52:44.440
You can also have procedure like nodes and links
link |
01:52:47.400
as in say a combinatorial logic or Lambda calculus
link |
01:52:51.960
representing programs.
link |
01:52:53.680
So you can have nodes and links representing
link |
01:52:56.520
many different types of semantics,
link |
01:52:58.640
which means you could make a horrible ugly mess
link |
01:53:00.840
or you could make a system
link |
01:53:02.800
where these different types of knowledge
link |
01:53:04.280
all interpenetrate and synergize
link |
01:53:06.840
with each other beautifully, right?
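As a rough illustration of what that mixing of types can look like (toy Python again; the type names are stand-ins, not the actual Atomese type system), logic-like, neural-like and procedure-like atoms can all live in the same store:

```python
# Toy illustration of logic-like, neural-like and procedure-like atoms
# coexisting in one hypergraph. Type names are assumptions, not real Atomese.
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    type: str
    name: str = ""
    targets: tuple = ()

cat    = Atom("ConceptNode", "cat")
animal = Atom("ConceptNode", "animal")
x      = Atom("VariableNode", "$X")

# Logic-like: an inheritance relation and a quantified implication.
inh   = Atom("InheritanceLink", targets=(cat, animal))
quant = Atom("ForAllLink", targets=(x, Atom("ImplicationLink", targets=(
            Atom("EvaluationLink", "is_cat", (x,)),
            Atom("EvaluationLink", "is_animal", (x,))))))

# Neural-like: a Hebbian link whose weight gets updated by co-activation
# and along which attention can spread.
hebb = Atom("HebbianLink", targets=(cat, animal))

# Procedure-like: a small program as a lambda-calculus-style expression.
greet = Atom("LambdaLink", targets=(x, Atom("ExecutionLink", "say_hello_to", (x,))))
```

The point is not these particular names but that all three flavors are just nodes and links in one store, so an algorithm specialized for one flavor can still read and write the structures the others produce.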
link |
01:53:08.960
So the hypergraph can contain programs.
link |
01:53:12.800
Yeah, it can contain programs,
link |
01:53:14.440
although in the current version,
link |
01:53:17.960
it is a very inefficient way
link |
01:53:19.760
to guide the execution of programs,
link |
01:53:21.960
which is one thing that we are aiming to resolve
link |
01:53:25.000
with our rewrite of the system now.
link |
01:53:27.520
So what to you is the most beautiful aspect of OpenCog?
link |
01:53:32.720
Just to you personally,
link |
01:53:34.600
some aspect that captivates your imagination
link |
01:53:38.080
from beauty or power?
link |
01:53:42.000
What fascinates me is finding a common representation
link |
01:53:48.320
that underlies abstract, declarative knowledge
link |
01:53:53.320
and sensory knowledge and movement knowledge
link |
01:53:57.320
and procedural knowledge and episodic knowledge,
link |
01:54:00.760
finding the right level of representation
link |
01:54:03.960
where all these types of knowledge are stored
link |
01:54:06.560
in a sort of universal and interconvertible
link |
01:54:10.560
yet practically manipulable way, right?
link |
01:54:13.440
So to me, that's the core,
link |
01:54:16.840
because once you've done that,
link |
01:54:18.640
then the different learning algorithms
link |
01:54:20.800
can help each other out. Like what you want is,
link |
01:54:23.640
if you have a logic engine
link |
01:54:25.120
that helps with declarative knowledge
link |
01:54:26.840
and you have a deep neural net
link |
01:54:28.040
that gathers perceptual knowledge,
link |
01:54:29.960
and you have, say, an evolutionary learning system
link |
01:54:32.400
that learns procedures,
link |
01:54:34.120
you want these to not only interact
link |
01:54:36.600
on the level of sharing results
link |
01:54:38.880
and passing inputs and outputs to each other,
link |
01:54:41.120
you want the logic engine, when it gets stuck,
link |
01:54:43.680
to be able to share its intermediate state
link |
01:54:46.240
with the neural net and with the evolutionary system
link |
01:54:49.360
and with the evolutionary learning algorithm
link |
01:54:52.240
so that they can help each other out of bottlenecks
link |
01:54:55.440
and help each other solve combinatorial explosions
link |
01:54:58.320
by intervening inside each other's cognitive processes.
link |
01:55:02.040
But that can only be done
link |
01:55:03.520
if the intermediate state of a logic engine,
link |
01:55:05.960
the evolutionary learning engine,
link |
01:55:07.400
and a deep neural net are represented in the same form.
link |
01:55:11.120
And that's what we figured out how to do
link |
01:55:13.120
by putting the right type system
link |
01:55:14.800
on top of this weighted labeled hypergraph.
link |
01:55:17.040
So is there, can you maybe elaborate
link |
01:55:19.680
on what are the different characteristics
link |
01:55:21.880
of a type system that can coexist
link |
01:55:26.520
amongst all these different kinds of knowledge
link |
01:55:28.760
that needs to be represented?
link |
01:55:30.080
And is, I mean, like, is it hierarchical?
link |
01:55:34.280
Just any kind of insights you can give
link |
01:55:36.720
on that kind of type system?
link |
01:55:37.840
Yeah, yeah, so this gets very nitty gritty
link |
01:55:41.680
and mathematical, of course,
link |
01:55:44.000
but one key part is switching
link |
01:55:47.200
from predicate logic to term logic.
link |
01:55:50.440
What is predicate logic?
link |
01:55:51.640
What is term logic?
link |
01:55:53.200
So term logic was invented by Aristotle,
link |
01:55:56.080
or at least that's the oldest recollection we have of it.
link |
01:56:01.320
But term logic breaks down basic logic
link |
01:56:05.280
into basically simple links between nodes,
link |
01:56:07.480
like an inheritance link between node A and node B.
link |
01:56:12.480
So in term logic, the basic deduction operation
link |
01:56:16.280
is A implies B, B implies C, therefore A implies C.
link |
01:56:21.080
Whereas in predicate logic,
link |
01:56:22.600
the basic operation is modus ponens,
link |
01:56:24.520
like A implies B, therefore B.
link |
01:56:27.680
So it's a slightly different way of breaking down logic,
link |
01:56:31.440
but by breaking down logic into term logic,
link |
01:56:35.320
you get a nice way of breaking logic down
link |
01:56:37.440
into nodes and links.
link |
01:56:40.120
So your concepts can become nodes,
link |
01:56:42.960
the logical relations become links.
link |
01:56:45.200
And so then inference is like,
link |
01:56:46.640
so if this link is A implies B,
link |
01:56:48.720
this link is B implies C,
link |
01:56:50.840
then deduction builds a link A implies C.
link |
01:56:53.360
And your probabilistic algorithm
link |
01:56:54.920
can assign a certain weight there.
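As a worked toy version of that (the strength-combination rule below is a deliberately simplified stand-in, not the actual PLN deduction formula, which also takes the node probabilities into account):

```python
# Toy probabilistic term-logic deduction: from A->B and B->C, build A->C.
# The combination rule is a simplified stand-in, not the real PLN formula.

def deduce(s_ab, c_ab, s_bc, c_bc, decay=0.9):
    """Estimate (strength, confidence) of A->C from A->B and B->C."""
    s_ac = s_ab * s_bc          # naive chaining of strengths
    c_ac = c_ab * c_bc * decay  # confidence drops with each inference step
    return s_ac, c_ac

# "cat -> mammal" (0.98, 0.95) and "mammal -> animal" (0.99, 0.97)
print(deduce(0.98, 0.95, 0.99, 0.97))  # ~(0.97, 0.83) for "cat -> animal"
```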
link |
01:56:57.440
Now, you may also have like a Hebbian neural link
link |
01:57:00.040
from A to B, which is the degree to which
link |
01:57:03.600
A being the focus of attention
link |
01:57:06.640
should make B the focus of attention, right?
link |
01:57:09.080
So you could have then a neural link
link |
01:57:10.880
and you could have a symbolic,
link |
01:57:13.720
like logical inheritance link in your term logic.
link |
01:57:17.000
And they have separate meaning,
link |
01:57:19.520
but they could be used to guide each other as well.
link |
01:57:22.960
Like if there's a large amount of neural weight
link |
01:57:26.720
on the link between A and B,
link |
01:57:28.400
that may direct your logic engine to think about,
link |
01:57:30.440
well, what is the relation?
link |
01:57:31.320
Are they similar?
link |
01:57:32.160
Is there an inheritance relation?
link |
01:57:33.880
Are they similar in some context?
link |
01:57:37.400
On the other hand, if there's a logical relation
link |
01:57:39.920
between A and B, that may direct your neural component
link |
01:57:43.360
to think, well, when I'm thinking about A,
link |
01:57:45.520
should I be directing some attention to B also?
link |
01:57:48.240
Because there's a logical relation.
link |
01:57:50.160
So in terms of logic,
link |
01:57:53.000
there's a lot of thought that went into
link |
01:57:54.320
how do you break down logic relations,
link |
01:57:58.280
including basic sort of propositional logic relations
link |
01:58:02.320
as Aristotelian term logic deals with,
link |
01:58:04.160
and then quantifier logic relations also.
link |
01:58:07.080
How do you break those down elegantly into a hypergraph?
link |
01:58:10.920
Because, I mean, you can boil a logic expression down
link |
01:58:13.480
into a graph in many different ways.
link |
01:58:14.840
Many of them are very ugly, right?
link |
01:58:16.680
We tried to find elegant ways
link |
01:58:19.200
of sort of hierarchically breaking down
link |
01:58:22.600
complex logic expression into nodes and links.
link |
01:58:26.880
So that if you have say different nodes representing,
link |
01:58:31.400
Ben, AI, Lex, interview or whatever,
link |
01:58:34.200
the logic relations between those things
link |
01:58:36.800
are compact in the node and link representation.
link |
01:58:40.480
So that when you have a neural net acting
link |
01:58:42.080
on the same nodes and links,
link |
01:58:43.960
the neural net and the logic engine
link |
01:58:45.760
can sort of interoperate with each other.
link |
01:58:48.240
And also interpretable by humans.
link |
01:58:49.920
Is that an important?
link |
01:58:51.400
That's tough.
link |
01:58:52.240
Yeah, in simple cases, it's interpretable by humans.
link |
01:58:54.600
But honestly,
link |
01:58:59.600
I would say logic systems give more potential
link |
01:59:05.440
for transparency and comprehensibility
link |
01:59:09.800
than neural net systems,
link |
01:59:11.640
but you still have to work at it.
link |
01:59:12.840
Because I mean, if I show you a predicate logic proposition
link |
01:59:16.680
with like 500 nested universal and existential quantifiers
link |
01:59:20.080
and 217 variables, that's no more comprehensible
link |
01:59:23.680
than the weight matrices of a neural network, right?
link |
01:59:26.560
So I'd say the logic expressions
link |
01:59:28.560
that AI learns from its experience
link |
01:59:30.920
are mostly totally opaque to human beings
link |
01:59:33.440
and maybe even harder to understand than neural net.
link |
01:59:36.200
Because I mean, when you have multiple
link |
01:59:37.440
nested quantifier bindings,
link |
01:59:38.960
it's a very high level of abstraction.
link |
01:59:41.520
There is a difference though,
link |
01:59:42.680
in that within logic, it's a little more straightforward
link |
01:59:46.880
to pose the problem of like normalize this
link |
01:59:49.120
and boil this down to a certain form.
link |
01:59:51.080
I mean, you can do that in neural nets too.
link |
01:59:52.720
Like you can distill a neural net to a simpler form,
link |
01:59:55.680
but that's more often done to make a neural net
link |
01:59:57.280
that'll run on an embedded device or something.
link |
01:59:59.720
It's harder to distill a net to a comprehensible form
link |
02:00:03.440
than it is to simplify a logic expression
link |
02:00:05.640
to a comprehensible form, but it doesn't come for free.
link |
02:00:08.600
Like what's in the AI's mind is incomprehensible
link |
02:00:13.040
to a human unless you do some special work
link |
02:00:15.720
to make it comprehensible.
link |
02:00:16.880
So on the procedural side, there's some different
link |
02:00:20.400
and sort of interesting voodoo there.
link |
02:00:23.000
I mean, if you're familiar in computer science,
link |
02:00:25.800
there's something called the Curry Howard correspondence,
link |
02:00:27.800
which is a one to one mapping between proofs and programs.
link |
02:00:30.920
So every program can be mapped into a proof.
link |
02:00:33.520
Every proof can be mapped into a program.
link |
02:00:35.960
You can model this using category theory
link |
02:00:37.800
and a bunch of nice math,
link |
02:00:40.960
but we wanna make that practical, right?
link |
02:00:43.280
So that if you have an executable program
link |
02:00:46.520
that like moves the robot's arm or figures out
link |
02:00:49.960
in what order to say things in a dialogue,
link |
02:00:51.840
that's a procedure represented in OpenCog's hypergraph.
link |
02:00:55.840
But if you wanna reason on how to improve that procedure,
link |
02:01:00.120
you need to map that procedure into logic
link |
02:01:03.080
using Curry Howard isomorphism.
link |
02:01:05.520
So then the logic engine can reason
link |
02:01:09.320
about how to improve that procedure
link |
02:01:11.120
and then map that back into the procedural representation
link |
02:01:14.080
that is efficient for execution.
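Concretely, the Curry–Howard reading used here identifies propositions with types and proofs with programs, for example

$$\text{a proof of } A \rightarrow B \;\leftrightarrow\; \text{a program } f : A \rightarrow B, \qquad \text{a proof of } A \wedge B \;\leftrightarrow\; \text{a pair } (a, b) : A \times B,$$

so a procedure stored in the hypergraph can be re-read as a proof object, the logic engine can reason about and simplify that proof, and the simplified proof can be mapped back down to a faster executable procedure.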
link |
02:01:16.160
So again, that comes down to not just
link |
02:01:18.800
can you make your procedure into a bunch of nodes and links?
link |
02:01:21.440
Cause I mean, that can be done trivially.
link |
02:01:23.280
A C++ compiler has nodes and links inside it.
link |
02:01:26.440
Can you boil down your procedure
link |
02:01:27.960
into a bunch of nodes and links
link |
02:01:29.840
in a way that's like hierarchically decomposed
link |
02:01:32.560
and simple enough?
link |
02:01:33.680
It can reason about.
link |
02:01:34.520
Yeah, yeah, that given the resource constraints at hand,
link |
02:01:37.040
you can map it back and forth to your term logic,
link |
02:01:40.920
like fast enough
link |
02:01:42.080
and without having a bloated logic expression, right?
link |
02:01:45.200
So there's just a lot of,
link |
02:01:48.320
there's a lot of nitty gritty particulars there,
link |
02:01:50.360
but by the same token, if you ask a chip designer,
link |
02:01:54.520
like how do you make the Intel i7 chip so good?
link |
02:01:58.560
There's a long list of technical answers there,
link |
02:02:02.560
which will take a while to go through, right?
link |
02:02:04.800
And this has been decades of work.
link |
02:02:06.640
I mean, the first AI system of this nature I tried to build
link |
02:02:10.880
was called WebMind in the mid 1990s.
link |
02:02:13.440
And we had a big graph,
link |
02:02:15.600
a big graph operating in RAM implemented with Java 1.1,
link |
02:02:18.880
which was a terrible, terrible implementation idea.
link |
02:02:21.800
And then each node had its own processing.
link |
02:02:25.960
So there,
link |
02:02:27.440
the core loop looped through all nodes in the network
link |
02:02:29.560
and let each node enact what its little thing was doing.
link |
02:02:32.920
And we had logic and neural nets in there,
link |
02:02:35.880
and evolutionary learning,
link |
02:02:38.400
but we hadn't done enough of the math
link |
02:02:40.760
to get them to operate together very cleanly.
link |
02:02:43.400
So it was really, it was quite a horrible mess.
link |
02:02:46.240
So as well as shifting an implementation
link |
02:02:49.400
where the graph is its own object
link |
02:02:51.840
and the agents are separately scheduled,
link |
02:02:54.720
we've also done a lot of work
link |
02:02:56.800
on how do you represent programs?
link |
02:02:58.400
How do you represent procedures?
link |
02:03:00.800
You know, how do you represent genotypes for evolution
link |
02:03:03.640
in a way that the interoperability
link |
02:03:06.640
between the different types of learning
link |
02:03:09.000
associated with these different types of knowledge
link |
02:03:11.720
actually works?
link |
02:03:13.040
And that's been quite difficult.
link |
02:03:14.960
It's taken decades and it's totally off to the side
link |
02:03:18.600
of what the commercial mainstream of the AI field is doing,
link |
02:03:23.080
which isn't thinking about representation at all really.
link |
02:03:27.640
Although you could see like in the DNC,
link |
02:03:30.800
they had to think a little bit about
link |
02:03:32.320
how do you make representation of a map
link |
02:03:33.880
in this memory matrix work together
link |
02:03:36.680
with the representation needed
link |
02:03:38.160
for say visual pattern recognition
link |
02:03:40.240
in the hierarchical neural network.
link |
02:03:42.120
But I would say we have taken that direction
link |
02:03:45.120
of taking the types of knowledge you need
link |
02:03:47.920
for different types of learning,
link |
02:03:49.120
like declarative, procedural, attentional,
link |
02:03:52.040
and how do you make these types of knowledge representable
link |
02:03:55.520
in a way that allows cross learning
link |
02:03:58.160
across these different types of memory.
link |
02:04:00.200
We've been prototyping and experimenting with this
link |
02:04:03.920
within OpenCog and before that WebMind
link |
02:04:07.560
since the mid 1990s.
link |
02:04:10.640
Now, disappointingly to all of us,
link |
02:04:13.840
this has not yet been cashed out in an AGI system, right?
link |
02:04:18.400
I mean, we've used this system
link |
02:04:20.640
within our consulting business.
link |
02:04:22.440
So we've built natural language processing
link |
02:04:24.320
and robot control and financial analysis.
link |
02:04:27.760
We've built a bunch of sort of vertical market specific
link |
02:04:31.160
proprietary AI projects.
link |
02:04:33.600
They use OpenCog on the backend,
link |
02:04:36.720
but we haven't, that's not the AGI goal, right?
link |
02:04:39.560
It's interesting, but it's not the AGI goal.
link |
02:04:42.680
So now what we're looking at with our rebuild of the system.
link |
02:04:48.520
2.0.
link |
02:04:49.360
Yeah, we're also calling it True AGI.
link |
02:04:51.400
So we're not quite sure what the name is yet.
link |
02:04:54.800
We made a website for trueagi.io,
link |
02:04:57.480
but we haven't put anything on there yet.
link |
02:04:59.840
We may come up with an even better name.
link |
02:05:02.160
It's kind of like the real AI starting point
link |
02:05:04.960
for your AGI book.
link |
02:05:05.800
Yeah, but I like True better
link |
02:05:06.920
because True has like, you can be true hearted, right?
link |
02:05:09.760
You can be true to your girlfriend.
link |
02:05:11.040
So True has a number and it also has logic in it, right?
link |
02:05:15.720
Because logic is a key part of the system.
link |
02:05:18.280
So yeah, with the True AGI system,
link |
02:05:22.400
we're sticking with the same basic architecture,
link |
02:05:25.400
but we're trying to build on what we've learned.
link |
02:05:29.640
And one thing we've learned is that,
link |
02:05:32.360
we need type checking among dependent types
link |
02:05:36.920
to be much faster
link |
02:05:38.040
and among probabilistic dependent types to be much faster.
link |
02:05:41.120
So as it is now,
link |
02:05:43.600
you can have complex types on the nodes and links.
link |
02:05:47.120
But if you wanna put,
link |
02:05:48.360
like if you want types to be first class citizens,
link |
02:05:51.280
so that the types can be variables
link |
02:05:53.800
and then you do type checking
link |
02:05:55.680
among complex higher order types.
link |
02:05:58.040
You can do that in the system now, but it's very slow.
link |
02:06:00.960
This is stuff like it's done
link |
02:06:02.560
in cutting edge programming languages like Agda or something,
link |
02:06:05.360
these obscure research languages.
link |
02:06:07.400
On the other hand,
link |
02:06:08.600
we've been doing a lot tying together deep neural nets
link |
02:06:11.240
with symbolic learning.
link |
02:06:12.360
So we did a project for Cisco, for example,
link |
02:06:15.200
which was on, this was street scene analysis,
link |
02:06:17.440
but they had deep neural models
link |
02:06:18.600
for a bunch of cameras watching street scenes,
link |
02:06:21.000
but they trained a different model for each camera
link |
02:06:23.400
because they couldn't get the transfer learning
link |
02:06:24.840
to work between camera A and camera B.
link |
02:06:27.040
So we took what came out of all the deep neural models
link |
02:06:29.040
for the different cameras,
link |
02:06:30.400
we fed it into an OpenCog symbolic representation.
link |
02:06:33.440
Then we did some pattern mining and some reasoning
link |
02:06:36.280
on what came out of all the different cameras
link |
02:06:38.120
within the symbolic graph.
link |
02:06:39.480
And that worked well for that application.
link |
02:06:42.040
I mean, Hugo Latapie from Cisco gave a talk touching on that
link |
02:06:45.880
at last year's AGI conference, it was in Shenzhen.
link |
02:06:48.760
On the other hand, we learned from there,
link |
02:06:51.000
it was kind of clunky to get the deep neural models
link |
02:06:53.280
to work well with the symbolic system
link |
02:06:55.640
because we were using torch.
link |
02:06:58.560
And torch keeps a sort of stateful computation graph,
link |
02:07:03.560
but you needed like real time access
link |
02:07:05.280
to that computation graph within our hypergraph.
link |
02:07:07.640
And we certainly did it,
link |
02:07:10.640
Alexey Potapov, who leads our St. Petersburg team,
link |
02:07:13.080
wrote a great paper on cognitive modules in OpenCog
link |
02:07:16.480
explaining sort of how do you deal
link |
02:07:17.720
with the torch compute graph inside OpenCog.
link |
02:07:19.960
But in the end we realized like,
link |
02:07:22.840
that just hadn't been one of our design thoughts
link |
02:07:25.400
when we built OpenCog, right?
link |
02:07:27.240
So between wanting really fast dependent type checking
link |
02:07:30.680
and wanting much more efficient interoperation
link |
02:07:33.640
between the computation graphs
link |
02:07:35.160
of deep neural net frameworks and OpenCog's hypergraph
link |
02:07:37.720
and adding on top of that,
link |
02:07:40.000
wanting to more effectively run an OpenCog hypergraph
link |
02:07:42.480
distributed across RAM in 10,000 machines,
link |
02:07:45.200
we're doing dozens of machines now,
link |
02:07:47.280
but we just didn't architect it
link |
02:07:50.720
with that sort of modern scalability in mind.
link |
02:07:53.080
So these performance requirements are what have driven us
link |
02:07:56.280
to want to rearchitect the base,
link |
02:08:00.520
but the core AGI paradigm doesn't really change.
link |
02:08:05.320
Like the mathematics is the same.
link |
02:08:07.760
It's just, we can't scale to the level that we want
link |
02:08:11.440
in terms of distributed processing
link |
02:08:13.880
or speed of various kinds of processing
link |
02:08:16.280
with the current infrastructure
link |
02:08:19.160
that was built in the phase 2001 to 2008,
link |
02:08:22.880
which is hardly shocking.
link |
02:08:26.120
Well, I mean, the three things you mentioned
link |
02:08:27.880
are really interesting.
link |
02:08:28.720
So what do you think about in terms of interoperability
link |
02:08:32.320
communicating with computational graph of neural networks?
link |
02:08:36.320
What do you think about the representations
link |
02:08:38.480
that neural networks form?
link |
02:08:40.680
They're bad, but there's many ways
link |
02:08:42.920
that you could deal with that.
link |
02:08:44.360
So I've been wrestling with this a lot
link |
02:08:46.880
in some work on unsupervised grammar induction,
link |
02:08:49.920
and I have a simple paper on that.
link |
02:08:52.120
that I'll give at the next AGI conference,
link |
02:08:55.400
online portion of which is next week, actually.
link |
02:08:58.200
What is grammar induction?
link |
02:09:00.400
So this isn't AGI either,
link |
02:09:02.560
but it's sort of on the verge
link |
02:09:05.200
between narrow AI and AGI or something.
link |
02:09:08.280
Unsupervised grammar induction is the problem.
link |
02:09:11.320
Throw your AI system, a huge body of text,
link |
02:09:15.400
and have it learn the grammar of the language
link |
02:09:18.160
that produced that text.
link |
02:09:20.280
So you're not giving it labeled examples.
link |
02:09:22.600
So you're not giving it like a thousand sentences
link |
02:09:24.440
where the parses were marked up by graduate students.
link |
02:09:27.120
So it's just got to infer the grammar from the text.
link |
02:09:30.280
It's like the Rosetta Stone, but worse, right?
link |
02:09:33.440
Because you only have the one language,
link |
02:09:35.320
and you have to figure out what is the grammar.
link |
02:09:37.160
So that's not really AGI because,
link |
02:09:41.440
I mean, the way a human learns language is not that, right?
link |
02:09:44.360
I mean, we learn from language that's used in context.
link |
02:09:47.720
So it's a social embodied thing.
link |
02:09:49.320
We see how a given sentence is grounded in observation.
link |
02:09:53.520
There's an interactive element, I guess.
link |
02:09:55.200
Yeah, yeah, yeah.
link |
02:09:56.520
On the other hand, so I'm more interested in that.
link |
02:10:00.360
I'm more interested in making an AGI system learn language
link |
02:10:02.960
from its social and embodied experience.
link |
02:10:05.560
On the other hand, that's also more of a pain to do,
link |
02:10:08.240
and that would lead us into Hanson Robotics
link |
02:10:10.640
and their robotics work I've known much.
link |
02:10:12.080
We'll talk about it in a few minutes.
link |
02:10:14.600
But just as an intellectual exercise,
link |
02:10:17.120
as a learning exercise,
link |
02:10:18.840
trying to learn grammar from a corpus
link |
02:10:22.480
is very, very interesting, right?
link |
02:10:24.560
And that's been a field in AI for a long time.
link |
02:10:27.520
No one can do it very well.
link |
02:10:29.200
So we've been looking at transformer neural networks
link |
02:10:32.080
and tree transformers, which are amazing.
link |
02:10:35.760
These came out of Google Brain, actually.
link |
02:10:39.080
And actually on that team was Lukasz Kaiser,
link |
02:10:41.920
who used to work for me in
link |
02:10:44.080
the period 2005 through 2008 or something.
link |
02:10:46.960
So it's been fun to see my former
link |
02:10:50.200
sort of AGI employees disperse and do
link |
02:10:52.760
all these amazing things.
link |
02:10:54.080
Way too many sucked into Google, actually.
link |
02:10:56.080
Well, yeah, anyway.
link |
02:10:57.640
We'll talk about that too.
link |
02:10:58.960
Łukasz Kaiser and a bunch of these guys,
link |
02:11:00.640
they create transformer networks,
link |
02:11:03.200
that classic paper like attention is all you need
link |
02:11:05.480
and all these things following on from that.
link |
02:11:08.160
So we're looking at transformer networks.
link |
02:11:10.160
And like, these are able to,
link |
02:11:13.520
I mean, this is what underlies GPT2 and GPT3 and so on,
link |
02:11:16.480
which are very, very cool
link |
02:11:18.120
and have absolutely no cognitive understanding
link |
02:11:20.320
of any of the texts they're looking at.
link |
02:11:21.680
Like they're very intelligent idiots, right?
link |
02:11:24.960
Sorry to take this small tangent, I'll bring this back,
link |
02:11:28.080
but do you think GPT3 understands language?
link |
02:11:31.760
No, no, it understands nothing.
link |
02:11:34.080
It's a complete idiot.
link |
02:11:35.320
But it's a brilliant idiot.
link |
02:11:36.720
You don't think GPT20 will understand language?
link |
02:11:40.520
No, no, no.
link |
02:11:42.240
So size is not gonna buy you understanding,
link |
02:11:45.160
any more than a faster car is gonna get you to Mars.
link |
02:11:48.840
It's a completely different kind of thing.
link |
02:11:50.920
I mean, these networks are very cool.
link |
02:11:54.280
And as an entrepreneur,
link |
02:11:55.520
I can see many highly valuable uses for them.
link |
02:11:57.760
And as an artist, I love them, right?
link |
02:12:01.080
So I mean, we're using our own neural model,
link |
02:12:05.240
which is along those lines
link |
02:12:06.560
to control the Philip K. Dick robot now.
link |
02:12:09.000
And it's amazing to like train a neural model
link |
02:12:12.200
on the robot Philip K. Dick
link |
02:12:14.000
and see it come up with like crazed,
link |
02:12:15.840
stoned philosopher pronouncements,
link |
02:12:18.400
very much like what Philip K. Dick might've said, right?
link |
02:12:21.440
Like these models are super cool.
link |
02:12:24.840
And I'm working with Hanson Robotics now
link |
02:12:27.720
on using a similar, but more sophisticated one for Sophia,
link |
02:12:30.600
which we haven't launched yet.
link |
02:12:34.080
But so I think it's cool.
link |
02:12:36.080
But no, these are recognizing a large number
link |
02:12:39.480
of shallow patterns.
link |
02:12:42.200
They're not forming an abstract representation.
link |
02:12:44.840
And that's the point I was coming to
link |
02:12:47.120
when we're looking at grammar induction,
link |
02:12:50.680
we tried to mine patterns out of the structure
link |
02:12:53.520
of the transformer network.
link |
02:12:55.880
And you can, but the patterns aren't what you want.
link |
02:12:59.600
They're nasty.
link |
02:13:00.600
So I mean, if you do supervised learning,
link |
02:13:03.200
if you look at sentences where you know
link |
02:13:04.560
the correct parse of a sentence,
link |
02:13:06.520
you can learn a matrix that maps
link |
02:13:09.120
between the internal representation of the transformer
link |
02:13:12.240
and the parse of the sentence.
link |
02:13:14.120
And so then you can actually train something
link |
02:13:16.120
that will output the sentence parse
link |
02:13:18.440
from the transformer network's internal state.
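As a rough illustration of that kind of supervised probe, here is a minimal sketch, assuming PyTorch and the HuggingFace transformers API, that trains a linear map from a frozen transformer's hidden states to a simple parse-related target; the per-token parse-tree depth and the tiny training pair are hypothetical placeholders for data derived from a real treebank.

```python
# A sketch of the supervised probe described above: learn a linear map from a
# frozen transformer's hidden states to parse information. The per-token
# parse-tree depth target and the single training pair are hypothetical
# placeholders for data that would come from a real treebank.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # the transformer itself stays frozen; only the probe is trained

probe = torch.nn.Linear(encoder.config.hidden_size, 1)  # hidden state -> depth in parse tree
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

def hidden_states(sentence):
    with torch.no_grad():
        batch = tok(sentence, return_tensors="pt")
        return encoder(**batch).last_hidden_state[0]  # (num_tokens, hidden_size)

# (sentence, per-word depth in the gold parse tree) -- placeholder example
training_data = [("the cat sat on the mat", [1.0, 2.0, 0.0, 1.0, 3.0, 2.0])]

for sentence, depths in training_data * 200:
    states = hidden_states(sentence)
    pred = probe(states[1:1 + len(depths)])  # skip [CLS]; assumes one subword per word
    target = torch.tensor(depths).unsqueeze(1)
    loss = torch.nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```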
link |
02:13:20.680
And we did this, I think Christopher Manning,
link |
02:13:24.720
and some others have done this also.
link |
02:13:28.080
But I mean, what you get is that the representation
link |
02:13:30.600
is really ugly and is scattered all over the network
link |
02:13:33.200
and doesn't look like the rules of grammar
link |
02:13:34.920
that you know are the right rules of grammar, right?
link |
02:13:37.240
It's kind of ugly.
link |
02:13:38.240
So what we're actually doing is we're using
link |
02:13:41.440
a symbolic grammar learning algorithm,
link |
02:13:44.280
but we're using the transformer neural network
link |
02:13:46.760
as a sentence probability oracle.
link |
02:13:48.880
So like if you have a rule of grammar
link |
02:13:52.120
and you aren't sure if it's a correct rule of grammar or not,
link |
02:13:54.800
you can generate a bunch of sentences
link |
02:13:56.440
using that rule of grammar
link |
02:13:58.040
and a bunch of sentences violating that rule of grammar.
link |
02:14:00.880
And you can see whether the transformer model
link |
02:14:04.480
thinks the sentences obeying the rule of grammar
link |
02:14:06.720
are more probable than the sentences
link |
02:14:08.280
disobeying the rule of grammar.
link |
02:14:10.080
So in that way, you can use the neural model
link |
02:14:11.840
as a sentence probability oracle
link |
02:14:13.840
to guide a symbolic grammar learning process.
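Here is a minimal sketch of that sentence-probability-oracle idea, assuming the HuggingFace transformers API; the candidate rule and the example sentences are hypothetical placeholders for what a symbolic grammar learner would actually generate.

```python
# A sketch of using a neural language model as a "sentence probability oracle"
# to score a candidate grammar rule. The candidate rule and the sentence lists
# are hypothetical placeholders for what a symbolic grammar learner would
# generate from, and in violation of, the rule.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
lm.eval()

def sentence_log_prob(sentence):
    """Average per-token log-probability of the sentence under the language model."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)
    return -out.loss.item()  # out.loss is mean negative log-likelihood per token

def rule_score(obeying, violating):
    """Positive score => rule-obeying sentences look more probable to the oracle."""
    obey = sum(map(sentence_log_prob, obeying)) / len(obeying)
    viol = sum(map(sentence_log_prob, violating)) / len(violating)
    return obey - viol

# Hypothetical candidate rule: "a determiner precedes its noun".
obeying = ["the dog barked loudly", "a child read the book"]
violating = ["dog the barked loudly", "child a read book the"]
print(rule_score(obeying, violating))  # a score > 0 would support keeping the rule
```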
link |
02:14:19.960
And that seems to work better than trying to milk
link |
02:14:24.000
the grammar out of the neural network
link |
02:14:25.840
that doesn't have it in there.
link |
02:14:26.760
So I think the thing is these neural nets
link |
02:14:29.480
are not getting a semantically meaningful representation
link |
02:14:32.880
internally by and large.
link |
02:14:35.360
So one line of research is to try to get them to do that.
link |
02:14:38.120
And InfoGAN was trying to do that.
link |
02:14:40.000
So like if you look back like two years ago,
link |
02:14:43.040
there were all these papers on, like, Edward,
link |
02:14:45.280
this probabilistic programming neural net framework
link |
02:14:47.400
that Google had, which came out of InfoGAN.
link |
02:14:49.640
So the idea there was like you could train
link |
02:14:53.720
an InfoGAN neural net model,
link |
02:14:55.600
which is a generative adversarial network
link |
02:14:57.200
to recognize and generate faces.
link |
02:14:59.200
And the model would automatically learn a variable
link |
02:15:02.160
for how long the nose is and automatically learn a variable
link |
02:15:04.400
for how wide the eyes are
link |
02:15:05.760
or how big the lips are or something, right?
link |
02:15:08.040
So it automatically learned these variables,
link |
02:15:11.040
which have a semantic meaning.
link |
02:15:12.480
So that was a rare case where a neural net
link |
02:15:15.320
trained with a fairly standard GAN method
link |
02:15:18.080
was able to actually learn the semantic representation.
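A compressed PyTorch sketch of the InfoGAN mechanism being described: structured latent codes go into the generator alongside the noise, and an auxiliary head is trained to recover them from the generated image, which is what pushes those codes toward semantically meaningful factors like nose length; the toy architectures below are stand-ins, not the original InfoGAN networks.

```python
# A compressed PyTorch sketch of the InfoGAN mechanism: structured latent codes c
# (the would-be "nose length" / "eye width" variables) go into the generator with
# the noise z, and an auxiliary Q head is trained to recover c from the generated
# image. Making c recoverable (a mutual-information bound) is what pushes those
# variables toward semantically meaningful factors. Toy architectures only.
import torch
import torch.nn as nn

NOISE_DIM, CODE_DIM, IMG_DIM = 62, 4, 28 * 28

G = nn.Sequential(nn.Linear(NOISE_DIM + CODE_DIM, 256), nn.ReLU(),
                  nn.Linear(256, IMG_DIM), nn.Tanh())            # generator
D_body = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2))
D_head = nn.Linear(256, 1)                                       # real/fake logit
Q_head = nn.Linear(256, CODE_DIM)                                # tries to recover c

opt_g = torch.optim.Adam(list(G.parameters()) + list(Q_head.parameters()), lr=2e-4)

def generator_step(batch_size=16):
    z = torch.randn(batch_size, NOISE_DIM)
    c = torch.rand(batch_size, CODE_DIM) * 2 - 1          # continuous codes in [-1, 1]
    fake = G(torch.cat([z, c], dim=1))
    feats = D_body(fake)
    adv_loss = nn.functional.binary_cross_entropy_with_logits(
        D_head(feats), torch.ones(batch_size, 1))         # try to fool the discriminator
    info_loss = nn.functional.mse_loss(Q_head(feats), c)  # make the codes recoverable
    loss = adv_loss + info_loss
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()

generator_step()  # discriminator updates and real training data omitted for brevity
```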
link |
02:15:20.880
So for many years, many of us tried to take that
link |
02:15:23.240
the next step and get a GAN type neural network
link |
02:15:27.200
that would have not just a list of semantic latent variables,
link |
02:15:31.680
but would have say a Bayes net of semantic latent variables
link |
02:15:33.960
with dependencies between them.
link |
02:15:35.440
The whole programming framework Edward was made for that.
link |
02:15:38.840
I mean, no one got it to work, right?
link |
02:15:40.720
And it could be.
link |
02:15:41.560
Do you think it's possible?
link |
02:15:42.960
Yeah, do you think?
link |
02:15:43.800
I don't know.
link |
02:15:44.760
It might be that back propagation just won't work for it
link |
02:15:47.280
because the gradients are too screwed up.
link |
02:15:49.720
Maybe you could get it to work using CMA-ES
link |
02:15:52.000
or some like floating point evolutionary algorithm.
link |
02:15:54.840
We tried, we didn't get it to work.
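For concreteness, here is a minimal sketch of that gradient-free route: a simple evolution-strategy loop over a flat parameter vector, standing in for full CMA-ES; the fitness function is a hypothetical placeholder for whatever objective the latent-variable model would actually be trained on.

```python
# A sketch of the gradient-free route: a simple (mu, lambda)-style evolution
# strategy over a flat parameter vector, a crude stand-in for full CMA-ES.
# `fitness` is a hypothetical placeholder objective (lower is better).
import numpy as np

def fitness(params):
    return float(np.sum(params ** 2))  # placeholder for the real training objective

def simple_es(dim, population=32, elites=8, sigma=0.5, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    mean = rng.normal(size=dim)
    for _ in range(generations):
        offspring = mean + sigma * rng.normal(size=(population, dim))  # sample around the mean
        scores = np.array([fitness(x) for x in offspring])
        elite = offspring[np.argsort(scores)[:elites]]                 # keep the best few
        mean = elite.mean(axis=0)                                      # recombine
        sigma *= 0.99                                                  # slowly shrink step size
    return mean

best = simple_es(dim=10)
print(fitness(best))  # approaches 0 on the placeholder objective
```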
link |
02:15:57.000
Eventually we just paused that rather than gave it up.
link |
02:16:01.360
We paused that and said, well, okay, let's try
link |
02:16:04.000
more innovative ways to learn implicit,
link |
02:16:08.640
to learn what are the representations implicit
link |
02:16:11.000
in that network without trying to make it grow
link |
02:16:13.640
inside that network.
link |
02:16:14.720
And I described how we're doing that in language.
link |
02:16:19.720
You can do similar things in vision, right?
link |
02:16:21.440
So what?
link |
02:16:22.280
Use it as an oracle.
link |
02:16:23.360
Yeah, yeah, yeah.
link |
02:16:24.200
So you can, that's one way is that you use
link |
02:16:26.240
a structure learning algorithm, which is symbolic.
link |
02:16:29.120
And then you use the deep neural net as an oracle
link |
02:16:32.480
to guide the structure learning algorithm.
link |
02:16:34.240
The other way to do it is like InfoGAN was trying to do
link |
02:16:37.880
and try to tweak the neural network
link |
02:16:40.040
to have the symbolic representation inside it.
link |
02:16:43.760
I tend to think what the brain is doing
link |
02:16:46.440
is more like using the deep neural net type thing
link |
02:16:51.680
as an oracle.
link |
02:16:52.520
I think the visual cortex or the cerebellum
link |
02:16:56.680
are probably learning a non semantically meaningful
link |
02:17:00.280
opaque tangled representation.
link |
02:17:02.400
And then when they interface with the more cognitive parts
link |
02:17:04.600
of the cortex, the cortex is sort of using those
link |
02:17:08.080
as an oracle and learning the abstract representation.
link |
02:17:10.720
So if you do sports, say take for example,
link |
02:17:13.200
serving in tennis, right?
link |
02:17:15.240
I mean, my tennis serve is okay, not great,
link |
02:17:17.680
but I learned it by trial and error, right?
link |
02:17:19.760
And I mean, I learned music by trial and error too.
link |
02:17:22.120
I just sit down and play, but then if you're an athlete,
link |
02:17:25.960
which I'm not a good athlete,
link |
02:17:27.080
I mean, then you'll watch videos of yourself serving
link |
02:17:30.360
and your coach will help you think about what you're doing
link |
02:17:32.760
and you'll then form a declarative representation,
link |
02:17:35.040
but your cerebellum maybe didn't have
link |
02:17:37.160
a declarative representation.
link |
02:17:38.640
Same way with music, like I will hear something in my head,
link |
02:17:43.560
I'll sit down and play the thing like I heard it.
link |
02:17:46.960
And then I will try to study what my fingers did
link |
02:17:51.000
to see like, what did you just play?
link |
02:17:52.760
Like how did you do that, right?
link |
02:17:55.600
Because if you're composing,
link |
02:17:57.720
you may wanna see how you did it
link |
02:17:59.720
and then declaratively morph that in some way
link |
02:18:02.680
that your fingers wouldn't think of, right?
link |
02:18:05.240
But the physiological movement may come out of some opaque,
link |
02:18:10.280
like cerebellar reinforcement learned thing, right?
link |
02:18:14.440
And so that's, I think trying to milk the structure
link |
02:18:17.680
of a neural net by treating it as an oracle,
link |
02:18:19.320
may be more like how your declarative mind postprocesses
link |
02:18:23.960
what your visual or motor cortex is doing.
link |
02:18:27.760
I mean, in vision, it's the same way,
link |
02:18:29.400
like you can recognize beautiful art
link |
02:18:34.800
much better than you can say why
link |
02:18:36.760
you think that piece of art is beautiful.
link |
02:18:38.520
But if you're trained as an art critic,
link |
02:18:40.520
you do learn to say why.
link |
02:18:41.680
And some of it's bullshit, but some of it isn't, right?
link |
02:18:44.040
Some of it is learning to map sensory knowledge
link |
02:18:46.840
into declarative and linguistic knowledge,
link |
02:18:51.120
yet without necessarily making the sensory system itself
link |
02:18:56.040
use a transparent and an easily communicable representation.
link |
02:19:00.640
Yeah, that's fascinating to think of neural networks
link |
02:19:02.960
as like dumb question answerers that you can just milk
link |
02:19:08.200
to build up a knowledge base.
link |
02:19:10.920
And then it can be multiple networks, I suppose,
link |
02:19:12.680
from different.
link |
02:19:13.600
Yeah, yeah, so I think if a group like DeepMind or OpenAI
link |
02:19:18.160
were to build AGI, and I think DeepMind is like
link |
02:19:21.520
a thousand times more likely from what I could tell,
link |
02:19:25.920
because they've hired a lot of people with broad minds
link |
02:19:30.040
and many different approaches and angles on AGI,
link |
02:19:34.360
whereas OpenAI is also awesome,
link |
02:19:36.640
but I see them as more of like a pure
link |
02:19:39.040
deep reinforcement learning shop.
link |
02:19:41.160
Yeah, this time, I got you.
link |
02:19:42.000
So far. Yeah, there's a lot of,
link |
02:19:43.880
you're right, I mean, there's so much interdisciplinary
link |
02:19:48.600
work at DeepMind, like neuroscience.
link |
02:19:50.280
And you put that together with Google Brain,
link |
02:19:52.240
which granted they're not working that closely together now,
link |
02:19:54.760
but my oldest son Zarathustra is doing his PhD
link |
02:19:58.840
in machine learning applied to automated theorem proving
link |
02:20:01.640
in Prague under Josef Urban.
link |
02:20:03.840
So the first paper, DeepMath, which applied deep neural nets
link |
02:20:08.400
to guide theorem proving was out of Google Brain.
link |
02:20:10.680
I mean, by now, the automated theorem proving community
link |
02:20:14.960
is going way, way, way beyond anything Google was doing,
link |
02:20:18.360
but still, yeah, but anyway,
link |
02:20:21.120
if that community was gonna make an AGI,
link |
02:20:23.760
probably one way they would do it was,
link |
02:20:27.160
take 25 different neural modules,
link |
02:20:30.680
architected in different ways,
link |
02:20:32.040
maybe resembling different parts of the brain,
link |
02:20:33.800
like a basal ganglia model, cerebellum model,
link |
02:20:36.280
a thalamus module, a few hippocampus models,
link |
02:20:40.440
number of different models,
link |
02:20:41.480
representing parts of the cortex, right?
link |
02:20:43.680
Take all of these and then wire them together
link |
02:20:47.920
to co train and learn them together like that.
link |
02:20:52.520
That would be an approach to creating an AGI.
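A toy PyTorch sketch of that wire-modules-together-and-co-train idea follows; the module names are loose brain analogies for illustration only, not a claim about what DeepMind or the brain actually does, and the data is random.

```python
# A toy PyTorch sketch of wiring differently-architected modules together and
# co-training them end to end with one optimizer. The module names are loose
# brain analogies for illustration only, and the data here is random.
import torch
import torch.nn as nn

class TinyBrain(nn.Module):
    def __init__(self, obs_dim=16, hidden=32, n_actions=4):
        super().__init__()
        self.cortex = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())  # perception
        self.hippocampus = nn.GRU(hidden, hidden, batch_first=True)         # sequence memory
        self.basal_ganglia = nn.Linear(hidden, n_actions)                   # action selection
        self.cerebellum = nn.Linear(hidden, obs_dim)                        # fine-grained prediction

    def forward(self, obs_seq):
        feats = self.cortex(obs_seq)          # (batch, time, hidden)
        mem, _ = self.hippocampus(feats)
        return self.basal_ganglia(mem[:, -1]), self.cerebellum(mem)

model = TinyBrain()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # one optimizer co-trains every module

obs = torch.randn(8, 5, 16)            # fake observation sequences
next_obs = torch.randn(8, 5, 16)       # fake prediction targets
actions = torch.randint(0, 4, (8,))    # fake action labels

action_logits, predicted_next = model(obs)
loss = (nn.functional.cross_entropy(action_logits, actions)
        + nn.functional.mse_loss(predicted_next, next_obs))
opt.zero_grad()
loss.backward()
opt.step()
```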
link |
02:20:57.240
One could implement something like that efficiently
link |
02:20:59.640
on top of our true AGI, like OpenCog 2.0 system,
link |
02:21:03.800
once it exists, although obviously Google
link |
02:21:06.640
has their own highly efficient implementation architecture.
link |
02:21:10.240
So I think that's a decent way to build AGI.
link |
02:21:13.280
I was very interested in that in the mid 90s,
link |
02:21:15.680
but I mean, the knowledge about how the brain works
link |
02:21:19.440
sort of pissed me off, like it wasn't there yet.
link |
02:21:21.520
Like, you know, in the hippocampus,
link |
02:21:23.080
you have these concept neurons,
link |
02:21:24.760
like the so called grandmother neuron,
link |
02:21:26.720
which everyone laughed at, but it's actually there.
link |
02:21:28.520
Like I have some Lex Fridman neurons
link |
02:21:31.080
that fire differentially when I see you
link |
02:21:33.280
and not when I see any other person, right?
link |
02:21:35.360
So how do these Lex Fridman neurons,
link |
02:21:38.880
how do they coordinate with the distributed representation
link |
02:21:41.400
of Lex Fridman I have in my cortex, right?
link |
02:21:44.520
There's some back and forth between cortex and hippocampus
link |
02:21:47.680
that lets these discrete symbolic representations
link |
02:21:50.120
in hippocampus correlate and cooperate
link |
02:21:53.200
with the distributed representations in cortex.
link |
02:21:55.680
This probably has to do with how the brain
link |
02:21:57.400
does its version of abstraction and quantifier logic, right?
link |
02:22:00.240
Like you can have a single neuron in the hippocampus
link |
02:22:02.640
that activates a whole distributed activation pattern
link |
02:22:05.880
in cortex, well, this may be how the brain does
link |
02:22:09.080
like symbolization and abstraction
link |
02:22:11.120
as in functional programming or something,
link |
02:22:14.280
but we can't measure it.
link |
02:22:15.360
Like we don't have enough electrodes stuck
link |
02:22:17.560
between the cortex and the hippocampus
link |
02:22:20.960
in any known experiment to measure it.
link |
02:22:23.080
So I got frustrated with that direction,
link |
02:22:26.360
not because it's impossible.
link |
02:22:27.560
Because we just don't understand enough yet.
link |
02:22:29.720
Of course, it's a valid research direction.
link |
02:22:31.760
You can try to understand more and more.
link |
02:22:33.720
And we are measuring more and more
link |
02:22:34.960
about what happens in the brain now than ever before.
link |
02:22:38.120
So it's quite interesting.
link |
02:22:40.560
On the other hand, I sort of got more
link |
02:22:43.400
of an engineering mindset about AGI.
link |
02:22:46.520
I'm like, well, okay,
link |
02:22:47.920
we don't know how the brain works that well.
link |
02:22:50.200
We don't know how birds fly that well yet either.
link |
02:22:52.360
We have no idea how a hummingbird flies
link |
02:22:54.080
in terms of the aerodynamics of it.
link |
02:22:56.280
On the other hand, we know basic principles
link |
02:22:59.280
of like flapping and pushing the air down.
link |
02:23:01.760
And we know the basic principles
link |
02:23:03.520
of how the different parts of the brain work.
link |
02:23:05.720
So let's take those basic principles
link |
02:23:07.480
and engineer something that embodies those basic principles,
link |
02:23:11.480
but is well designed for the hardware
link |
02:23:14.040
that we have on hand right now.
link |
02:23:18.080
So do you think we can create AGI
link |
02:23:20.200
before we understand how the brain works?
link |
02:23:22.440
I think that's probably what will happen.
link |
02:23:25.120
And maybe the AGI will help us do better brain imaging
link |
02:23:28.560
that will then let us build artificial humans,
link |
02:23:30.880
which is very, very interesting to us
link |
02:23:33.400
because we are humans, right?
link |
02:23:34.960
I mean, building artificial humans is super worthwhile.
link |
02:23:38.840
I just think it's probably not the shortest path to AGI.
link |
02:23:42.760
So it's fascinating idea that we would build AGI
link |
02:23:45.680
to help us understand ourselves.
link |
02:23:50.040
A lot of people ask me if the young people
link |
02:23:54.600
interested in doing artificial intelligence,
link |
02:23:56.440
they look at sort of doing graduate level, even undergrads,
link |
02:24:01.440
but graduate level research and they see
link |
02:24:04.520
where the artificial intelligence community stands now,
link |
02:24:06.840
it's not really AGI type research for the most part.
link |
02:24:09.920
So the natural question they ask is
link |
02:24:12.080
what advice would you give?
link |
02:24:13.640
I mean, maybe I could ask if people were interested
link |
02:24:17.320
in working on OpenCog or in some kind of direct
link |
02:24:22.520
or indirect connection to OpenCog or AGI research,
link |
02:24:25.160
what would you recommend?
link |
02:24:28.040
OpenCog, first of all, is an open source project.
link |
02:24:30.960
There's a Google group discussion list.
link |
02:24:35.360
There's a GitHub repository.
link |
02:24:36.760
So if anyone's interested in lending a hand
link |
02:24:39.800
with that aspect of AGI,
link |
02:24:42.600
introduce yourself on the OpenCog email list.
link |
02:24:46.000
And there's a Slack as well.
link |
02:24:47.920
I mean, we're certainly interested to have inputs
link |
02:24:53.080
into our redesign process for a new version of OpenCog,
link |
02:24:57.520
but also we're doing a lot of very interesting research.
link |
02:25:01.160
I mean, we're working on data analysis
link |
02:25:04.080
for COVID clinical trials.
link |
02:25:05.600
We're working with Hanson Robotics.
link |
02:25:06.960
We're doing a lot of cool things
link |
02:25:08.000
with the current version of OpenCog now.
link |
02:25:10.720
So there's certainly opportunity to jump into OpenCog
link |
02:25:14.720
or various other open source AGI oriented projects.
link |
02:25:18.760
So would you say there's like masters
link |
02:25:20.280
and PhD theses in there?
link |
02:25:22.080
Plenty, yeah, plenty, of course.
link |
02:25:23.960
I mean, the challenge is to find a supervisor
link |
02:25:26.920
who wants to foster that sort of research,
link |
02:25:29.720
but it's way easier than it was when I got my PhD, right?
link |
02:25:32.840
It's okay, great.
link |
02:25:33.680
We talked about OpenCog, which is kind of one,
link |
02:25:36.360
the software framework,
link |
02:25:38.000
but also the actual attempt to build an AGI system.
link |
02:25:44.160
And then there is this exciting idea of SingularityNet.
link |
02:25:48.600
So maybe can you say first what is SingularityNet?
link |
02:25:53.160
Sure, sure.
link |
02:25:54.280
SingularityNet is a platform
link |
02:25:59.040
for realizing a decentralized network
link |
02:26:05.880
of artificial intelligences.
link |
02:26:08.280
So Marvin Minsky, the AI pioneer who I knew a little bit,
link |
02:26:14.440
he had the idea of a society of minds,
link |
02:26:16.560
like you should achieve an AI
link |
02:26:18.360
not by writing one algorithm or one program,
link |
02:26:21.040
but you should put a bunch of different AIs out there
link |
02:26:24.000
and the different AIs will interact with each other,
link |
02:26:27.760
each playing their own role.
link |
02:26:29.480
And then the totality of the society of AIs
link |
02:26:32.560
would be the thing
link |
02:26:34.240
that displayed the human level intelligence.
link |
02:26:36.560
And I had, when he was alive,
link |
02:26:39.000
I had many debates with Marvin about this idea.
link |
02:26:43.000
And I think he really thought the mind
link |
02:26:49.080
was more like a society than I do.
link |
02:26:51.200
Like I think you could have a mind
link |
02:26:54.080
that was as disorganized as a human society,
link |
02:26:56.720
but I think a human like mind
link |
02:26:57.880
has a bit more central control than that actually.
link |
02:27:00.080
Like, I mean, we have this thalamus
link |
02:27:02.840
and the medulla and limbic system.
link |
02:27:04.760
We have a sort of top down control system
link |
02:27:07.960
that guides much of what we do,
link |
02:27:10.840
more so than a society does.
link |
02:27:12.760
So I think he stretched that metaphor a little too far,
link |
02:27:16.880
but I also think there's something interesting there.
link |
02:27:20.840
And so in the 90s,
link |
02:27:24.040
when I started my first sort of nonacademic AI project,
link |
02:27:27.960
WebMind, which was an AI startup in New York
link |
02:27:30.960
in the Silicon Alley area in the late 90s,
link |
02:27:34.640
what I was aiming to do there
link |
02:27:36.280
was make a distributed society of AIs,
link |
02:27:40.000
the different parts of which would live
link |
02:27:41.360
on different computers all around the world.
link |
02:27:43.640
And each one would do its own thinking
link |
02:27:45.240
about the data local to it,
link |
02:27:47.080
but they would all share information with each other
link |
02:27:48.960
and outsource work with each other and cooperate.
link |
02:27:51.320
And the intelligence would be in the whole collective.
link |
02:27:54.040
And I organized a conference together with Francis Heylighen
link |
02:27:57.680
at Free University of Brussels in 2001,
link |
02:28:00.600
which was the Global Brain Zero Conference.
link |
02:28:02.920
And we're planning the next version,
link |
02:28:04.680
the Global Brain One Conference
link |
02:28:06.920
at the Free University of Brussels for next year, 2021.
link |
02:28:10.120
So 20 years after.
link |
02:28:12.000
And then maybe we can have the next one 10 years after that,
link |
02:28:14.560
like exponentially faster until the singularity comes, right?
link |
02:28:19.320
The timing is right, yeah.
link |
02:28:20.680
Yeah, yeah, exactly.
link |
02:28:22.160
So yeah, the idea with the Global Brain
link |
02:28:25.000
was maybe the AI won't just be in a program
link |
02:28:28.120
on one guy's computer,
link |
02:28:29.560
but the AI will be in the internet as a whole
link |
02:28:32.960
with the cooperation of different AI modules
link |
02:28:35.080
living in different places.
link |
02:28:37.040
So one of the issues you face
link |
02:28:39.280
when architecting a system like that
link |
02:28:41.160
is, you know, how is the whole thing controlled?
link |
02:28:44.760
Do you have like a centralized control unit
link |
02:28:47.200
that pulls the puppet strings
link |
02:28:48.640
of all the different modules there?
link |
02:28:50.720
Or do you have a fundamentally decentralized network
link |
02:28:55.480
where the society of AIs is controlled
link |
02:28:59.320
in some democratic and self organized way,
link |
02:29:01.040
by all the AIs in that society, right?
link |
02:29:04.760
And Francis and I had different view of many things,
link |
02:29:08.680
but we both wanted to make like a global society
link |
02:29:13.680
of AI minds with a decentralized organizational mode.
link |
02:29:19.840
Now, the main difference was he wanted the individual AIs
link |
02:29:25.400
to be all incredibly simple
link |
02:29:27.440
and all the intelligence to be on the collective level.
link |
02:29:30.360
Whereas I thought that was cool,
link |
02:29:32.960
but I thought a more practical way to do it might be
link |
02:29:35.880
if some of the agents in the society of minds
link |
02:29:39.480
were fairly generally intelligent on their own.
link |
02:29:41.520
So like you could have a bunch of OpenCogs out there
link |
02:29:44.480
and a bunch of simpler learning systems.
link |
02:29:47.120
And then these are all cooperating, coordinating together
link |
02:29:49.840
sort of like in the brain.
link |
02:29:51.760
Okay, the brain as a whole is the general intelligence,
link |
02:29:55.320
but some parts of the cortex,
link |
02:29:56.640
you could say have a fair bit of general intelligence
link |
02:29:58.560
on their own,
link |
02:29:59.720
whereas say parts of the cerebellum or limbic system
link |
02:30:02.120
have very little general intelligence on their own.
link |
02:30:04.520
And they're contributing to general intelligence
link |
02:30:07.240
by way of their connectivity to other modules.
link |
02:30:10.880
Do you see instantiations of the same kind of,
link |
02:30:13.680
maybe different versions of OpenCog,
link |
02:30:15.400
but also just the same version of OpenCog
link |
02:30:17.320
and maybe many instantiations of it as being all parts of it?
link |
02:30:21.320
That's what David Hanson and I want to do
link |
02:30:23.040
with many Sophia and other robots.
link |
02:30:25.320
Each one has its own individual mind living on the server,
link |
02:30:29.200
but there's also a collective intelligence infusing them
link |
02:30:32.080
and a part of the mind living on the edge in each robot.
link |
02:30:35.440
So the thing is at that time,
link |
02:30:38.520
as well as WebMind being implemented in Java 1.1
link |
02:30:41.840
as like a massive distributed system,
link |
02:30:46.920
blockchain wasn't there yet.
link |
02:30:48.160
So how to do this decentralized control?
link |
02:30:51.880
We sort of knew it.
link |
02:30:52.880
We knew about distributed systems.
link |
02:30:54.360
We knew about encryption.
link |
02:30:55.760
So I mean, we had the key principles
link |
02:30:58.080
of what underlies blockchain now,
link |
02:31:00.080
but I mean, we didn't put it together
link |
02:31:01.760
in the way that it's been done now.
link |
02:31:02.880
So when Vitalik Buterin and colleagues
link |
02:31:05.360
came out with Ethereum blockchain,
link |
02:31:08.120
many, many years later, like 2013 or something,
link |
02:31:11.840
then I was like, well, this is interesting.
link |
02:31:13.920
Like this Solidity scripting language.
link |
02:31:17.000
It's kind of dorky in a way.
link |
02:31:18.520
And I don't see why you need a Turing complete language
link |
02:31:21.440
for this purpose.
link |
02:31:22.440
But on the other hand,
link |
02:31:24.320
this is like the first time I could sit down
link |
02:31:27.160
and start to like script infrastructure
link |
02:31:29.920
for decentralized control of the AIs
link |
02:31:32.440
in this society of minds in a tractable way.
link |
02:31:35.240
Like you can hack the Bitcoin code base,
link |
02:31:37.200
but it's really annoying.
link |
02:31:38.520
Whereas Solidity, Ethereum's scripting language,
link |
02:31:41.720
is just nicer and easier to use.
link |
02:31:44.440
I'm very annoyed with it by this point.
link |
02:31:45.880
But like Java, I mean, these languages are amazing
link |
02:31:49.000
when they first come out.
link |
02:31:50.920
So then I came up with the idea
link |
02:31:52.480
that turned into SingularityNet.
link |
02:31:53.840
Okay, let's make a decentralized agent system
link |
02:31:58.200
where a bunch of different AIs,
link |
02:32:00.480
wrapped up in say different Docker containers
link |
02:32:02.680
or LXC containers,
link |
02:32:04.320
different AIs can each of them have their own identity
link |
02:32:07.440
on the blockchain.
link |
02:32:08.760
And the coordination of this community of AIs
link |
02:32:11.800
has no central controller, no dictator, right?
link |
02:32:14.680
And there's no central repository of information.
link |
02:32:17.160
The coordination of the society of minds
link |
02:32:19.400
is done entirely by the decentralized network
link |
02:32:22.680
in a decentralized way by the algorithms, right?
link |
02:32:25.840
Because the model of Bitcoin is in math we trust, right?
link |
02:32:29.200
And so that's what you need.
link |
02:32:30.800
You need the society of minds to trust only in math,
link |
02:32:33.880
not trust only in one centralized server.
link |
02:32:37.720
So the AI systems themselves are outside of the blockchain,
link |
02:32:40.640
but then the communication between them.
link |
02:32:41.800
At the moment, yeah, yeah.
link |
02:32:43.960
I would have loved to put the AI's operations on chain
link |
02:32:46.880
in some sense, but in Ethereum, it's just too slow.
link |
02:32:50.480
You can't do it.
link |
02:32:52.680
So how does the basic communication between AI systems work?
link |
02:32:56.120
That's the distributed part.
link |
02:32:58.360
Basically an AI is just some software in SingularityNet.
link |
02:33:02.520
An AI is just some software process living in a container.
link |
02:33:05.920
And there's a proxy that lives in that container
link |
02:33:09.040
along with the AI that handles the interaction
link |
02:33:10.840
with the rest of SingularityNet.
link |
02:33:13.120
And then when one AI wants to contribute
link |
02:33:15.880
with another one in the network,
link |
02:33:16.920
they set up a number of channels.
link |
02:33:18.600
And the setup of those channels uses the Ethereum blockchain.
link |
02:33:22.600
Once the channels are set up,
link |
02:33:24.480
then data flows along those channels
link |
02:33:26.160
without having to be on the blockchain.
link |
02:33:29.240
All that goes on the blockchain is the fact
link |
02:33:31.080
that some data went along that channel.
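A hypothetical sketch of that on-chain/off-chain split follows: agent identities and channel events go on the ledger, while the actual AI payloads flow directly between agents; the class and method names here are illustrative only and are not the real SingularityNet API.

```python
# A sketch of the on-chain / off-chain split: identities and channel events are
# recorded on a ledger, while the actual AI payloads pass directly between the
# agents. These class and method names are illustrative only, not the real
# SingularityNet API.
import time
import uuid

class Ledger:  # stand-in for the blockchain
    def __init__(self):
        self.records = []

    def record(self, event, **fields):
        self.records.append({"event": event, "time": time.time(), **fields})

class Agent:
    def __init__(self, name, ledger, handler):
        self.id = str(uuid.uuid4())
        self.ledger = ledger
        self.handler = handler  # the AI itself: request -> response
        ledger.record("register_agent", agent=self.id, name=name)

    def open_channel(self, other):
        channel_id = str(uuid.uuid4())
        self.ledger.record("open_channel", channel=channel_id,
                           from_agent=self.id, to_agent=other.id)
        return channel_id

    def call(self, other, channel_id, request):
        response = other.handler(request)                        # data flows off chain
        self.ledger.record("channel_used", channel=channel_id)   # only the fact it was used
        return response

ledger = Ledger()
summarizer = Agent("summarizer", ledger, lambda text: text[:40] + "...")
client = Agent("client", ledger, lambda request: None)
channel = client.open_channel(summarizer)
print(client.call(summarizer, channel, "a long document that needs summarizing " * 5))
print(len(ledger.records))  # registrations, channel opening, channel usage -- no payloads
```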
link |
02:33:33.160
So you can do...
link |
02:33:34.240
So there's not a shared knowledge base.
link |
02:33:38.720
Well, the identity of each agent is on the blockchain,
link |
02:33:43.160
on the Ethereum blockchain.
link |
02:33:44.800
If one agent rates the reputation of another agent,
link |
02:33:48.000
that goes on the blockchain.
link |
02:33:49.560
And agents can publish what APIs they will fulfill
link |
02:33:52.880
on the blockchain.
link |
02:33:54.520
But the actual data for AI and the results for AI
link |
02:33:58.040
is not on the blockchain.
link |
02:33:58.880
Do you think it could be?
link |
02:33:59.720
Do you think it should be?
link |
02:34:02.320
In some cases, it should be.
link |
02:34:04.120
In some cases, maybe it shouldn't be.
link |
02:34:05.880
But I mean, I think that...
link |
02:34:09.320
So I'll give you an example.
link |
02:34:10.160
Using Ethereum, you can't do it.
link |
02:34:11.640
Using now, there's more modern and faster blockchains
link |
02:34:16.640
where you could start to do that in some cases.
link |
02:34:21.920
Two years ago, that was less so.
link |
02:34:23.360
It's a very rapidly evolving ecosystem.
link |
02:34:25.640
So like one example, maybe you can comment on
link |
02:34:28.920
something I worked a lot on is autonomous vehicles.
link |
02:34:31.840
You can see each individual vehicle as an AI system.
link |
02:34:35.680
And you can see vehicles from Tesla, for example,
link |
02:34:39.600
and then Ford and GM and all these as also like larger...
link |
02:34:44.600
I mean, they all are running the same kind of system
link |
02:34:47.000
on each set of vehicles.
link |
02:34:49.280
So it's individual AI systems in individual vehicles,
link |
02:34:52.360
but it's all different.
link |
02:34:53.800
The instantiation is the same AI system within the same company.
link |
02:34:57.520
So you can envision a situation where all of those AI systems
link |
02:35:02.360
are put on SingularityNet, right?
link |
02:35:05.400
And how do you see that happening?
link |
02:35:10.160
And what would be the benefit?
link |
02:35:11.520
And could they share data?
link |
02:35:13.000
I guess one of the biggest things is that there's power
link |
02:35:16.440
in decentralized control, but the benefit would be,
link |
02:35:20.440
it's really nice if they can somehow share the knowledge
link |
02:35:24.080
in an open way if they choose to.
link |
02:35:26.280
Yeah, yeah, yeah, those are all quite good points.
link |
02:35:29.920
So I think the benefit from being on the decentralized network
link |
02:35:37.760
as we envision it is that we want the AIs in the network
link |
02:35:41.320
to be outsourcing work to each other
link |
02:35:43.800
and making API calls to each other frequently.
link |
02:35:47.440
So the real benefit would be if that AI wanted to outsource
link |
02:35:51.880
some cognitive processing or data processing
link |
02:35:54.920
or data pre processing, whatever,
link |
02:35:56.720
to some other AIs in the network,
link |
02:35:59.320
which specialize in something different.
link |
02:36:01.600
And this really requires a different way of thinking
link |
02:36:06.120
about AI software development, right?
link |
02:36:07.960
So just like object oriented programming
link |
02:36:10.320
was different than imperative programming.
link |
02:36:12.720
And now object oriented programmers all use these
link |
02:36:16.720
frameworks to do things rather than just libraries even.
link |
02:36:20.680
You know, shifting to agent based programming
link |
02:36:23.120
where an AI agent is asking other, like, live real time
link |
02:36:26.600
evolving agents for feedback on what they're doing.
link |
02:36:29.960
That's a different way of thinking.
link |
02:36:31.480
I mean, it's not a new one.
link |
02:36:32.960
There was loads of papers on agent based programming
link |
02:36:35.320
in the 80s and onward.
link |
02:36:37.120
But if you're willing to shift to an agent based model
link |
02:36:41.520
of development, then you can put less and less in your AI
link |
02:36:45.920
and rely more and more on interactive calls
link |
02:36:48.600
to other AIs running in the network.
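A hypothetical sketch of that agent-based style follows: an orchestrating agent keeps almost no logic of its own and outsources subtasks to whichever specialist agents a registry advertises; the registry and agent interfaces are illustrative, not a real platform API.

```python
# A sketch of the agent-based style: a thin orchestrating agent outsources
# subtasks to whichever specialist agents a registry advertises. The registry
# and the agents here are illustrative placeholders, not a real platform API.
from typing import Callable, Dict

registry: Dict[str, Callable[[str], str]] = {}  # capability -> callable agent endpoint

def advertise(capability: str):
    def wrap(fn):
        registry[capability] = fn
        return fn
    return wrap

@advertise("translate")
def translator_agent(text: str) -> str:
    return f"[translated] {text}"  # placeholder for a real translation model

@advertise("sentiment")
def sentiment_agent(text: str) -> str:
    return "positive" if "good" in text else "neutral"  # placeholder classifier

def orchestrator_agent(text: str) -> str:
    """Keeps almost no logic of its own; composes calls to other agents."""
    translated = registry["translate"](text)
    mood = registry["sentiment"](translated)
    return f"{translated} (sentiment: {mood})"

print(orchestrator_agent("this is a good day"))
```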
link |
02:36:51.440
And of course, that's not fully manifested yet
link |
02:36:54.560
because although we've rolled out a nice working version
link |
02:36:57.640
of SingularityNet platform,
link |
02:36:59.760
there's only 50 to 100 AIs running in there now.
link |
02:37:03.760
There's not tens of thousands of AIs.
link |
02:37:05.880
So we don't have the critical mass
link |
02:37:08.240
for the whole society of mind to be doing
link |
02:37:11.120
what we want to do.
link |
02:37:11.960
Yeah, the magic really happens
link |
02:37:13.400
when there's just a huge number of agents.
link |
02:37:15.320
Yeah, yeah, exactly.
link |
02:37:16.680
In terms of data, we're partnering closely
link |
02:37:19.600
with another blockchain project called Ocean Protocol.
link |
02:37:23.520
And Ocean Protocol, that's the project of Trent McConaghy
link |
02:37:27.240
who developed BigchainDB,
link |
02:37:28.720
which is a blockchain based database.
link |
02:37:30.800
So Ocean Protocol is basically blockchain based big data
link |
02:37:35.440
and aims at making it efficient for different AI processes
link |
02:37:39.440
or statistical processes or whatever
link |
02:37:41.240
to share large data sets.
link |
02:37:44.080
Or if one process can send a clone of itself
link |
02:37:46.600
to work on the other guy's data set
link |
02:37:48.200
and send results back and so forth.
link |
02:37:50.600
So you have the data lake,
link |
02:37:55.560
so this is the data ocean, right?
link |
02:37:56.920
So again, by getting Ocean and SingularityNet
link |
02:37:59.760
to interoperate, we're aiming to take into account
link |
02:38:03.760
the big data aspect also.
link |
02:38:05.840
But it's quite challenging
link |
02:38:08.240
because to build this whole decentralized
link |
02:38:10.120
blockchain based infrastructure,
link |
02:38:12.400
I mean, your competitors are like Google, Microsoft,
link |
02:38:14.960
Alibaba and Amazon, which have so much money
link |
02:38:17.960
to put behind their centralized infrastructures,
link |
02:38:20.560
plus they're solving simpler algorithmic problems
link |
02:38:23.360
because making it centralized in some ways is easier, right?
link |
02:38:27.360
So there are very major computer science challenges.
link |
02:38:32.360
And I think what you saw with the whole ICO boom
link |
02:38:35.760
in the blockchain and cryptocurrency world
link |
02:38:37.880
is a lot of young hackers who were hacking Bitcoin
link |
02:38:42.040
or Ethereum, and they see, well,
link |
02:38:43.840
why don't we make this decentralized on blockchain?
link |
02:38:46.800
Then after they raised some money through an ICO,
link |
02:38:48.720
they realize how hard it is.
link |
02:38:49.880
And it's like, actually we're wrestling
link |
02:38:52.040
with incredibly hard computer science
link |
02:38:54.680
and software engineering and distributed systems problems,
link |
02:38:58.720
which can be solved, but they're just very difficult
link |
02:39:02.560
to solve.
link |
02:39:03.400
And in some cases, the individuals who started
link |
02:39:05.800
those projects were not well equipped
link |
02:39:08.760
to actually solve the problems that they wanted to solve.
link |
02:39:12.320
So you think, would you say that's the main bottleneck?
link |
02:39:14.560
If you look at the future of currency,
link |
02:39:19.560
the question is, well...
link |
02:39:21.040
Currency, the main bottleneck is politics.
link |
02:39:23.800
It's governments and the bands of armed thugs
link |
02:39:26.440
that will shoot you if you bypass their currency restriction.
link |
02:39:29.840
That's right.
link |
02:39:30.680
So like your sense is that versus the technical challenges,
link |
02:39:33.760
because you kind of just suggested
link |
02:39:34.840
the technical challenges are quite high as well.
link |
02:39:36.560
I mean, for making a distributed money,
link |
02:39:39.000
you could do that on Algorand right now.
link |
02:39:41.280
I mean, so that while Ethereum is too slow,
link |
02:39:44.760
there's Algorand and there's a few other more modern,
link |
02:39:47.240
more scalable blockchains that would work fine
link |
02:39:49.360
for a decentralized global currency.
link |
02:39:53.640
So I think there were technical bottlenecks
link |
02:39:56.480
to that two years ago.
link |
02:39:57.920
And maybe Ethereum 2.0 will be as fast as Algorand.
link |
02:40:00.760
I don't know, that's not fully written yet, right?
link |
02:40:04.160
So I think the obstacle to currency
link |
02:40:07.520
being put on the blockchain is that...
link |
02:40:09.400
Is the other stuff you mentioned.
link |
02:40:10.240
I mean, currency will be on the blockchain.
link |
02:40:11.760
It'll just be on the blockchain in a way
link |
02:40:13.840
that enforces centralized control
link |
02:40:16.520
and government hegemony rather than otherwise.
link |
02:40:18.320
Like the e-RMB will probably be the first global,
link |
02:40:20.920
the first currency on the blockchain.
link |
02:40:22.200
The e-ruble maybe next.
link |
02:40:23.360
There are any...
link |
02:40:24.200
E-ruble?
link |
02:40:25.040
Yeah, yeah, yeah.
link |
02:40:25.860
I mean, the point is...
link |
02:40:26.700
Oh, that's hilarious.
link |
02:40:27.540
Digital currency, you know, makes total sense,
link |
02:40:30.720
but they would rather do it in the way
link |
02:40:32.160
that Putin and Xi Jinping have access
link |
02:40:34.720
to the global keys for everything, right?
link |
02:40:37.840
So, and then the analogy to that in terms of SingularityNet,
link |
02:40:42.040
I mean, there's Echoes.
link |
02:40:43.600
I think you've mentioned before that Linux gives you hope.
link |
02:40:47.200
AI is not as heavily regulated as money, right?
link |
02:40:49.960
Not yet, right?
link |
02:40:51.000
Not yet.
link |
02:40:52.000
Oh, that's a lot slipperier than money too, right?
link |
02:40:54.240
I mean, money is easier to regulate
link |
02:40:58.280
because it's kind of easier to define,
link |
02:41:00.800
whereas AI is, it's almost everywhere inside everything.
link |
02:41:04.120
Where's the boundary between AI and software, right?
link |
02:41:06.440
I mean, if you're gonna regulate AI,
link |
02:41:09.200
there's no IQ test for every hardware device
link |
02:41:11.720
that has a learning algorithm.
link |
02:41:12.800
You're gonna be putting like hegemonic regulation
link |
02:41:15.720
on all software.
link |
02:41:16.760
And I don't rule out that that can happen.
link |
02:41:18.880
And the adaptive software.
link |
02:41:21.060
Yeah, but how do you tell if a software is adaptive
link |
02:41:23.360
and what, every software is gonna be adaptive, I mean.
link |
02:41:26.100
Or maybe they, maybe the, you know,
link |
02:41:28.800
maybe we're living in the golden age of open source
link |
02:41:31.120
that will not always be open.
link |
02:41:33.360
Maybe it'll become centralized control
link |
02:41:35.640
of software by governments.
link |
02:41:37.020
It is entirely possible.
link |
02:41:38.840
And part of what I think we're doing
link |
02:41:42.200
with things like SingularityNet protocol
link |
02:41:45.220
is creating a tool set that can be used
link |
02:41:50.220
to counteract that sort of thing.
link |
02:41:52.740
Say a similar thing about mesh networking, right?
link |
02:41:55.620
Plays a minor role now, the ability to access internet
link |
02:41:59.060
like directly phone to phone.
link |
02:42:01.000
On the other hand, if your government starts trying
link |
02:42:03.740
to control your use of the internet,
link |
02:42:06.060
suddenly having mesh networking there
link |
02:42:09.220
can be very convenient, right?
link |
02:42:10.800
And so right now, something like a decentralized
link |
02:42:15.360
blockchain based AGI framework or narrow AI framework,
link |
02:42:20.300
it's cool, it's nice to have.
link |
02:42:22.660
On the other hand, if governments start trying
link |
02:42:25.140
to clamp down on my AI interoperating
link |
02:42:28.740
with someone's AI in Russia or somewhere, right?
link |
02:42:31.460
Then suddenly having a decentralized protocol
link |
02:42:35.500
that nobody owns or controls
link |
02:42:37.940
becomes an extremely valuable part of the tool set.
link |
02:42:41.180
And, you know, we've put that out there now.
link |
02:42:43.780
It's not perfect, but it operates.
link |
02:42:46.980
And, you know, it's pretty blockchain agnostic.
link |
02:42:51.100
So we're talking to Algorand about making part
link |
02:42:53.420
of SingularityNet run on Algorand.
link |
02:42:56.220
My good friend Toufi Saliba has a cool blockchain project
link |
02:43:00.060
called Toda, which is a blockchain
link |
02:43:02.220
without a distributed ledger.
link |
02:43:03.540
It's like a whole other architecture.
link |
02:43:05.180
So there's a lot of more advanced things you can do
link |
02:43:08.300
in the blockchain world.
link |
02:43:09.820
SingularityNet could be ported to a whole bunch of,
link |
02:43:13.500
it could be made multi chain, ported
link |
02:43:14.980
to a whole bunch of different blockchains.
link |
02:43:17.100
And there's a lot of potential and a lot of importance
link |
02:43:21.540
to putting this kind of tool set out there.
link |
02:43:23.620
If you compare to OpenCog, what you could see is
link |
02:43:26.660
OpenCog allows tight integration of a few AI algorithms
link |
02:43:32.220
that share the same knowledge store in real time, in RAM.
link |
02:43:36.860
SingularityNet allows loose integration
link |
02:43:40.900
of multiple different AIs.
link |
02:43:42.660
They can share knowledge, but they're mostly not gonna
link |
02:43:45.620
be sharing knowledge in RAM on the same machine.
link |
02:43:49.980
And I think what we're gonna have is a network
link |
02:43:53.060
of network of networks, right?
link |
02:43:54.500
Like, I mean, you have the knowledge graph
link |
02:43:57.260
inside the OpenCog system,
link |
02:44:00.900
and then you have a network of machines
link |
02:44:03.220
inside a distributed OpenCog mind,
link |
02:44:05.900
but then that OpenCog will interface with other AIs
link |
02:44:10.260
doing deep neural nets or custom biology data analysis
link |
02:44:14.420
or whatever they're doing in SingularityNet,
link |
02:44:17.620
which is a looser integration of different AIs,
link |
02:44:21.020
some of which may be their own networks, right?
link |
02:44:24.060
And I think at a very loose analogy,
link |
02:44:27.900
you could see that in the human body.
link |
02:44:29.380
Like the brain has regions like cortex or hippocampus,
link |
02:44:33.820
which tightly interconnects like cortical columns
link |
02:44:36.820
within the cortex, for example.
link |
02:44:39.140
Then there's looser connection
link |
02:44:40.860
within the different lobes of the brain,
link |
02:44:42.700
and then the brain interconnects with the endocrine system
link |
02:44:45.020
and different parts of the body even more loosely.
link |
02:44:48.260
Then your body interacts even more loosely
link |
02:44:50.780
with the other people that you talk to.
link |
02:44:53.300
So you often have networks within networks within networks
link |
02:44:56.460
with progressively looser coupling
link |
02:44:59.340
as you get higher up in that hierarchy.
link |
02:45:02.740
I mean, you have that in biology,
link |
02:45:03.860
you have that in the internet as a just networking medium.
link |
02:45:08.180
And I think that's what we're gonna have
link |
02:45:10.940
in the network of software processes leading to AGI.
link |
02:45:15.940
That's a beautiful way to see the world.
link |
02:45:17.780
Again, the same similar question is with OpenCog.
link |
02:45:21.900
If somebody wanted to build an AI system
link |
02:45:24.620
and plug into the SingularityNet,
link |
02:45:27.020
what would you recommend?
link |
02:45:28.620
Yeah, so that's much easier.
link |
02:45:30.180
I mean, OpenCog is still a research system.
link |
02:45:33.860
So it takes some expertise to, and sometimes,
link |
02:45:36.660
we have tutorials, but it's somewhat cognitively
link |
02:45:40.220
labor intensive to get up to speed on OpenCog.
link |
02:45:44.340
And I mean, what's one of the things we hope to change
link |
02:45:46.620
with the true AGI OpenCog 2.0 version
link |
02:45:49.900
is just make the learning curve more similar
link |
02:45:52.740
to TensorFlow or Torch or something.
link |
02:45:54.620
Right now, OpenCog is amazingly powerful,
link |
02:45:57.340
but not simple to deal with.
link |
02:46:00.620
On the other hand, SingularityNet,
link |
02:46:03.700
as an open platform was developed a little more
link |
02:46:08.260
with usability in mind over the blockchain,
link |
02:46:10.580
it's still kind of a pain.
link |
02:46:11.660
So I mean, if you're a command line guy,
link |
02:46:14.940
there's a command line interface.
link |
02:46:16.180
It's quite easy to take any AI that has an API
link |
02:46:20.060
and lives in a Docker container and put it online anywhere.
link |
02:46:23.540
And then it joins the global SingularityNet.
link |
02:46:25.740
And anyone who puts a request for services
link |
02:46:28.980
out into the SingularityNet,
link |
02:46:30.180
the peer to peer discovery mechanism will find
link |
02:46:32.340
your AI and if it does what was asked,
link |
02:46:35.740
it can then start a conversation with your AI
link |
02:46:38.980
about whether it wants to ask your AI to do something for it,
link |
02:46:42.180
how much it would cost and so on.
link |
02:46:43.580
So that's fairly simple.
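A hypothetical sketch of that kind of onboarding follows: an existing AI with an API is wrapped behind a tiny HTTP service so it could live in a container and answer a discovery request with its services and price, using only Python's standard library; this is not the actual SingularityNet onboarding flow.

```python
# A sketch of wrapping an existing AI behind a tiny HTTP API so it could be
# containerized and announced to a discovery mechanism. The /describe payload
# (service name, price) and the endpoint layout are hypothetical, not the
# actual SingularityNet onboarding flow.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def my_ai(text: str) -> str:
    return text.upper()  # placeholder for the real model behind the API

class AgentHandler(BaseHTTPRequestHandler):
    def _send_json(self, payload, status=200):
        body = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        if self.path == "/describe":  # what a discovery request might ask for
            self._send_json({"service": "shouting", "price_per_call": 0.001})
        else:
            self._send_json({"error": "not found"}, status=404)

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")
        self._send_json({"result": my_ai(request.get("text", ""))})

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 7000), AgentHandler).serve_forever()
```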
link |
02:46:46.860
If you wrote an AI and want it listed
link |
02:46:50.380
on like official SingularityNet marketplace,
link |
02:46:53.020
which is on our website,
link |
02:46:55.140
then we have a publisher portal
link |
02:46:57.820
and then there's a KYC process to go through
link |
02:47:00.220
because then we have some legal liability
link |
02:47:02.420
for what goes on that website.
link |
02:47:04.700
So in a way that's been an education too.
link |
02:47:07.340
There's sort of two layers.
link |
02:47:08.420
Like there's the open decentralized protocol.
link |
02:47:11.700
And there's the market.
link |
02:47:12.980
Yeah, anyone can use the open decentralized protocol.
link |
02:47:15.540
So say some developers from Iran
link |
02:47:17.980
and there's brilliant AI guys
link |
02:47:19.460
in University of Isfahan in Tehran,
link |
02:47:21.780
they can put their stuff on SingularityNet protocol
link |
02:47:24.660
and just like they can put something on the internet, right?
link |
02:47:27.100
I don't control it.
link |
02:47:28.460
But if we're gonna list something
link |
02:47:29.740
on the SingularityNet marketplace
link |
02:47:32.020
and put a little picture and a link to it,
link |
02:47:34.300
then if I put some Iranian AI geniuses code on there,
link |
02:47:38.860
then Donald Trump can send a bunch of jackbooted thugs
link |
02:47:41.500
to my house to arrest me for doing business with Iran, right?
link |
02:47:45.300
So, I mean, we already see in some ways
link |
02:47:48.980
the value of having a decentralized protocol
link |
02:47:51.100
because what I hope is that someone in Iran
link |
02:47:53.740
will put online an Iranian SingularityNet marketplace, right?
link |
02:47:57.340
Which you can pay in the cryptographic token,
link |
02:47:59.700
which is not owned by any country.
link |
02:48:01.540
And then if you're in like Congo or somewhere
link |
02:48:04.620
that doesn't have any problem with Iran,
link |
02:48:06.780
you can subcontract AI services
link |
02:48:09.220
that you find on that marketplace, right?
link |
02:48:11.980
Even though US citizens can't by US law.
link |
02:48:16.060
So right now, that's kind of a minor point.
link |
02:48:20.140
As you alluded, if regulations go in the wrong direction,
link |
02:48:24.020
it could become more of a major point.
link |
02:48:25.540
But I think it also is the case
link |
02:48:28.060
that having these workarounds to regulations in place
link |
02:48:31.860
is a defense mechanism against those regulations
link |
02:48:35.180
being put into place.
link |
02:48:36.660
And you can see that in the music industry, right?
link |
02:48:39.220
I mean, Napster just happened and BitTorrent just happened.
link |
02:48:43.020
And now most people in my kid's generation,
link |
02:48:45.980
they're baffled by the idea of paying for music, right?
link |
02:48:48.500
I mean, my dad pays for music.
link |
02:48:51.380
I mean, but that's because these decentralized mechanisms
link |
02:48:55.700
happened and then the regulations followed, right?
link |
02:48:58.940
And the regulations would be very different
link |
02:49:01.220
if they'd been put into place before there was Napster
link |
02:49:04.380
and BitTorrent and so forth.
link |
02:49:05.500
So in the same way, we gotta put AI out there
link |
02:49:08.620
in a decentralized vein and big data out there
link |
02:49:11.060
in a decentralized vein now,
link |
02:49:13.780
so that the most advanced AI in the world
link |
02:49:16.300
is fundamentally decentralized.
link |
02:49:18.300
And if that's the case, that's just the reality
link |
02:49:20.940
the regulators have to deal with.
link |
02:49:23.740
And then as in the music case,
link |
02:49:25.460
they're gonna come up with regulations
link |
02:49:27.460
that sort of work with the decentralized reality.
link |
02:49:32.860
Beautiful.
link |
02:49:34.020
You are the chief scientist of Hanson Robotics.
link |
02:49:37.980
You're still involved with Hanson Robotics,
link |
02:49:40.500
doing a lot of really interesting stuff there.
link |
02:49:42.740
This is for people who don't know the company
link |
02:49:44.500
that created Sophia the Robot.
link |
02:49:47.380
Can you tell me who Sophia is?
link |
02:49:51.460
I'd rather start by telling you who David Hanson is.
link |
02:49:54.140
Because David is the brilliant mind behind the Sophia Robot.
link |
02:49:58.780
And he remains, so far, he remains more interesting
link |
02:50:01.980
than his creation, although she may be improving
link |
02:50:05.900
faster than he is, actually.
link |
02:50:07.380
I mean, he's a...
link |
02:50:08.780
So yeah, I met David maybe 2007 or something
link |
02:50:15.300
at some futurist conference we were both speaking at.
link |
02:50:18.420
And I could see we had a great deal in common.
link |
02:50:22.860
I mean, we were both kind of crazy,
link |
02:50:25.020
but we both had a passion for AGI and the singularity.
link |
02:50:31.540
And we were both huge fans of the work
link |
02:50:33.580
of Philip K. Dick, the science fiction writer.
link |
02:50:36.900
And I wanted to create benevolent AGI
link |
02:50:40.780
that would create massively better life
link |
02:50:44.820
for all humans and all sentient beings,
link |
02:50:47.580
including animals, plants, and superhuman beings.
link |
02:50:50.060
And David, he wanted exactly the same thing,
link |
02:50:53.780
but he had a different idea of how to do it.
link |
02:50:56.380
He wanted to get computational compassion.
link |
02:50:59.420
Like he wanted to get machines that would love people
link |
02:51:03.940
and empathize with people.
link |
02:51:05.820
And he thought the way to do that was to make a machine
link |
02:51:08.220
that could look people eye to eye, face to face,
link |
02:51:12.220
look at people and make people love the machine,
link |
02:51:15.700
and the machine loves the people back.
link |
02:51:17.540
So I thought that was very different way of looking at it
link |
02:51:21.500
because I'm very math oriented.
link |
02:51:22.940
And I'm just thinking like,
link |
02:51:24.740
what is the abstract cognitive algorithm
link |
02:51:28.100
that will let the system, you know,
link |
02:51:29.420
internalize the complex patterns of human values,
link |
02:51:32.580
blah, blah, blah.
link |
02:51:33.420
Whereas he's like, look you in the face and the eye
link |
02:51:35.980
and love you, right?
link |
02:51:37.380
So we hit it off quite well.
link |
02:51:41.340
And we talked to each other off and on.
link |
02:51:44.460
Then I moved to Hong Kong in 2011.
link |
02:51:49.380
So I've been living all over the place.
link |
02:51:53.380
I've been in Australia and New Zealand in my academic career.
link |
02:51:56.780
Then in Las Vegas for a while.
link |
02:51:59.380
Was in New York in the late 90s
link |
02:52:00.860
starting my entrepreneurial career.
link |
02:52:03.660
Was in DC for nine years
link |
02:52:05.020
doing a bunch of US government consulting stuff.
link |
02:52:07.940
Then moved to Hong Kong in 2011,
link |
02:52:12.060
mostly because I met a Chinese girl
link |
02:52:13.900
who I fell in love with and we got married.
link |
02:52:16.060
She's actually not from Hong Kong.
link |
02:52:17.380
She's from mainland China,
link |
02:52:18.380
but we converged together in Hong Kong.
link |
02:52:21.340
Still married now, I have a two year old baby.
link |
02:52:24.180
So went to Hong Kong to see about a girl, I guess.
link |
02:52:26.820
Yeah, pretty much, yeah.
link |
02:52:29.060
And on the other hand,
link |
02:52:31.060
I started doing some cool research there
link |
02:52:33.100
with Gino Yu at Hong Kong Polytechnic University.
link |
02:52:36.540
I got involved with a project called Aidyia
link |
02:52:38.300
using machine learning for stock and futures prediction,
link |
02:52:41.220
which was quite interesting.
link |
02:52:43.140
And I also got to know something
link |
02:52:45.100
about the consumer electronics
link |
02:52:47.420
and hardware manufacturer ecosystem in Shenzhen
link |
02:52:50.220
across the border,
link |
02:52:51.060
which is like the only place in the world
link |
02:52:53.260
that makes sense to make complex consumer electronics
link |
02:52:56.500
at large scale and low cost.
link |
02:52:57.860
It's just, it's astounding the hardware ecosystem
link |
02:53:00.900
that you have in South China.
link |
02:53:03.220
Like US people here cannot imagine what it's like.
link |
02:53:07.220
So David was starting to explore that also.
link |
02:53:12.060
I invited him to Hong Kong to give a talk
link |
02:53:13.860
at Hong Kong PolyU,
link |
02:53:15.660
and I introduced him in Hong Kong to some investors
link |
02:53:19.220
who were interested in his robots.
link |
02:53:21.580
And he didn't have Sophia then,
link |
02:53:23.540
he had a robot of Philip K. Dick,
link |
02:53:25.140
our favorite science fiction writer.
link |
02:53:26.980
He had a robot Einstein,
link |
02:53:28.180
he had some little toy robots
link |
02:53:29.540
that looked like his son Zeno.
link |
02:53:31.940
So through the investors I connected him to,
link |
02:53:35.620
he managed to get some funding
link |
02:53:37.500
to basically port Hanson Robotics to Hong Kong.
link |
02:53:40.660
And when he first moved to Hong Kong,
link |
02:53:42.660
I was working on AGI research
link |
02:53:45.300
and also on this machine learning trading project.
link |
02:53:49.340
So I didn't get that tightly involved
link |
02:53:50.940
with Hanson Robotics.
link |
02:53:52.980
But as I hung out with David more and more,
link |
02:53:56.540
as we were both there in the same place,
link |
02:53:59.180
I started to get,
link |
02:54:01.260
I started to think about what you could do
link |
02:54:04.620
to make his robots smarter than they were.
link |
02:54:08.500
And so we started working together
link |
02:54:10.340
and for a few years I was chief scientist
link |
02:54:12.780
and head of software at Hanson Robotics.
link |
02:54:15.740
Then when I got deeply into the blockchain side of things,
link |
02:54:19.420
I stepped back from that and cofounded Singularity Net.
link |
02:54:24.340
David Hanson was also one of the cofounders
link |
02:54:26.340
of Singularity Net.
link |
02:54:27.780
So part of our goal there had been
link |
02:54:30.060
to make the blockchain based like cloud mind platform
link |
02:54:33.940
for Sophia and the other Hanson robots.
link |
02:54:37.020
Sophia would be just one of the robots in Singularity Net.
link |
02:54:41.780
Yeah, yeah, yeah, exactly.
link |
02:54:43.300
Sophia, many copies of the Sophia robot
link |
02:54:47.380
would be among the user interfaces
link |
02:54:51.500
to the globally distributed Singularity Net cloud mind.
link |
02:54:54.420
And I mean, David and I talked about that
link |
02:54:57.140
for quite a while before cofounding Singularity Net.
link |
02:55:01.540
By the way, in his vision and your vision,
link |
02:55:04.380
was Sophia tightly coupled to a particular AI system
link |
02:55:09.580
or was the idea that you can plug,
link |
02:55:11.660
you could just keep plugging in different AI systems
link |
02:55:14.140
within the head of it?
link |
02:55:15.100
David's view was always that Sophia would be a platform,
link |
02:55:22.940
much like say the Pepper robot is a platform from SoftBank.
link |
02:55:26.820
Should be a platform with a set of nicely designed APIs
link |
02:55:31.660
that anyone can use to experiment
link |
02:55:33.540
with their different AI algorithms on that platform.
link |
02:55:38.620
And Singularity Net, of course, fits right into that, right?
link |
02:55:41.580
Because Singularity Net, it's an API marketplace.
link |
02:55:44.060
So anyone can put their AI on there.
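As a rough illustration of that platform idea, here is a toy plugin registry in Python. It is not the actual Sophia or SingularityNet API, and every name in it is invented; the point is only that the robot exposes a fixed interface and any third-party algorithm matching that interface can be swapped in.

# Purely illustrative toy registry, not the real Sophia or SingularityNet API:
# the "robot as platform" idea of a fixed interface that any third-party
# AI algorithm can be plugged into and swapped at runtime.

registry = {}

def register(name):
    def wrapper(fn):
        registry[name] = fn
        return fn
    return wrapper

@register("echo_responder")
def echo_responder(utterance):
    return "You said: " + utterance

@register("reverse_responder")
def reverse_responder(utterance):
    return utterance[::-1]

def respond(utterance, algorithm="echo_responder"):
    # The platform only cares that the plugged-in algorithm maps text to text.
    return registry[algorithm](utterance)

if __name__ == "__main__":
    print(respond("hello", algorithm="echo_responder"))
    print(respond("hello", algorithm="reverse_responder"))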
link |
02:55:46.220
OpenCog is a little bit different.
link |
02:55:49.020
I mean, David likes it, but I'd say it's my thing.
link |
02:55:52.140
It's not his.
link |
02:55:52.980
Like David has a little more passion
link |
02:55:55.100
for biologically based approaches to AI than I do,
link |
02:55:58.700
which makes sense.
link |
02:56:00.140
I mean, he's really into human physiology and biology.
link |
02:56:02.860
He's a character sculptor, right?
link |
02:56:05.140
So yeah, he's interested in,
link |
02:56:07.860
but he also worked a lot with rule based
link |
02:56:09.700
and logic based AI systems too.
link |
02:56:11.420
So yeah, he's interested in not just Sophia,
link |
02:56:14.860
but all the Hanson robots as a powerful social
link |
02:56:17.780
and emotional robotics platform.
link |
02:56:21.220
And what I saw in Sophia
link |
02:56:26.220
was a way to get AI algorithms out there
link |
02:56:32.140
in front of a whole lot of different people
link |
02:56:34.660
in an emotionally compelling way.
link |
02:56:36.300
And part of my thought was really kind of abstract
link |
02:56:39.820
connected to AGI ethics.
link |
02:56:41.740
And many people are concerned AGI is gonna enslave everybody
link |
02:56:46.940
or turn everybody into computronium
link |
02:56:50.060
to make extra hard drives for their cognitive engine
link |
02:56:54.740
or whatever.
link |
02:56:55.580
And emotionally I'm not driven to that sort of paranoia.
link |
02:57:01.660
I'm really just an optimist by nature,
link |
02:57:04.100
but intellectually I have to assign a non zero probability
link |
02:57:09.220
to those sorts of nasty outcomes.
link |
02:57:12.140
Cause if you're making something 10 times as smart as you,
link |
02:57:14.900
how can you know what it's gonna do?
link |
02:57:16.300
There's an irreducible uncertainty there
link |
02:57:19.780
just as my dog can't predict what I'm gonna do tomorrow.
link |
02:57:22.780
So it seemed to me that based on our current state
link |
02:57:26.420
of knowledge, the best way to bias the AGI as we create
link |
02:57:32.500
toward benevolence would be to infuse them with love
link |
02:57:38.820
and compassion the way that we do our own children.
link |
02:57:41.620
So you want to interact with AIs in the context
link |
02:57:45.820
of doing compassionate, loving and beneficial things.
link |
02:57:49.900
And in that way, just as your children will learn
link |
02:57:52.140
by doing compassionate, beneficial,
link |
02:57:53.740
loving things alongside you.
link |
02:57:55.940
And that way the AI will learn in practice
link |
02:57:58.660
what it means to be compassionate, beneficial and loving.
link |
02:58:02.340
It will get a sort of ingrained intuitive sense of this,
link |
02:58:06.380
which it can then abstract in its own way
link |
02:58:09.260
as it gets more and more intelligent.
link |
02:58:11.180
Now, David saw this the same way.
link |
02:58:12.780
That's why he came up with the name Sophia,
link |
02:58:15.540
which means wisdom.
link |
02:58:18.140
So it seemed to me making these beautiful, loving robots
link |
02:58:22.780
to be rolled out for beneficial applications
link |
02:58:26.060
would be the perfect way to roll out early stage AGI systems
link |
02:58:31.260
so they can learn from people
link |
02:58:33.940
and not just learn factual knowledge,
link |
02:58:35.420
but learn human values and ethics from people
link |
02:58:38.580
while being their home service robots,
link |
02:58:41.540
their education assistants, their nursing robots.
link |
02:58:44.100
So that was the grand vision.
link |
02:58:46.060
Now, if you've ever worked with robots,
link |
02:58:48.620
the reality is quite different, right?
link |
02:58:50.420
Like the first principle is the robot is always broken.
link |
02:58:53.220
I mean, I worked with robots in the 90s a bunch
link |
02:58:57.660
when you had to solder them together yourself
link |
02:58:59.540
and I'd put neural nets doing reinforcement learning
link |
02:59:02.580
on like overturned salad bowl type robots
link |
02:59:05.940
back in the 90s when I was a professor.
link |
02:59:09.300
Things of course advanced a lot, but...
link |
02:59:12.020
But the principle still holds.
link |
02:59:13.180
The principle that the robot's always broken still holds.
link |
02:59:16.500
Yeah, so faced with the reality of making Sophia do stuff,
link |
02:59:21.020
many of my robo AGI aspirations were temporarily cast aside.
link |
02:59:26.620
And I mean, there's just a practical problem
link |
02:59:30.660
of making this robot interact in a meaningful way
link |
02:59:33.700
because like, you put nice computer vision on there,
link |
02:59:36.700
but there's always glare.
link |
02:59:38.140
And then, or you have a dialogue system,
link |
02:59:41.420
but at the time I was there,
link |
02:59:43.740
like no speech to text algorithm could deal
link |
02:59:46.580
with Hong Kongese people's English accents.
link |
02:59:49.780
So the speech to text was always bad.
link |
02:59:51.620
So the robot always sounded stupid
link |
02:59:53.620
because it wasn't getting the right text, right?
link |
02:59:55.620
So I started to view that really
link |
02:59:58.020
as what in software engineering you call a walking skeleton,
link |
03:00:02.820
which is maybe the wrong metaphor to use for Sophia
link |
03:00:05.420
or maybe the right one.
link |
03:00:06.980
I mean, the walking skeleton idea
link |
03:00:08.420
in software development is this:
link |
03:00:10.620
if you're building a complex system, how do you get started?
link |
03:00:14.020
One way is to first build part one well,
link |
03:00:16.140
then build part two well, then build part three well
link |
03:00:18.340
and so on.
link |
03:00:19.260
And the other way is you make like a simple version
link |
03:00:22.060
of the whole system and put something in the place
link |
03:00:24.820
of every part the whole system will need
link |
03:00:27.300
so that you have a whole system that does something.
link |
03:00:29.660
And then you work on improving each part
link |
03:00:31.900
in the context of that whole integrated system.
link |
03:00:34.340
So that's what we did on a software level in Sophia.
link |
03:00:38.140
We made like a walking skeleton software system
link |
03:00:41.580
where so there's something that sees,
link |
03:00:43.100
there's something that hears, there's something that moves,
link |
03:00:46.220
there's something that remembers,
link |
03:00:48.180
there's something that learns.
link |
03:00:49.980
You put a simple version of each thing in there
link |
03:00:52.460
and you connect them all together
link |
03:00:54.420
so that the system will do its thing.
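To make the walking skeleton pattern concrete, here is a minimal sketch in Python. Every class and behavior is an invented placeholder rather than Hanson Robotics or OpenCog code; the point is that a trivial stand-in for each subsystem gets wired into one end-to-end loop, so each piece can later be upgraded without breaking the whole.

# Walking-skeleton pattern: a trivial placeholder for every subsystem,
# wired into one end-to-end loop. Each stub can later be swapped for a
# real vision, speech, memory, or learning module without changing the loop.

class Vision:
    def see(self):
        return {"face_detected": True}      # stub: pretend we saw a face

class Hearing:
    def listen(self):
        return "hello robot"                # stub: pretend we heard speech

class Memory:
    def __init__(self):
        self.events = []
    def remember(self, event):
        self.events.append(event)

class Learning:
    def __init__(self):
        self.counts = {}
    def update(self, utterance):
        self.counts[utterance] = self.counts.get(utterance, 0) + 1

class Motion:
    def act(self, percept):
        return "smile" if percept["face_detected"] else "idle"

def run_once(vision, hearing, memory, learning, motion):
    percept = vision.see()
    utterance = hearing.listen()
    memory.remember((percept, utterance))
    learning.update(utterance)
    return motion.act(percept)

if __name__ == "__main__":
    print(run_once(Vision(), Hearing(), Memory(), Learning(), Motion()))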
link |
03:00:56.660
So there's a lot of AI in there.
link |
03:00:59.660
There's not any AGI in there.
link |
03:01:01.380
I mean, there's computer vision to recognize people's faces,
link |
03:01:04.660
recognize when someone comes in the room and leaves,
link |
03:01:07.660
trying to recognize whether two people are together or not.
link |
03:01:10.740
I mean, the dialogue system,
link |
03:01:13.300
it's a mix of like hand coded rules with deep neural nets
link |
03:01:18.780
that come up with their own responses.
link |
03:01:21.580
And there's some attempt to have a narrative structure
link |
03:01:25.660
and sort of try to pull the conversation
link |
03:01:28.420
into something with a beginning, middle and end
link |
03:01:30.780
and this sort of story arc.
link |
03:01:32.180
So it's...
link |
03:01:33.500
I mean, like if you look at the Loebner Prize and the systems
link |
03:01:37.620
that do best on the Turing Test currently,
link |
03:01:39.060
they're heavily rule based
link |
03:01:40.540
because like you had said, narrative structure
link |
03:01:43.900
to create compelling conversations,
link |
03:01:45.700
you currently, neural networks cannot do that well,
link |
03:01:48.420
even with Google Meena.
link |
03:01:50.660
When you actually look at full scale conversations,
link |
03:01:53.060
it's just not...
link |
03:01:53.900
Yeah, this is the thing.
link |
03:01:54.740
So we've been, I've actually been running an experiment
link |
03:01:57.900
the last couple of weeks taking Sophia's chat bot
link |
03:02:01.420
and then Facebook's Transformer chat bot,
link |
03:02:03.740
for which they open sourced the model.
link |
03:02:05.260
We've had them chatting to each other
link |
03:02:06.780
for a number of weeks on the server just...
link |
03:02:08.860
That's funny.
link |
03:02:10.020
We're generating training data of what Sophia says
link |
03:02:13.260
in a wide variety of conversations.
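A rough sketch of that self-chat setup, under the assumption that both bots are reachable as simple text-in, text-out functions; the bot functions below are placeholders, not the real Sophia or Facebook models. Two bots exchange messages and the pairs from one side are logged as training data.

# Self-chat loop: two chat bots exchange messages and every (context, reply)
# pair from one of them is logged as training data. Both bot functions here
# are made-up stand-ins.

import json

def sophia_bot(message):        # placeholder for Sophia's chat bot
    return "That's interesting, tell me more about " + message.split()[-1]

def transformer_bot(message):   # placeholder for the open-sourced neural bot
    return "I was just thinking about " + message.split()[0] + " myself."

def self_chat(turns=10, seed="Hello, how are you today?"):
    history, data = seed, []
    for i in range(turns):
        reply = sophia_bot(history) if i % 2 == 0 else transformer_bot(history)
        if i % 2 == 0:                       # keep only the Sophia-side pairs
            data.append({"context": history, "reply": reply})
        history = reply
    return data

if __name__ == "__main__":
    print(json.dumps(self_chat(6), indent=2))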
link |
03:02:15.500
But we can see, compared to Sophia's current chat bot,
link |
03:02:20.260
the Facebook deep neural chat bot comes up
link |
03:02:23.460
with a wider variety of fluent sounding sentences.
link |
03:02:27.300
On the other hand, it rambles like mad.
link |
03:02:30.100
The Sophia chat bot, it's a little more repetitive
link |
03:02:33.900
in the sentence structures it uses.
link |
03:02:36.620
On the other hand, it's able to keep like a conversation arc
link |
03:02:39.820
over a much longer, longer period, right?
link |
03:02:42.460
So there...
link |
03:02:43.300
Now, you can probably surmount that using Reformer
link |
03:02:46.620
and like using various other deep neural architectures
link |
03:02:51.140
to improve the way these Transformer models are trained.
link |
03:02:53.980
But in the end, neither one of them really understands
link |
03:02:58.300
what's going on.
link |
03:02:59.140
I mean, that's the challenge I had with Sophia
link |
03:03:02.660
is if I were doing a robotics project aimed at AGI,
link |
03:03:08.340
I would wanna make like a robo toddler
link |
03:03:10.100
that was just learning about what it was seeing.
link |
03:03:11.940
Because then the language is grounded
link |
03:03:13.220
in the experience of the robot.
link |
03:03:14.940
But what Sophia needs to do to be Sophia
link |
03:03:17.740
is talk about sports or the weather or robotics
link |
03:03:21.420
or the conference she's talking at.
link |
03:03:24.100
She needs to be fluent talking about
link |
03:03:26.380
any damn thing in the world.
link |
03:03:28.420
And she doesn't have grounding for all those things.
link |
03:03:32.500
So there's this, just like, I mean, Google Meena
link |
03:03:35.700
and Facebook's chat bot don't have grounding
link |
03:03:37.460
for what they're talking about either.
link |
03:03:40.140
So in a way, the need to speak fluently about things
link |
03:03:45.060
where there's no nonlinguistic grounding
link |
03:03:47.940
pushes what you can do for Sophia in the short term
link |
03:03:53.660
a bit away from AGI.
link |
03:03:56.340
I mean, it pushes you towards IBM Watson situation
link |
03:04:00.900
where you basically have to do heuristic
link |
03:04:02.740
and hard code stuff and rule based stuff.
link |
03:04:05.380
I have to ask you about this, okay.
link |
03:04:07.860
So because in part Sophia is like an art creation
link |
03:04:18.860
because it's beautiful.
link |
03:04:21.260
She's beautiful because she inspires
link |
03:04:24.780
through our human nature of anthropomorphizing things.
link |
03:04:29.540
We immediately see an intelligent being there.
link |
03:04:32.620
Because David is a great sculptor.
link |
03:04:34.100
He is a great sculptor, that's right.
link |
03:04:35.500
So in fact, if Sophia just had nothing inside her head,
link |
03:04:40.820
said nothing, if she just sat there,
link |
03:04:43.260
we'd already ascribe some intelligence to her.
link |
03:04:45.940
There's a long selfie line in front of her
link |
03:04:47.780
after every talk.
link |
03:04:48.740
That's right.
link |
03:04:49.940
So it captivated the imagination of many people.
link |
03:04:53.820
I wasn't gonna say the world,
link |
03:04:54.860
but yeah, I mean a lot of people.
link |
03:04:58.180
Billions of people, which is amazing.
link |
03:05:00.180
It's amazing, right.
link |
03:05:01.940
Now, of course, many people have ascribed
link |
03:05:08.260
essentially AGI type of capabilities to Sophia
link |
03:05:11.060
when they see her.
link |
03:05:12.380
And of course, friendly French folk like Yann LeCun
link |
03:05:19.860
immediately see that of the people from the AI community
link |
03:05:22.820
and get really frustrated because...
link |
03:05:25.900
It's understandable.
link |
03:05:27.060
So what, and then they criticize people like you
link |
03:05:31.700
who sit back and don't say anything about,
link |
03:05:36.100
like basically allow the imagination of the world,
link |
03:05:39.980
allow the world to continue being captivated.
link |
03:05:43.860
So what's your sense of that kind of annoyance
link |
03:05:49.140
that the AI community has?
link |
03:05:51.220
I think there's several parts to my reaction there.
link |
03:05:55.380
First of all, if I weren't involved with Hanson Robotics
link |
03:05:59.820
and didn't know David Hanson personally,
link |
03:06:03.420
I probably would have been very annoyed initially
link |
03:06:06.420
at Sophia as well.
link |
03:06:07.980
I mean, I can understand the reaction.
link |
03:06:09.460
I would have been like, wait,
link |
03:06:11.820
all these stupid people out there think this is an AGI,
link |
03:06:16.260
but it's not an AGI, and they're tricking people into thinking
link |
03:06:19.980
that this very cool robot is an AGI.
link |
03:06:23.060
And now those of us trying to raise funding to build AGI,
link |
03:06:28.180
people will think it's already there and it already works.
link |
03:06:31.180
So on the other hand, I think,
link |
03:06:36.740
even if I weren't directly involved with it,
link |
03:06:38.340
once I dug a little deeper into David and the robot
link |
03:06:41.660
and the intentions behind it,
link |
03:06:43.460
I think I would have stopped being pissed off.
link |
03:06:47.020
Whereas folks like Yann LeCun have remained pissed off
link |
03:06:51.380
after their initial reaction.
link |
03:06:54.460
That's his thing, that's his thing.
link |
03:06:56.100
I think that in particular struck me as somewhat ironic
link |
03:07:01.940
because Yann LeCun is working for Facebook,
link |
03:07:05.620
which is using machine learning to program the brains
link |
03:07:09.020
of the people in the world toward vapid consumerism
link |
03:07:13.340
and political extremism.
link |
03:07:14.860
So if your ethics allows you to use machine learning
link |
03:07:19.660
in such a blatantly destructive way,
link |
03:07:23.460
why would your ethics not allow you to use machine learning
link |
03:07:26.220
to make a lovable theatrical robot
link |
03:07:29.780
that draws some foolish people
link |
03:07:32.100
into its theatrical illusion?
link |
03:07:34.420
Like if the pushback had come from Yoshua Bengio,
link |
03:07:38.780
I would have felt much more humbled by it
link |
03:07:40.900
because he's not using AI for blatant evil, right?
link |
03:07:45.460
On the other hand, he also is a super nice guy
link |
03:07:48.540
and doesn't bother to go out there
link |
03:07:50.860
trashing other people's work for no good reason, right?
link |
03:07:54.420
Shots fired, but I get you.
link |
03:07:55.940
I mean, that's...
link |
03:07:58.020
I mean, if you're gonna ask, I'm gonna answer.
link |
03:08:01.100
No, for sure.
link |
03:08:02.060
I think we'll go back and forth.
link |
03:08:03.300
I'll talk to Yann again.
link |
03:08:04.500
I would add on this though.
link |
03:08:06.060
I mean, David Hanson is an artist
link |
03:08:11.540
and he often speaks off the cuff.
link |
03:08:14.180
And I have not agreed with everything
link |
03:08:16.300
that David has said or done regarding Sophia.
link |
03:08:19.300
And David also has not agreed with everything
link |
03:08:22.740
David has said or done about Sophia.
link |
03:08:24.740
That's an important point.
link |
03:08:25.780
I mean, David is an artistic wild man
link |
03:08:30.140
and that's part of his charm.
link |
03:08:33.340
That's part of his genius.
link |
03:08:34.740
So certainly there have been conversations
link |
03:08:39.380
within Hanson Robotics and between me and David
link |
03:08:42.260
where I was like, let's be more open
link |
03:08:45.700
about how this thing is working.
link |
03:08:48.180
And I did have some influence in nudging Hanson Robotics
link |
03:08:52.060
to be more open about how Sophia was working.
link |
03:08:56.740
And David wasn't especially opposed to this.
link |
03:09:00.740
And he was actually quite right about it.
link |
03:09:02.460
What he said was, you can tell people exactly
link |
03:09:04.940
how it's working and they won't care.
link |
03:09:08.020
They want to be drawn into the illusion.
link |
03:09:09.580
And he was 100% correct.
link |
03:09:12.580
I'll tell you what, this wasn't Sophia.
link |
03:09:14.620
This was Philip K. Dick.
link |
03:09:15.740
But we did some interactions between humans
link |
03:09:18.780
and Philip K. Dick robot in Austin, Texas a few years back.
link |
03:09:23.820
And in this case, the Philip K. Dick was just teleoperated
link |
03:09:26.700
by another human in the other room.
link |
03:09:28.540
So during the conversations, we didn't tell people
link |
03:09:31.260
the robot was teleoperated.
link |
03:09:32.860
We just said, here, have a conversation with Phil Dick.
link |
03:09:35.020
We're gonna film you, right?
link |
03:09:37.100
And they had a great conversation with Philip K. Dick
link |
03:09:39.740
teleoperated by my friend, Stefan Bugaj.
link |
03:09:42.900
After the conversation, we brought the people
link |
03:09:45.860
in the back room to see Stefan
link |
03:09:47.980
who was controlling the Philip K. Dick robot,
link |
03:09:53.540
but they didn't believe it.
link |
03:09:54.820
These people were like, well, yeah,
link |
03:09:56.500
but I know I was talking to Phil.
link |
03:09:58.780
Maybe Stefan was typing,
link |
03:10:00.780
but the spirit of Phil was animating his mind
link |
03:10:03.820
while he was typing.
link |
03:10:05.100
So like, even though they knew it was a human in the loop,
link |
03:10:07.660
even seeing the guy there,
link |
03:10:09.420
they still believed that was Phil they were talking to.
link |
03:10:12.860
A small part of me believes that they were right, actually.
link |
03:10:16.700
Because our understanding...
link |
03:10:17.900
Well, we don't understand the universe.
link |
03:10:19.460
That's the thing.
link |
03:10:20.300
I mean, there is a cosmic mind field
link |
03:10:22.460
that we're all embedded in
link |
03:10:24.300
that yields many strange synchronicities in the world,
link |
03:10:28.260
which is a topic we don't have time to go into too much here.
link |
03:10:31.540
Yeah, I mean, there's something to this
link |
03:10:35.020
where our imagination about Sophia
link |
03:10:39.740
and people like Yann LeCun being frustrated about it
link |
03:10:43.260
is all part of this beautiful dance
link |
03:10:45.860
of creating artificial intelligence
link |
03:10:47.420
that's almost essential.
link |
03:10:48.900
You see with Boston Dynamics,
link |
03:10:50.420
whom I'm a huge fan of as well,
link |
03:10:53.340
you know, the kind of...
link |
03:10:54.260
I mean, these robots are very far from intelligent.
link |
03:10:58.380
I played with their last one, actually.
link |
03:11:01.940
With a spot mini.
link |
03:11:02.780
Yeah, very cool.
link |
03:11:03.620
I mean, it reacts quite in a fluid and flexible way.
link |
03:11:07.180
But we immediately ascribe the kind of intelligence.
link |
03:11:10.500
We immediately ascribe AGI to them.
link |
03:11:12.500
Yeah, yeah, if you kick it and it falls down and goes out,
link |
03:11:14.820
you feel bad, right?
link |
03:11:15.660
You can't help it.
link |
03:11:17.300
And I mean, that's part of...
link |
03:11:21.820
That's gonna be part of our journey
link |
03:11:23.180
in creating intelligent systems
link |
03:11:24.540
more and more and more and more.
link |
03:11:25.660
Like, as Sophia starts out with a walking skeleton,
link |
03:11:29.460
as you add more and more intelligence,
link |
03:11:31.980
I mean, we're gonna have to deal with this kind of idea.
link |
03:11:34.500
Absolutely.
link |
03:11:35.340
And about Sophia, I would say,
link |
03:11:37.660
I mean, first of all, I have nothing against Yann LeCun.
link |
03:11:39.900
No, no, this is fun.
link |
03:11:40.860
This is all for fun.
link |
03:11:41.700
He's a nice guy.
link |
03:11:42.540
If he wants to play the media banter game,
link |
03:11:45.820
I'm happy to play him.
link |
03:11:48.020
He's a good researcher and a good human being.
link |
03:11:50.860
I'd happily work with the guy.
link |
03:11:53.580
The other thing I was gonna say is,
link |
03:11:56.220
I have been explicit about how Sophia works
link |
03:12:00.340
and I've posted online in, what, H Plus Magazine,
link |
03:12:04.580
an online webzine.
link |
03:12:06.420
I mean, I posted a moderately detailed article
link |
03:12:09.780
explaining like, there are three software systems
link |
03:12:12.860
we've used inside Sophia.
link |
03:12:14.380
There's a timeline editor,
link |
03:12:16.660
which is like a rule based authoring system
link |
03:12:18.820
where she's really just being an outlet
link |
03:12:21.140
for what a human scripted.
link |
03:12:22.660
There's a chat bot,
link |
03:12:23.660
which has some rule based and some neural aspects.
link |
03:12:26.420
And then sometimes we've used OpenCog behind Sophia,
link |
03:12:29.420
where there's more learning and reasoning.
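A minimal sketch of how three such layers could be routed, with all the rules and canned responses invented for illustration; it is not the actual Hanson Robotics or OpenCog code, just the idea of one utterance being handled by whichever subsystem is active.

# Three invented layers: a scripted timeline, a chat bot with rules plus a
# neural-style fallback, and a reasoning backend. One dispatcher routes each
# utterance to whichever layer is currently in use.

def timeline_editor(event):
    script = {"greeting": "Hello, I'm Sophia. Welcome to the conference."}
    return script.get(event)

def neural_fallback(utterance):
    # stand-in for a neural response generator
    return "That makes me think about " + utterance.split()[-1]

def chat_bot(utterance):
    rules = {"how are you": "I'm doing well, thank you for asking."}
    for trigger, reply in rules.items():
        if trigger in utterance.lower():
            return reply
    return neural_fallback(utterance)

def reasoning_backend(utterance):
    # stand-in for a learning-and-reasoning system
    return "Let me reason about that: " + utterance

def respond(event, utterance, mode="chat"):
    if mode == "timeline":
        return timeline_editor(event)
    if mode == "reasoning":
        return reasoning_backend(utterance)
    return chat_bot(utterance)

if __name__ == "__main__":
    print(respond("greeting", "How are you today?", mode="chat"))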
link |
03:12:31.900
And the funny thing is,
link |
03:12:34.980
I can't always tell which system is operating here, right?
link |
03:12:37.700
I mean, whether she's really learning or thinking,
link |
03:12:41.700
or just appears to be. Over a half hour, I could tell,
link |
03:12:44.660
but over like three or four minutes of interaction,
link |
03:12:47.460
I couldn't tell.
link |
03:12:48.940
So even having three systems
link |
03:12:49.900
that's already sufficiently complex
link |
03:12:51.500
where you can't really tell right away.
link |
03:12:53.020
Yeah, the thing is, even if you get up on stage
link |
03:12:56.980
and tell people how Sophia is working,
link |
03:12:59.540
and then they talk to her,
link |
03:13:01.780
they still attribute more agency and consciousness to her
link |
03:13:06.100
than is really there.
link |
03:13:08.900
So I think there's a couple of levels of ethical issue there.
link |
03:13:13.820
One issue is, should you be transparent
link |
03:13:18.340
about how Sophia is working?
link |
03:13:21.540
And I think you should,
link |
03:13:22.860
and I think we have been.
link |
03:13:26.140
I mean, there's articles online,
link |
03:13:29.100
there's some TV special that goes through me
link |
03:13:32.780
explaining the three subsystems behind Sophia.
link |
03:13:35.380
So the way Sophia works
link |
03:13:38.420
is out there much more clearly
link |
03:13:41.420
than how Facebook's AI works or something, right?
link |
03:13:43.340
I mean, we've been fairly explicit about it.
link |
03:13:45.900
The other is, given that telling people how it works
link |
03:13:50.500
doesn't cause them to not attribute
link |
03:13:52.380
too much intelligence agency to it anyway,
link |
03:13:55.060
then should you keep fooling them
link |
03:13:58.260
when they want to be fooled?
link |
03:14:01.100
And I mean, the whole media industry
link |
03:14:03.620
is based on fooling people the way they want to be fooled.
link |
03:14:06.700
And we are fooling people 100% toward a good end.
link |
03:14:11.700
I mean, we are playing on people's sense of empathy
link |
03:14:18.020
and compassion so that we can give them
link |
03:14:20.540
a good user experience with helpful robots.
link |
03:14:23.620
And so that we can fill the AI's mind
link |
03:14:27.820
with love and compassion.
link |
03:14:29.420
So I've been talking a lot with Hanson Robotics lately
link |
03:14:34.100
about collaborations in the area of medical robotics.
link |
03:14:37.580
And we haven't quite pulled the trigger on a project
link |
03:14:41.500
in that domain yet, but we may well do so quite soon.
link |
03:14:44.700
So we've been talking a lot about robots
link |
03:14:48.220
can help with elder care, robots can help with kids.
link |
03:14:51.340
David's done a lot of things with autism therapy
link |
03:14:54.180
and robots before.
link |
03:14:56.540
In the COVID era, having a robot
link |
03:14:58.660
that can be a nursing assistant in various senses
link |
03:15:00.620
can be quite valuable.
link |
03:15:02.340
The robots don't spread infection
link |
03:15:04.180
and they can also deliver more attention
link |
03:15:06.300
than human nurses can give, right?
link |
03:15:07.940
So if you have a robot that's helping a patient
link |
03:15:11.180
with COVID, if that patient attributes more understanding
link |
03:15:15.700
and compassion and agency to that robot than it really has
link |
03:15:19.060
because it looks like a human, I mean, is that really bad?
link |
03:15:22.940
I mean, we can tell them it doesn't fully understand you
link |
03:15:25.660
and they don't care because they're lying there
link |
03:15:27.700
with a fever and they're sick,
link |
03:15:29.340
but they'll react better to that robot
link |
03:15:31.020
with its loving, warm facial expression
link |
03:15:33.500
than they would to a pepper robot
link |
03:15:35.420
or a metallic looking robot.
link |
03:15:38.100
So it's really, it's about how you use it, right?
link |
03:15:41.340
If you made a human looking like door to door sales robot
link |
03:15:45.100
that used its human looking appearance
link |
03:15:47.140
to scam people out of their money,
link |
03:15:49.940
then you're using that connection in a bad way,
link |
03:15:53.900
but you could also use it in a good way.
link |
03:15:57.060
But then that's the same problem with every technology.
link |
03:16:01.740
Beautifully put.
link |
03:16:02.980
So like you said, we're living in the era
link |
03:16:07.900
of the COVID, this is 2020,
link |
03:16:10.900
one of the craziest years in recent history.
link |
03:16:14.740
So if we zoom out and look at this pandemic,
link |
03:16:21.420
the coronavirus pandemic,
link |
03:16:24.380
maybe let me ask you this kind of thing in viruses in general,
link |
03:16:29.820
when you look at viruses,
link |
03:16:32.620
do you see them as a kind of intelligence system?
link |
03:16:35.900
I think the concept of intelligence is not that natural
link |
03:16:38.700
of a concept in the end.
link |
03:16:39.740
I mean, I think human minds and bodies
link |
03:16:43.700
are a kind of complex self organizing adaptive system.
link |
03:16:49.380
And viruses certainly are that, right?
link |
03:16:51.900
They're a very complex self organizing adaptive system.
link |
03:16:54.980
If you wanna look at intelligence as Marcus Hutter defines it
link |
03:16:58.380
as sort of optimizing computable reward functions
link |
03:17:02.300
over computable environments,
link |
03:17:04.740
for sure viruses are doing that, right?
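For reference, the standard formal version of that idea is Legg and Hutter's universal intelligence measure, stated here for context rather than quoted from the conversation: an agent's score is its expected reward in each computable environment, weighted by the simplicity of that environment,

\[
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi},
\]

where \(E\) is the set of computable reward-generating environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\), and \(V_{\mu}^{\pi}\) is the expected cumulative reward the agent \(\pi\) earns in \(\mu\).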
link |
03:17:06.700
And I mean, in doing so they're causing some harm to us.
link |
03:17:13.820
So the human immune system is a very complex
link |
03:17:17.780
self organizing adaptive system,
link |
03:17:19.340
which has a lot of intelligence to it.
link |
03:17:21.100
And viruses are also adapting
link |
03:17:23.980
and dividing into new mutant strains and so forth.
link |
03:17:27.660
And ultimately the solution is gonna be nanotechnology,
link |
03:17:31.660
right?
link |
03:17:32.500
The solution is gonna be making little nanobots that...
link |
03:17:35.940
Fight the viruses or...
link |
03:17:38.060
Well, people will use them to make nastier viruses,
link |
03:17:40.660
but hopefully we can also use them
link |
03:17:42.020
to just detect, combat and kill the viruses.
link |
03:17:46.220
But I think now we're stuck
link |
03:17:48.820
with the biological mechanisms to combat these viruses.
link |
03:17:54.980
And yeah, AGI is not yet mature enough
link |
03:17:59.500
to use against COVID,
link |
03:18:01.580
but we've been using machine learning
link |
03:18:03.980
and also some machine reasoning in OpenCog
link |
03:18:07.020
to help some doctors to do personalized medicine
link |
03:18:10.420
against COVID.
link |
03:18:11.260
So the problem there is given the person's genomics
link |
03:18:14.140
and given their clinical medical indicators,
link |
03:18:16.460
how do you figure out which combination of antivirals
link |
03:18:20.220
is gonna be most effective against COVID for that person?
link |
03:18:24.260
And so that's something
link |
03:18:26.420
where machine learning is interesting,
link |
03:18:28.500
but also we're finding the abstraction
link |
03:18:30.380
you get in OpenCog with machine reasoning is interesting
link |
03:18:33.860
because it can help with transfer learning
link |
03:18:36.660
when you have not that many different cases to study
link |
03:18:40.380
and qualitative differences between different strains
link |
03:18:43.900
of a virus or people of different ages who may have COVID.
link |
03:18:47.180
So there's a lot of different disparate data to work with
link |
03:18:50.700
and it's small data sets and somehow integrating them.
link |
03:18:53.740
This is one of the shameful things
link |
03:18:55.500
that's very hard to get that data.
link |
03:18:57.300
So, I mean, we're working with a couple of groups
link |
03:19:00.340
doing clinical trials and they're sharing data with us
link |
03:19:04.780
like under non disclosure,
link |
03:19:06.860
but what should be the case is like every COVID
link |
03:19:10.660
clinical trial should be putting data online somewhere
link |
03:19:14.420
like suitably encrypted to protect patient privacy
link |
03:19:17.820
so that anyone with the right AI algorithms
link |
03:19:20.980
should be able to help analyze it
link |
03:19:22.300
and any biologists should be able to analyze it by hand
link |
03:19:24.500
to understand what they can, right?
link |
03:19:25.860
Instead that data is like siloed inside whatever hospital
link |
03:19:30.060
is running the clinical trial,
link |
03:19:31.740
which is completely asinine and ridiculous.
link |
03:19:35.060
So why the world works that way?
link |
03:19:37.820
I mean, we could all analyze why,
link |
03:19:39.140
but it's insane that it does.
link |
03:19:40.700
You look at this hydroxychloroquine, right?
link |
03:19:44.060
All these clinical trial results
link |
03:19:45.700
were reported by Surgisphere,
link |
03:19:47.700
some little company no one ever heard of
link |
03:19:50.220
and everyone paid attention to this.
link |
03:19:53.220
So they were doing more clinical trials based on that
link |
03:19:55.540
then they stopped doing clinical trials based on that
link |
03:19:57.460
then they started again
link |
03:19:58.460
and why isn't that data just out there
link |
03:20:01.420
so everyone can analyze it and see what's going on, right?
link |
03:20:05.060
Do you have hope that data will be out there eventually
link |
03:20:10.580
for future pandemics?
link |
03:20:11.860
I mean, do you have hope that our society
link |
03:20:13.620
will move in the direction of?
link |
03:20:15.420
It's not in the immediate future
link |
03:20:16.860
because the US and China frictions are getting very high.
link |
03:20:21.580
So it's hard to see US and China
link |
03:20:24.380
as moving in the direction of openly sharing data
link |
03:20:26.660
with each other, right?
link |
03:20:27.580
It's not, there's some sharing of data,
link |
03:20:30.780
but different groups are keeping their data private
link |
03:20:32.940
till they've milked the best results from it
link |
03:20:34.660
and then they share it, right?
link |
03:20:36.220
So yeah, we're working with some data
link |
03:20:39.140
that we've managed to get our hands on,
link |
03:20:41.380
something we're doing to do good for the world
link |
03:20:43.140
and it's a very cool playground
link |
03:20:44.620
for like putting deep neural nets and open cog together.
link |
03:20:47.860
So we have like a bio AtomSpace
link |
03:20:49.900
full of all sorts of knowledge
link |
03:20:51.860
from many different biology experiments
link |
03:20:53.620
about human longevity
link |
03:20:54.700
and from biology knowledge bases online.
link |
03:20:57.660
And we can do like graph to vector type embeddings
link |
03:21:00.780
where we take nodes from the hypergraph,
link |
03:21:03.060
embed them into vectors,
link |
03:21:04.580
which can then feed into neural nets
link |
03:21:06.180
for different types of analysis.
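A toy illustration of that graph-to-vector flow, using a tiny made-up knowledge graph and a plain matrix factorization in place of a real graph embedding method; the output vectors are the kind of thing that would be handed to a neural net downstream.

# Toy graph-to-vector sketch: nodes of a (made-up) biological knowledge graph
# get a vector derived from the graph structure, ready to feed a downstream
# model. Real pipelines would use a proper graph-embedding method.

import numpy as np

nodes = ["geneA", "geneB", "pathwayX", "diseaseY", "drugZ"]
edges = [("geneA", "pathwayX"), ("geneB", "pathwayX"),
         ("pathwayX", "diseaseY"), ("drugZ", "geneA")]

index = {n: i for i, n in enumerate(nodes)}
adj = np.zeros((len(nodes), len(nodes)))
for a, b in edges:
    adj[index[a], index[b]] = adj[index[b], index[a]] = 1.0

# Low-rank factorization of the adjacency matrix as a stand-in embedding.
u, s, _ = np.linalg.svd(adj)
embeddings = u[:, :2] * s[:2]          # 2-dimensional vector per node

for name in nodes:
    print(name, np.round(embeddings[index[name]], 3))
# These per-node vectors are what would feed into a neural net for analysis.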
link |
03:21:07.900
And we were doing this
link |
03:21:09.980
in the context of a project called Rejuve
link |
03:21:13.180
that we spun off from SingularityNet
link |
03:21:15.540
to do longevity analytics,
link |
03:21:18.580
like understand why people live to 105 years or over
link |
03:21:21.220
and other people don't.
link |
03:21:22.300
And then we had this spin off Singularity Studio
link |
03:21:25.740
where we're working with some healthcare companies
link |
03:21:28.900
on data analytics.
link |
03:21:31.060
But so there's the bio AtomSpace
link |
03:21:33.100
that we built for these more commercial
link |
03:21:35.420
and longevity data analysis purposes.
link |
03:21:38.140
We're repurposing and feeding COVID data
link |
03:21:41.220
into the same bio AtomSpace
link |
03:21:44.380
and playing around with like graph embeddings
link |
03:21:47.540
from that graph into neural nets for bioinformatics.
link |
03:21:51.180
So it's both being a cool testing ground,
link |
03:21:54.740
for some of our bio AI learning and reasoning.
link |
03:21:57.260
And it seems we're able to discover things
link |
03:21:59.980
that people weren't seeing otherwise.
link |
03:22:01.900
Cause the thing in this case is
link |
03:22:03.820
for each combination of antivirals,
link |
03:22:05.820
you may have only a few patients
link |
03:22:07.060
who've tried that combination.
link |
03:22:08.900
And those few patients
link |
03:22:09.980
may have their particular characteristics.
link |
03:22:11.700
Like this combination of three
link |
03:22:13.380
was tried only on people age 80 or over.
link |
03:22:16.260
This other combination of three,
link |
03:22:18.140
which has an overlap with the first combination
link |
03:22:20.500
was tried more on young people.
link |
03:22:22.060
So how do you combine those different pieces of data?
link |
03:22:25.500
It's a very dodgy transfer learning problem,
link |
03:22:28.620
which is the kind of thing
link |
03:22:29.580
that the probabilistic reasoning algorithms
link |
03:22:31.660
we have inside OpenCog are better at
link |
03:22:34.140
than deep neural networks.
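A crude sketch of that pooling problem, with entirely invented numbers: each combination has only a few patients, so evidence is borrowed across combinations that share drugs. Real work would use proper probabilistic reasoning rather than this simple averaging, but it shows why overlap between small cohorts is what makes any transfer possible.

# Sparse overlapping cohorts: pool evidence per drug across every combination
# that contains it, then score an untried combination from its drugs' pooled
# rates. All numbers below are made up for illustration.

from collections import defaultdict

# (drug combination, patients treated, patients improved) -- invented data
trials = [
    ({"A", "B", "C"}, 8, 5),    # tried mostly on patients over 80
    ({"A", "B", "D"}, 12, 9),   # tried mostly on younger patients
    ({"C", "D"}, 6, 2),
]

treated, improved = defaultdict(int), defaultdict(int)
for combo, n, k in trials:
    for drug in combo:
        treated[drug] += n
        improved[drug] += k

drug_rate = {d: improved[d] / treated[d] for d in treated}

def predict(combo):
    # naive transfer: average the pooled rates of the combination's drugs
    return sum(drug_rate[d] for d in combo) / len(combo)

if __name__ == "__main__":
    print({d: round(r, 2) for d, r in drug_rate.items()})
    print("predicted response for {A, C, D}:", round(predict({"A", "C", "D"}), 2))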
link |
03:22:35.220
On the other hand, you have gene expression data
link |
03:22:38.260
where you have 25,000 genes
link |
03:22:39.740
and the expression level of each gene
link |
03:22:41.340
in the peripheral blood of each person.
link |
03:22:43.620
So that sort of data,
link |
03:22:44.980
either deep neural nets or tools like XGBoost or CatBoost,
link |
03:22:48.220
these decision forest trees are better at dealing
link |
03:22:50.900
with than OpenCog.
link |
03:22:52.100
Cause it's just these huge,
link |
03:22:53.940
huge messy floating point vectors
link |
03:22:55.860
that are annoying for a logic engine to deal with,
link |
03:22:59.180
but are perfect for a decision forest or a neural net.
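A small synthetic example of that kind of wide, messy floating-point data going into a decision forest, using scikit-learn's random forest as a stand-in for XGBoost or CatBoost, which expose a very similar fit and predict interface; the expression matrix here is random noise with a weak planted signal.

# Synthetic stand-in for the gene-expression case: a wide patients-by-genes
# matrix of floats with a binary outcome, handed to a decision forest.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_genes = 200, 25_000
X = rng.normal(size=(n_patients, n_genes))           # fake expression levels
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))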
link |
03:23:02.540
So it's a great playground for like hybrid AI methodology.
link |
03:23:07.820
And we can have SingularityNet have OpenCog in one agent
link |
03:23:11.100
and XGBoost in a different agent
link |
03:23:12.780
and they talk to each other.
link |
03:23:14.540
But at the same time, it's highly practical, right?
link |
03:23:18.060
Cause we're working with, for example,
link |
03:23:20.580
some physicians on this project,
link |
03:23:24.620
physicians in the group called Nth Opinion
link |
03:23:27.500
based out of Vancouver and Seattle,
link |
03:23:30.180
who are, these guys are working every day
link |
03:23:32.980
like in the hospital with patients dying of COVID.
link |
03:23:36.540
So it's quite cool to see like neural symbolic AI,
link |
03:23:41.100
like where the rubber hits the road,
link |
03:23:43.340
trying to save people's lives.
link |
03:23:45.460
I've been doing bio AI since 2001,
link |
03:23:48.540
but mostly human longevity research
link |
03:23:51.220
and fly longevity research,
link |
03:23:53.100
try to understand why some organisms really live a long time.
link |
03:23:57.220
This is the first time it's like a race against the clock
link |
03:24:00.380
and try to use the AI to figure out stuff that,
link |
03:24:04.660
like if we take two months longer to solve the AI problem,
link |
03:24:09.620
some more people will die
link |
03:24:10.740
because we don't know what combination
link |
03:24:12.220
of antivirals to give them.
link |
03:24:14.140
At the societal level, at the biological level,
link |
03:24:16.660
at any level, are you hopeful about us
link |
03:24:21.260
as a human species getting out of this pandemic?
link |
03:24:24.940
What are your thoughts on it in general?
link |
03:24:26.700
The pandemic will be gone in a year or two
link |
03:24:28.980
once there's a vaccine for it.
link |
03:24:30.500
So, I mean, that's...
link |
03:24:32.980
A lot of pain and suffering can happen in that time.
link |
03:24:35.580
So that could be irreversible.
link |
03:24:38.580
I think if you spend much time in Sub Saharan Africa,
link |
03:24:43.180
you can see there's a lot of pain and suffering
link |
03:24:45.220
happening all the time.
link |
03:24:47.620
Like you walk through the streets
link |
03:24:49.660
of any large city in Sub Saharan Africa,
link |
03:24:53.340
and there are loads, I mean, tens of thousands,
link |
03:24:56.860
probably hundreds of thousands of people
link |
03:24:59.300
lying by the side of the road,
link |
03:25:01.540
dying mainly of curable diseases without food or water
link |
03:25:06.060
and either ostracized by their families
link |
03:25:07.940
or they left their family house
link |
03:25:09.140
because they didn't want to infect their family, right?
link |
03:25:11.220
I mean, there's tremendous human suffering
link |
03:25:14.420
on the planet all the time,
link |
03:25:17.220
which most folks in the developed world pay no attention to.
link |
03:25:21.780
And COVID is not remotely the worst.
link |
03:25:25.100
How many people are dying of malaria all the time?
link |
03:25:27.940
I mean, so COVID is bad.
link |
03:25:30.460
It is by no means the worst thing happening.
link |
03:25:33.180
And setting aside diseases,
link |
03:25:36.100
I mean, there are many places in the world
link |
03:25:38.340
where you're at risk of having like your teenage son
link |
03:25:41.180
kidnapped by armed militias and forced to get killed
link |
03:25:44.220
in someone else's war, fighting tribe against tribe.
link |
03:25:46.980
I mean, so humanity has a lot of problems
link |
03:25:50.500
which we don't need to have given the state of advancement
link |
03:25:53.740
of our technology right now.
link |
03:25:56.060
And I think COVID is one of the easier problems to solve
link |
03:25:59.860
in the sense that there are many brilliant people
link |
03:26:02.380
working on vaccines.
link |
03:26:03.580
We have the technology to create vaccines
link |
03:26:06.020
and we're gonna create new vaccines.
link |
03:26:08.580
We should be more worried
link |
03:26:09.500
that we haven't managed to defeat malaria after so long.
link |
03:26:12.940
And after the Gates Foundation and others
link |
03:26:14.700
putting so much money into it.
link |
03:26:18.460
I mean, I think clearly the whole global medical system,
link |
03:26:23.220
the global health system
link |
03:26:25.020
and the global political and socioeconomic system
link |
03:26:28.260
are incredibly unethical and unequal and badly designed.
link |
03:26:33.260
And I mean, I don't know how to solve that directly.
link |
03:26:39.460
I think what we can do indirectly to solve it
link |
03:26:42.300
is to make systems that operate in parallel
link |
03:26:46.020
and off to the side of the governments
link |
03:26:49.180
that are nominally controlling the world
link |
03:26:52.020
with their armies and militias.
link |
03:26:54.940
And to the extent that you can make compassionate
link |
03:26:58.500
peer to peer decentralized frameworks
link |
03:27:01.900
for doing things,
link |
03:27:03.580
these are things that can start out unregulated.
link |
03:27:06.580
And then if they get traction
link |
03:27:07.860
before the regulators come in,
link |
03:27:09.820
then they've influenced the way the world works, right?
link |
03:27:12.220
SingularityNet aims to do this with AI.
link |
03:27:16.740
REJUVE, which is a spinoff from SingularityNet.
link |
03:27:20.260
You can see REJUVE.io.
link |
03:27:22.100
How do you spell that?
link |
03:27:23.180
R E J U V E, REJUVE.io.
link |
03:27:26.660
That aims to do the same thing for medicine.
link |
03:27:28.540
So it's like peer to peer sharing of information
link |
03:27:31.140
peer to peer sharing of medical data.
link |
03:27:33.660
So you can share medical data into a secure data wallet.
link |
03:27:36.740
You can get advice about your health and longevity
link |
03:27:39.500
through apps that REJUVE.io will launch
link |
03:27:43.140
within the next couple of months.
link |
03:27:44.660
And then SingularityNet AI can analyze all this data,
link |
03:27:48.020
but then the benefits from that analysis
link |
03:27:50.100
are spread among all the members of the network.
link |
03:27:52.780
But I mean, of course,
link |
03:27:54.700
I'm gonna hawk my particular projects,
link |
03:27:56.580
but I mean, whether or not SingularityNet and REJUVE.io
link |
03:28:00.180
are the answer, I think it's key to create
link |
03:28:04.460
decentralized mechanisms for everything.
link |
03:28:09.180
I mean, for AI, for human health, for politics,
link |
03:28:13.300
for jobs and employment, for sharing social information.
link |
03:28:17.740
And to the extent decentralized peer to peer methods
link |
03:28:21.660
designed with universal compassion at the core
link |
03:28:25.500
can gain traction, then these will just decrease the role
link |
03:28:29.780
that government has.
link |
03:28:31.260
And I think that's much more likely to do good
link |
03:28:34.860
than trying to like explicitly reform
link |
03:28:37.860
the global government system.
link |
03:28:39.180
I mean, I'm happy other people are trying to explicitly
link |
03:28:41.740
reform the global government system.
link |
03:28:43.900
On the other hand, you look at how much good the internet
link |
03:28:47.180
or Google did or mobile phones did,
link |
03:28:50.660
even you're making something that's decentralized
link |
03:28:54.060
and throwing it out everywhere and it takes hold,
link |
03:28:56.620
then government has to adapt.
link |
03:28:59.220
And I mean, that's what we need to do with AI
link |
03:29:01.740
and with health.
link |
03:29:02.580
And in that light, I mean, the centralization
link |
03:29:07.100
of healthcare and of AI is certainly not ideal, right?
link |
03:29:11.820
Like most AI PhDs are being sucked in by a half dozen
link |
03:29:15.980
to a dozen big companies.
link |
03:29:17.220
Most AI processing power is being bought
link |
03:29:20.820
by a few big companies for their own proprietary good.
link |
03:29:23.660
And most medical research is within a few
link |
03:29:26.860
pharmaceutical companies and clinical trials
link |
03:29:29.420
run by pharmaceutical companies will stay siloed
link |
03:29:31.740
within those pharmaceutical companies.
link |
03:29:34.060
You know, these large centralized entities,
link |
03:29:37.220
which are intelligences in themselves, these corporations,
link |
03:29:40.460
but they're mostly malevolent psychopathic
link |
03:29:43.100
and sociopathic intelligences,
link |
03:29:45.780
not saying the people involved are,
link |
03:29:47.580
but the corporations as self organizing entities
link |
03:29:50.540
on their own, which are concerned with maximizing
link |
03:29:53.260
shareholder value as a sole objective function.
link |
03:29:57.100
I mean, AI and medicine are being sucked
link |
03:29:59.820
into these pathological corporate organizations
link |
03:30:04.100
with government cooperation and Google cooperating
link |
03:30:07.740
with British and US government on this
link |
03:30:10.220
as one among many, many different examples.
link |
03:30:12.540
23andMe providing you the nice service of sequencing
link |
03:30:15.940
your genome and then licensing the genome
link |
03:30:18.900
to GlaxoSmithKline on an exclusive basis, right?
link |
03:30:21.380
Now you can take your own DNA
link |
03:30:23.460
and do whatever you want with it.
link |
03:30:24.860
But the pooled collection of 23andMe sequenced DNA
link |
03:30:28.100
is licensed just to GlaxoSmithKline.
link |
03:30:30.820
Someone else could reach out to everyone
link |
03:30:32.500
who had worked with 23andMe to sequence their DNA
link |
03:30:36.300
and say, give us your DNA for our open
link |
03:30:39.380
and decentralized repository that we'll make available
link |
03:30:41.700
to everyone, but nobody's doing that
link |
03:30:43.700
cause it's a pain to get organized.
link |
03:30:45.700
And the customer list is proprietary to 23andMe, right?
link |
03:30:48.860
So, yeah, I mean, this I think is a greater risk
link |
03:30:54.340
to humanity from AI than rogue AGI
link |
03:30:57.500
turning the universe into paperclips or computronium.
link |
03:31:01.100
Cause what you have here is mostly good hearted
link |
03:31:05.060
and nice people who are sucked into a mode of organization
link |
03:31:09.860
of large corporations, which has evolved
link |
03:31:12.580
just for no individual's fault
link |
03:31:14.180
just because that's the way society has evolved.
link |
03:31:16.780
It's not altruistic, it's self interested
link |
03:31:18.900
and become psychopathic like you said.
link |
03:31:20.540
The human.
link |
03:31:21.380
The corporation is psychopathic even if the people are not.
link |
03:31:23.700
And that's really the disturbing thing about it
link |
03:31:26.660
because the corporations can do things
link |
03:31:30.500
that are quite bad for society
link |
03:31:32.380
even if nobody has a bad intention.
link |
03:31:35.580
Right.
link |
03:31:36.420
And then.
link |
03:31:37.260
No individual member of that corporation
link |
03:31:38.100
has a bad intention.
link |
03:31:38.940
No, some probably do, but it's not necessary
link |
03:31:41.540
that they do for the corporation.
link |
03:31:43.180
Like, I mean, Google, I know a lot of people in Google
link |
03:31:47.060
and there are, with very few exceptions,
link |
03:31:49.780
they're all very nice people
link |
03:31:51.300
who genuinely want what's good for the world.
link |
03:31:53.980
And Facebook, I know fewer people
link |
03:31:56.940
but it's probably mostly true.
link |
03:31:59.020
It's probably like fine young geeks
link |
03:32:01.460
who wanna build cool technology.
link |
03:32:03.940
I actually tend to believe that even the leaders,
link |
03:32:05.880
even Mark Zuckerberg, one of the most disliked people
link |
03:32:08.860
in tech is also wants to do good for the world.
link |
03:32:11.940
I think about Jamie Dimon.
link |
03:32:13.900
Who's Jamie Dimon?
link |
03:32:14.740
Oh, the heads of the great banks
link |
03:32:16.260
may have a different psychology.
link |
03:32:17.620
Oh boy, yeah.
link |
03:32:18.500
Well, I tend to be naive about these things
link |
03:32:22.820
and see the best in, I tend to agree with you
link |
03:32:27.340
that I think the individuals wanna do good by the world
link |
03:32:30.580
but the mechanism of the company
link |
03:32:32.100
can sometimes be its own intelligence system.
link |
03:32:34.820
I mean, my cousin Mario Goertzel
link |
03:32:38.500
has worked for Microsoft since 1985 or something
link |
03:32:41.740
and I can see for him,
link |
03:32:45.380
I mean, as well as just working on cool projects,
link |
03:32:48.980
you're coding stuff that gets used
link |
03:32:51.340
by like billions and billions of people.
link |
03:32:54.560
And do you think if I improve this feature
link |
03:32:57.660
that's making billions of people's lives easier, right?
link |
03:33:00.260
So of course that's cool.
link |
03:33:03.100
And the engineers are not in charge
link |
03:33:05.520
of running the company anyway.
link |
03:33:06.860
And of course, even if you're Mark Zuckerberg or Larry Page,
link |
03:33:10.120
I mean, you still have a fiduciary responsibility.
link |
03:33:13.560
And I mean, you're responsible to the shareholders,
link |
03:33:16.340
your employees who you want to keep paying them
link |
03:33:18.860
and so forth.
link |
03:33:19.700
So yeah, you're enmeshed in this system.
link |
03:33:22.900
And when I worked in DC,
link |
03:33:26.740
I worked a bunch with INSCOM, US Army Intelligence
link |
03:33:29.380
and I was heavily politically opposed
link |
03:33:31.900
to what the US Army was doing in Iraq at that time,
link |
03:33:34.740
like torturing people in Abu Ghraib
link |
03:33:36.540
but everyone I knew in US Army and INSCOM,
link |
03:33:39.860
when I hung out with them, was very nice person.
link |
03:33:42.620
They were friendly to me.
link |
03:33:43.520
They were nice to my kids and my dogs, right?
link |
03:33:46.140
And they really believed that the US
link |
03:33:48.380
was fighting the forces of evil.
link |
03:33:49.660
And if you ask me about Abu Ghraib, they're like,
link |
03:33:51.420
well, but these Arabs will chop us into pieces.
link |
03:33:54.460
So how can you say we're wrong
link |
03:33:56.300
to waterboard them a bit, right?
link |
03:33:58.380
Like that's much less than what they would do to us.
link |
03:34:00.340
It's just in their worldview,
link |
03:34:02.940
what they were doing was really genuinely
link |
03:34:05.340
for the good of humanity.
link |
03:34:06.820
Like none of them woke up in the morning
link |
03:34:09.020
and said like, I want to do harm to good people
link |
03:34:12.260
because I'm just a nasty guy, right?
link |
03:34:14.540
So yeah, most people on the planet,
link |
03:34:18.220
setting aside a few genuine psychopaths and sociopaths,
link |
03:34:21.780
I mean, most people on the planet have a heavy dose
link |
03:34:25.460
of benevolence and wanting to do good
link |
03:34:27.540
and also a heavy capability to convince themselves
link |
03:34:32.160
whatever they feel like doing
link |
03:34:33.420
or whatever is best for them is for the good of humankind.
link |
03:34:37.020
So the more we can decentralize control.
link |
03:34:40.420
Decentralization, you know, the democracy is horrible,
link |
03:34:44.940
but this is like Winston Churchill said,
link |
03:34:47.320
you know, it's the worst possible system of government
link |
03:34:49.380
except for all the others, right?
link |
03:34:50.700
I mean, I think the whole mess of humanity
link |
03:34:53.940
has many, many very bad aspects to it,
link |
03:34:56.940
but so far the track record of elite groups
link |
03:35:00.340
who know what's better for all of humanity
link |
03:35:02.540
is much worse than the track record
link |
03:35:04.540
of the whole teeming democratic participatory
link |
03:35:08.040
mess of humanity, right?
link |
03:35:09.540
I mean, none of them is perfect by any means.
link |
03:35:13.420
The issue with a small elite group that knows what's best
link |
03:35:16.660
is even if it starts out as truly benevolent
link |
03:35:20.340
and doing good things in accordance
link |
03:35:22.440
with its initial good intentions,
link |
03:35:24.960
you find out you need more resources,
link |
03:35:26.580
you need a bigger organization, you pull in more people,
link |
03:35:29.380
internal politics arises, difference of opinions arise
link |
03:35:32.940
and bribery happens, like some opponent organization
link |
03:35:38.140
takes your second in command and makes them
link |
03:35:40.020
the first in command of some other organization.
link |
03:35:42.620
And I mean, that's, there's a lot of history
link |
03:35:45.580
of what happens with elite groups
link |
03:35:47.380
thinking they know what's best for the human race.
link |
03:35:50.100
So yeah, if I have to choose,
link |
03:35:53.060
I'm gonna reluctantly put my faith
link |
03:35:55.460
in the vast democratic decentralized mass.
link |
03:35:58.940
And I think corporations have a track record
link |
03:36:02.900
of being ethically worse
link |
03:36:05.340
than their constituent human parts.
link |
03:36:07.460
And democratic governments have a more mixed track record,
link |
03:36:13.540
but they're at least...
link |
03:36:14.700
That's the best we got.
link |
03:36:15.860
Yeah, I mean, you can, there's Iceland,
link |
03:36:18.500
very nice country, right?
link |
03:36:19.660
It's been very democratic for 800 plus years,
link |
03:36:23.340
very, very benevolent, beneficial government.
link |
03:36:26.860
And I think, yeah, there are track records
link |
03:36:28.820
of democratic modes of organization.
link |
03:36:31.860
Linux, for example, some of the people in charge of Linux
link |
03:36:36.020
are overtly complete assholes, right?
link |
03:36:38.580
And trying to reform themselves in many cases,
link |
03:36:41.700
in other cases not, but the organization as a whole,
link |
03:36:45.980
I think it's done a good job overall.
link |
03:36:49.700
It's been very welcoming in the third world, for example,
link |
03:36:53.980
and it's allowed advanced technology to roll out
link |
03:36:56.700
on all sorts of different embedded devices and platforms
link |
03:36:59.940
in places where people couldn't afford to pay
link |
03:37:02.100
for proprietary software.
link |
03:37:03.820
So I'd say the internet, Linux, and many democratic nations
link |
03:37:09.140
are examples of how sort of an open,
link |
03:37:11.380
decentralized democratic methodology
link |
03:37:14.060
can be ethically better than the sum of the parts
link |
03:37:16.580
rather than worse.
link |
03:37:17.420
And with corporations, that has happened only for a brief period
link |
03:37:21.420
and then it goes sour, right?
link |
03:37:24.580
I mean, I'd say a similar thing about universities.
link |
03:37:26.980
Like university is a horrible way to organize research
link |
03:37:30.900
and get things done, yet it's better than anything else
link |
03:37:33.660
we've come up with, right?
link |
03:37:34.500
A company can be much better,
link |
03:37:36.940
but for a brief period of time,
link |
03:37:38.300
and then it stops being so good, right?
link |
03:37:42.660
So then I think if you believe that AGI
link |
03:37:47.340
is gonna emerge sort of incrementally
link |
03:37:50.700
out of AIs doing practical stuff in the world,
link |
03:37:53.620
like controlling humanoid robots or driving cars
link |
03:37:57.060
or diagnosing diseases or operating killer drones
link |
03:38:01.260
or spying on people and reporting to the government,
link |
03:38:04.580
then what kind of organization creates more and more
link |
03:38:09.620
advanced narrow AI verging toward AGI
link |
03:38:12.500
may be quite important because it will guide
link |
03:38:14.620
like what's in the mind of the early stage AGI
link |
03:38:18.620
as it first gains the ability to rewrite its own code base
link |
03:38:21.780
and project itself toward super intelligence.
link |
03:38:24.740
And if you believe that AI may move toward AGI
link |
03:38:31.180
out of this sort of synergetic activity
link |
03:38:33.300
of many agents cooperating together
link |
03:38:35.780
rather than just have one person's project,
link |
03:38:37.860
then who owns and controls that platform for AI cooperation
link |
03:38:42.580
becomes also very, very important, right?
link |
03:38:47.260
And is that platform AWS?
link |
03:38:49.380
Is it Google Cloud?
link |
03:38:50.580
Is it Alibaba or is it something more like the internet
link |
03:38:53.420
or SingularityNet, which is open and decentralized?
link |
03:38:56.740
So if all of my weird machinations come to pass, right?
link |
03:39:01.100
I mean, we have the Hanson robots
link |
03:39:03.740
being a beautiful user interface,
link |
03:39:06.140
gathering information on human values
link |
03:39:09.100
and being loving and compassionate to people in medical,
link |
03:39:12.060
home service, robot office applications,
link |
03:39:14.620
you have SingularityNet in the backend
link |
03:39:16.900
networking together many different AIs
link |
03:39:19.460
toward cooperative intelligence,
link |
03:39:21.500
fueling the robots among many other things.
link |
03:39:24.020
You have OpenCog 2.0 and true AGI
link |
03:39:27.340
as one of the sources of AI
link |
03:39:29.420
inside this decentralized network,
link |
03:39:31.700
powering the robot and medical AIs
link |
03:39:34.140
helping us live a long time
link |
03:39:36.300
and cure diseases among other things.
link |
03:39:39.740
And this whole thing is operating
link |
03:39:42.380
in a democratic and decentralized way, right?
link |
03:39:46.060
And I think if anyone can pull something like this off,
link |
03:39:50.420
whether using the specific technologies I've mentioned
link |
03:39:53.900
or something else, I mean,
link |
03:39:55.780
then I think we have higher odds
link |
03:39:58.380
of moving toward a beneficial technological singularity
link |
03:40:02.740
rather than one in which the first super AGI
link |
03:40:06.220
is indifferent to humans
link |
03:40:07.620
and just considers us an inefficient use of molecules.
link |
03:40:11.900
That was a beautifully articulated vision for the world.
link |
03:40:15.540
So thank you for that.
link |
03:40:16.700
Well, let's talk a little bit about life and death.
link |
03:40:21.860
I'm pro life and anti death for most people.
link |
03:40:27.100
There's few exceptions that I won't mention here.
link |
03:40:30.860
I'm glad just like your dad,
link |
03:40:32.340
you're taking a stand against death.
link |
03:40:36.420
You have, by the way, you have a bunch of awesome music
link |
03:40:39.940
where you play piano online.
link |
03:40:41.780
One of the songs that I believe you've written
link |
03:40:45.380
the lyrics go, by the way, I like the way it sounds,
link |
03:40:49.140
people should listen to it, it's awesome.
link |
03:40:51.460
I considered, I probably will cover it, it's a good song.
link |
03:40:54.980
Tell me why do you think it is a good thing
link |
03:40:58.660
that we all get old and die is one of the songs.
link |
03:41:01.980
I love the way it sounds,
link |
03:41:03.180
but let me ask you about death first.
link |
03:41:06.780
Do you think there's an element to death
link |
03:41:08.300
that's essential to give our life meaning?
link |
03:41:12.260
Like the fact that this thing ends.
link |
03:41:14.020
Well, let me say I'm pleased and a little embarrassed
link |
03:41:19.220
you've been listening to that music I put online.
link |
03:41:21.540
That's awesome.
link |
03:41:22.380
One of my regrets in life recently is I would love
link |
03:41:25.540
to get time to really produce music well.
link |
03:41:28.460
Like I haven't touched my sequencer software
link |
03:41:31.100
in like five years.
link |
03:41:32.620
I would love to like rehearse and produce and edit.
link |
03:41:37.220
But with a two year old baby
link |
03:41:39.580
and trying to create the singularity, there's no time.
link |
03:41:42.260
So I just made the decision to,
link |
03:41:45.660
when I'm playing random shit in an off moment.
link |
03:41:47.740
Just record it.
link |
03:41:48.580
Just record it, put it out there, like whatever.
link |
03:41:51.820
Maybe if I'm unfortunate enough to die,
link |
03:41:54.460
maybe that can be input to the AGI
link |
03:41:56.260
when it tries to make an accurate mind upload of me, right?
link |
03:41:58.980
Death is bad.
link |
03:42:01.100
I mean, that's very simple.
link |
03:42:02.700
It's baffling we should have to say that.
link |
03:42:04.300
I mean, of course people can make meaning out of death.
link |
03:42:08.740
And if someone is tortured,
link |
03:42:10.940
maybe they can make beautiful meaning out of that torture
link |
03:42:13.220
and write a beautiful poem
link |
03:42:14.540
about what it was like to be tortured, right?
link |
03:42:16.980
I mean, we're very creative.
link |
03:42:19.100
We can milk beauty and positivity
link |
03:42:22.420
out of even the most horrible and shitty things.
link |
03:42:25.300
But just because if I was tortured,
link |
03:42:27.860
I could write a good song
link |
03:42:28.940
about what it was like to be tortured,
link |
03:42:30.780
doesn't make torture good.
link |
03:42:31.980
And just because people are able to derive meaning
link |
03:42:35.660
and value from death,
link |
03:42:37.500
doesn't mean they wouldn't derive even better meaning
link |
03:42:39.620
and value from ongoing life without death,
link |
03:42:42.580
which I very...
link |
03:42:43.420
Indefinite.
link |
03:42:44.260
Yeah, yeah.
link |
03:42:45.100
So if you could live forever, would you live forever?
link |
03:42:47.740
Forever.
link |
03:42:50.460
My goal with longevity research
link |
03:42:52.820
is to abolish the plague of involuntary death.
link |
03:42:57.460
I don't think people should die unless they choose to die.
link |
03:43:01.340
If I had to choose forced immortality
link |
03:43:05.700
versus dying, I would choose forced immortality.
link |
03:43:09.180
On the other hand, if I chose...
link |
03:43:11.860
If I had the choice of immortality
link |
03:43:13.500
with the choice of suicide whenever I felt like it,
link |
03:43:15.620
of course I would take that instead.
link |
03:43:17.220
And that's the more realistic choice.
link |
03:43:18.860
I mean, there's no reason
link |
03:43:20.180
you should have forced immortality.
link |
03:43:21.660
You should be able to live until you get sick of living,
link |
03:43:25.780
right?
link |
03:43:26.620
I mean, that's...
link |
03:43:27.460
And that will seem insanely obvious
link |
03:43:29.780
to everyone 50 years from now.
link |
03:43:31.380
And they will be so...
link |
03:43:33.180
I mean, people who thought death gives meaning to life,
link |
03:43:35.980
so we should all die,
link |
03:43:37.660
they will look at that 50 years from now
link |
03:43:39.380
the way we now look at the Anabaptists in the year 1000
link |
03:43:43.340
who gave away all their possessions,
link |
03:43:45.180
went on top of the mountain for Jesus
link |
03:43:47.700
to come and bring them to the ascension.
link |
03:43:50.220
I mean, it's ridiculous that people think death is good
link |
03:43:55.740
because you gain more wisdom as you approach dying.
link |
03:44:00.180
I mean, of course it's true.
link |
03:44:01.940
I mean, I'm 53.
link |
03:44:03.460
And the fact that I might have only a few more decades left,
link |
03:44:08.220
it does make me reflect on things differently.
link |
03:44:11.460
It does give me a deeper understanding of many things.
link |
03:44:15.700
But I mean, so what?
link |
03:44:18.100
You could get a deep understanding
link |
03:44:19.500
in a lot of different ways.
link |
03:44:20.900
Pain is the same way.
link |
03:44:22.460
We're gonna abolish pain.
link |
03:44:24.260
And that's even more amazing than abolishing death, right?
link |
03:44:27.460
I mean, once we get a little better at neuroscience,
link |
03:44:30.420
we'll be able to go in and adjust the brain
link |
03:44:32.660
so that pain doesn't hurt anymore, right?
link |
03:44:34.740
And that, you know, people will say that's bad
link |
03:44:37.100
because there's so much beauty
link |
03:44:39.420
in overcoming pain and suffering.
link |
03:44:41.100
Oh, sure.
link |
03:44:42.340
And there's beauty in overcoming torture too.
link |
03:44:45.220
And some people like to cut themselves,
link |
03:44:46.860
but not many, right?
link |
03:44:48.100
I mean.
link |
03:44:48.940
That's an interesting.
link |
03:44:49.780
So, but to push, I mean, to push back again,
link |
03:44:52.260
this is the Russian side of me.
link |
03:44:53.300
I do romanticize suffering.
link |
03:44:55.020
It's not obvious.
link |
03:44:56.380
I mean, the way you put it, it seems very logical.
link |
03:44:59.460
It's almost absurd to romanticize suffering or pain
link |
03:45:02.820
or death, but to me, a world without suffering,
link |
03:45:07.740
without pain, without death, it's not obvious.
link |
03:45:10.620
Well, then you can stay in the people zoo,
link |
03:45:13.500
people torturing each other.
link |
03:45:15.460
No, but what I'm saying is I don't,
link |
03:45:18.140
well, that's, I guess what I'm trying to say,
link |
03:45:20.220
I don't know if I was presented with that choice,
link |
03:45:22.820
what I would choose because it, to me.
link |
03:45:25.420
This is a subtler, it's a subtler matter.
link |
03:45:30.100
And I've posed it in this conversation
link |
03:45:33.980
in an unnecessarily extreme way.
link |
03:45:37.100
So I think, I think the way you should think about it
link |
03:45:41.060
is what if there's a little dial on the side of your head
link |
03:45:44.700
and you could turn how much pain hurts,
link |
03:45:48.180
turn it down to zero, turn it up to 11,
link |
03:45:50.660
like in Spinal Tap, if you want,
link |
03:45:52.220
maybe through an actual spinal tap, right?
link |
03:45:53.980
So, I mean, would you opt to have that dial there or not?
link |
03:45:58.940
That's the question.
link |
03:45:59.780
The question isn't whether you would turn the pain down
link |
03:46:02.300
to zero all the time.
link |
03:46:05.220
Would you opt to have the dial or not?
link |
03:46:07.180
My guess is that in some dark moment of your life,
link |
03:46:10.000
you would choose to have the dial implanted
link |
03:46:12.180
and then it would be there.
link |
03:46:13.340
Just to confess a small thing, don't ask me why,
link |
03:46:17.180
but I'm doing this physical challenge currently
link |
03:46:20.760
where I'm doing 680 pushups and pull ups a day.
link |
03:46:25.860
And my shoulder is currently, as we sit here,
link |
03:46:29.180
in a lot of pain.
link |
03:46:30.700
And I don't know, I would certainly right now,
link |
03:46:35.860
if you gave me a dial, I would turn that sucker to zero
link |
03:46:38.880
as quickly as possible.
link |
03:46:40.540
But I think the whole point of this journey is,
link |
03:46:46.740
I don't know.
link |
03:46:47.580
Well, because you're a twisted human being.
link |
03:46:49.540
I'm twisted. So the question is, am I somehow twisted
link |
03:46:53.580
because I created some kind of narrative for myself
link |
03:46:57.440
so that I can deal with the injustice
link |
03:47:00.820
and the suffering in the world?
link |
03:47:03.700
Or is this actually going to be a source of happiness
link |
03:47:06.340
for me?
link |
03:47:07.180
Well, this to an extent is a research question
link |
03:47:10.820
that humanity will undertake, right?
link |
03:47:12.300
So I mean, human beings do have a particular biological
link |
03:47:17.300
makeup, which sort of implies a certain probability
link |
03:47:22.860
distribution over motivational systems, right?
link |
03:47:25.880
So I mean, we, and that is there, that is there.
link |
03:47:30.400
Now the question is how flexibly can that morph
link |
03:47:36.540
as society and technology change, right?
link |
03:47:38.980
So if we're given that dial and we're given a society
link |
03:47:43.740
in which say we don't have to work for a living
link |
03:47:47.540
and in which there's an ambient decentralized
link |
03:47:50.700
benevolent AI network that will warn us
link |
03:47:52.460
when we're about to hurt ourself,
link |
03:47:54.660
if we're in a different context,
link |
03:47:57.060
can we consistently with being genuinely and fully human,
link |
03:48:02.880
can we consistently get into a state of consciousness
link |
03:48:05.880
where we just want to keep the pain dial turned
link |
03:48:09.220
all the way down and yet we're leading very rewarding
link |
03:48:12.420
and fulfilling lives, right?
link |
03:48:13.860
Now, I suspect the answer is yes, we can do that,
link |
03:48:17.660
but I don't know that, I don't know that for certain.
link |
03:48:21.580
Yeah, now I'm more confident that we could create
link |
03:48:25.960
a nonhuman AGI system, which just didn't need an analog
link |
03:48:31.220
of feeling pain.
link |
03:48:33.100
And I think that AGI system will be fundamentally healthier
link |
03:48:37.380
and more benevolent than human beings.
link |
03:48:39.740
So I think it might or might not be true
link |
03:48:42.340
that humans need a certain element of suffering
link |
03:48:45.220
to be satisfied humans, consistent with the human physiology.
link |
03:48:49.460
If it is true, that's one of the things that makes us fucked
link |
03:48:53.220
and disqualified to be the super AGI, right?
link |
03:48:58.380
I mean, the nature of the human motivational system
link |
03:49:03.620
is that we seem to gravitate towards situations
link |
03:49:08.620
where the best thing in the large scale
link |
03:49:12.740
is not the best thing in the small scale
link |
03:49:15.860
according to our subjective value system.
link |
03:49:18.100
So we gravitate towards subjective value judgments
link |
03:49:20.740
where to gratify ourselves in the large,
link |
03:49:22.940
we have to ungratify ourselves in the small.
link |
03:49:25.620
And we do that in, you see that in music,
link |
03:49:29.340
there's a theory of music which says
link |
03:49:31.740
the key to musical aesthetics
link |
03:49:33.780
is the surprising fulfillment of expectations.
link |
03:49:36.860
Like you want something that will fulfill
link |
03:49:38.900
the expectations elicited in the prior part of the music,
link |
03:49:41.820
but in a way with a bit of a twist that surprises you.
link |
03:49:44.820
And I mean, that's true not only in out-there music
link |
03:49:48.140
like my own or that of Zappa or Steve Vai or Buckethead
link |
03:49:53.300
or Krzysztof Penderecki or something,
link |
03:49:55.460
it's even there in Mozart or something.
link |
03:49:57.980
It's not there in elevator music too much,
link |
03:49:59.980
but that's why it's boring, right?
link |
03:50:02.940
But wrapped up in there is we want to hurt a little bit
link |
03:50:07.540
so that we can feel the pain go away.
link |
03:50:11.300
Like we wanna be a little confused by what's coming next.
link |
03:50:15.700
So then when the thing that comes next actually makes sense,
link |
03:50:18.380
it's so satisfying, right?
link |
03:50:19.940
That's the surprising fulfillment of expectations,
link |
03:50:22.300
is that what you said?
link |
03:50:23.140
Yeah, yeah, yeah.
link |
03:50:23.960
So beautifully put.
link |
03:50:24.800
We've been skirting around a little bit,
link |
03:50:26.820
but if I were to ask you the most ridiculous big question
link |
03:50:29.380
of what is the meaning of life,
link |
03:50:32.740
what would your answer be?
link |
03:50:37.340
Three values, joy, growth, and choice.
link |
03:50:43.580
I think you need joy.
link |
03:50:46.420
I mean, that's the basis of everything.
link |
03:50:48.060
If you want the number one value.
link |
03:50:49.700
On the other hand, I'm unsatisfied with a static joy
link |
03:50:54.860
that doesn't progress perhaps because of some
link |
03:50:58.100
element of human perversity,
link |
03:51:00.140
but the idea of something that grows
link |
03:51:02.220
and becomes more and more and better and better
link |
03:51:04.860
in some sense appeals to me.
link |
03:51:06.780
But I also sort of like the idea of individuality
link |
03:51:10.580
that as a distinct system, I have some agency.
link |
03:51:14.500
So there's some nexus of causality within this system
link |
03:51:18.820
rather than the causality being wholly evenly distributed
link |
03:51:22.420
over the joyous growing mass.
link |
03:51:23.920
So you start with joy, growth, and choice
link |
03:51:27.080
as three basic values.
link |
03:51:28.860
Those three things could continue indefinitely.
link |
03:51:31.940
That's something that can last forever.
link |
03:51:35.180
Is there some aspect of something you called,
link |
03:51:38.740
which I like, super longevity that you find exciting?
link |
03:51:44.980
Research wise, are there ideas in that space?
link |
03:51:48.340
I mean, I think, yeah, in terms of the meaning of life,
link |
03:51:53.240
this really ties into that because for us as humans,
link |
03:51:58.020
probably the way to get the most joy, growth, and choice
link |
03:52:02.260
is transhumanism and to go beyond the human form
link |
03:52:06.180
that we have right now, right?
link |
03:52:08.420
I mean, I think human body is great
link |
03:52:10.980
and by no means do any of us maximize the potential
link |
03:52:15.140
for joy, growth, and choice immanent in our human bodies.
link |
03:52:18.560
On the other hand, it's clear that other configurations
link |
03:52:21.780
of matter could manifest even greater amounts
link |
03:52:25.260
of joy, growth, and choice than humans do,
link |
03:52:29.620
maybe even finding ways to go beyond the realm of matter
link |
03:52:33.140
as we understand it right now.
link |
03:52:34.940
So I think in a practical sense,
link |
03:52:38.100
much of the meaning I see in human life
link |
03:52:40.740
is to create something better than humans
link |
03:52:42.880
and go beyond human life.
link |
03:52:45.460
But certainly that's not all of it for me
link |
03:52:47.980
in a practical sense, right?
link |
03:52:49.220
Like I have four kids and a granddaughter
link |
03:52:51.740
and many friends and parents and family
link |
03:52:55.060
and just enjoying everyday human social existence.
link |
03:52:59.740
But we can do even better.
link |
03:53:00.900
Yeah, yeah.
link |
03:53:01.740
And I mean, I love, I've always,
link |
03:53:03.860
when I could live near nature,
link |
03:53:05.700
I spend a bunch of time out in nature in the forest
link |
03:53:08.740
and on the water every day and so forth.
link |
03:53:10.940
So, I mean, enjoying the pleasant moment is part of it,
link |
03:53:15.040
but the growth and choice aspect are severely limited
link |
03:53:20.780
by our human biology.
link |
03:53:22.420
In particular, dying seems to inhibit your potential
link |
03:53:25.980
for personal growth considerably as far as we know.
link |
03:53:29.520
I mean, there's some element of life after death perhaps,
link |
03:53:32.980
but even if there is,
link |
03:53:34.980
why not also continue going in this biological realm, right?
link |
03:53:39.300
In super longevity, I mean,
link |
03:53:43.300
you know, we haven't yet cured aging.
link |
03:53:45.580
We haven't yet cured death.
link |
03:53:48.020
Certainly there's very interesting progress all around.
link |
03:53:51.860
I mean, CRISPR and gene editing can be an incredible tool.
link |
03:53:57.220
And I mean, right now,
link |
03:54:00.120
stem cells could potentially prolong life a lot.
link |
03:54:03.180
Like if you got stem cell injections
link |
03:54:05.980
of just stem cells for every tissue of your body
link |
03:54:09.140
injected into every tissue,
link |
03:54:11.360
and you can just have replacement of your old cells
link |
03:54:15.360
with new cells produced by those stem cells,
link |
03:54:17.340
I mean, that could be highly impactful at prolonging life.
link |
03:54:21.240
Now we just need slightly better technology
link |
03:54:23.260
for having them grow, right?
link |
03:54:25.420
So using machine learning to guide procedures
link |
03:54:28.840
for stem cell differentiation and transdifferentiation,
link |
03:54:32.700
it's kind of nitty gritty,
link |
03:54:33.740
but I mean, that's quite interesting.
link |
03:54:36.680
So I think there's a lot of different things being done
link |
03:54:41.060
to help with prolongation of human life,
link |
03:54:44.740
but we could do a lot better.
link |
03:54:47.560
So for example, the extracellular matrix,
link |
03:54:51.460
which is the bunch of proteins
link |
03:54:52.620
in between the cells in your body,
link |
03:54:54.300
they get stiffer and stiffer as you get older.
link |
03:54:57.360
And the extracellular matrix transmits information
link |
03:55:01.300
both electrically, mechanically,
link |
03:55:03.540
and to some extent, biophotonically.
link |
03:55:05.380
So there's all this transmission
link |
03:55:07.280
through the parts of the body,
link |
03:55:08.880
but the stiffer the extracellular matrix gets,
link |
03:55:11.860
the less the transmission happens,
link |
03:55:13.520
which makes your body get worse coordinated
link |
03:55:15.660
between the different organs as you get older.
link |
03:55:17.460
So my friend Christian Schafmeister
link |
03:55:19.460
at my alma mater,
link |
03:55:22.460
the great Temple University,
link |
03:55:25.100
Christian Schafmeister has a potential solution to this,
link |
03:55:28.640
where he has these novel molecules called spiroligomers,
link |
03:55:32.340
which are like polymers that are not organic.
link |
03:55:34.440
They're specially designed polymers
link |
03:55:37.780
so that you can algorithmically predict
link |
03:55:39.420
exactly how they'll fold very simply.
link |
03:55:41.580
So he designed the molecular scissors
link |
03:55:43.280
made of spiroligomers that you could eat
link |
03:55:45.560
and would then cut through all the glucosepane
link |
03:55:49.220
and other crosslinked proteins
link |
03:55:50.620
in your extracellular matrix, right?
link |
03:55:52.760
But to make that technology really work
link |
03:55:55.200
and be mature is several years of work, and
link |
03:55:56.860
as far as I know, no one's funding it at the moment.
link |
03:56:00.140
So there's so many different ways
link |
03:56:02.380
that technology could be used to prolong longevity.
link |
03:56:05.080
What we really need,
link |
03:56:06.540
we need an integrated database of all biological knowledge
link |
03:56:09.580
about human beings and model organisms,
link |
03:56:12.020
like hopefully a massively distributed
link |
03:56:14.480
OpenCog bio-Atomspace,
link |
03:56:15.980
but it can exist in other forms too.
link |
03:56:18.260
We need that data to be opened up
link |
03:56:20.860
in a suitably privacy protecting way.
link |
03:56:23.300
We need massive funding into machine learning,
link |
03:56:26.100
AGI, proto AGI statistical research
link |
03:56:29.240
aimed at solving biology,
link |
03:56:31.240
both molecular biology and human biology
link |
03:56:33.440
based on this massive data set, right?
link |
03:56:36.700
And then we need regulators not to stop people
link |
03:56:40.700
from trying radical therapies on themselves
link |
03:56:43.820
if they so wish to,
link |
03:56:46.180
as well as better cloud based platforms
link |
03:56:49.420
for like automated experimentation on microorganisms,
link |
03:56:52.720
flies and mice and so forth.
link |
03:56:54.300
And we could do all this.
link |
03:56:55.820
Look at what happened after the last financial crisis.
link |
03:56:58.900
Obama, who I generally like pretty well,
link |
03:57:01.300
gave $4 trillion to large banks
link |
03:57:03.740
and insurance companies.
link |
03:57:05.400
You know, now in this COVID crisis,
link |
03:57:08.420
trillions are being spent to help everyday people
link |
03:57:10.780
and small businesses.
link |
03:57:12.240
In the end, we'll probably find many more trillions
link |
03:57:14.580
are being given to large banks and insurance companies.
link |
03:57:17.220
Anyway, like could the world put $10 trillion
link |
03:57:21.020
into making a massive holistic bio AI and bio simulation
link |
03:57:25.560
and experimental biology infrastructure?
link |
03:57:27.800
We could, we could put $10 trillion into that
link |
03:57:30.600
without even screwing us up too badly.
link |
03:57:32.300
Just as in the end COVID and the last financial crisis
link |
03:57:35.260
won't screw up the world economy so badly.
link |
03:57:37.900
We're not putting $10 trillion into that.
link |
03:57:39.900
Instead, all this research is siloed inside
link |
03:57:43.140
a few big companies and government agencies.
link |
03:57:46.820
And most of the data that comes from our individual bodies
link |
03:57:51.140
personally, that could feed this AI to solve aging
link |
03:57:54.340
and death, most of that data is sitting
link |
03:57:56.820
in some hospital's database doing nothing, right?
link |
03:58:03.960
I got two more quick questions for you.
link |
03:58:07.160
One, I know a lot of people are gonna ask me,
link |
03:58:09.820
you are on the Joe Rogan podcast
link |
03:58:11.740
wearing that same amazing hat.
link |
03:58:14.860
Do you have a origin story for the hat?
link |
03:58:17.500
Does the hat have its own story that you're able to share?
link |
03:58:21.420
The hat story has not been told yet.
link |
03:58:23.180
So we're gonna have to come back
link |
03:58:24.220
and you can interview the hat.
link |
03:58:27.880
We'll leave that for the hat's own interview.
link |
03:58:30.060
All right.
link |
03:58:30.900
It's too much to pack into.
link |
03:58:32.100
Is there a book?
link |
03:58:32.940
Is the hat gonna write a book?
link |
03:58:34.320
Okay.
link |
03:58:35.160
Well, it may transmit the information
link |
03:58:38.340
through direct neural transmission.
link |
03:58:40.020
Okay, so it's actually,
link |
03:58:41.420
there might be some Neuralink competition there.
link |
03:58:44.780
Beautiful, we'll leave it as a mystery.
link |
03:58:46.900
Maybe one last question.
link |
03:58:49.040
If you build an AGI system,
link |
03:58:54.580
you're successful at building the AGI system
link |
03:58:58.540
that could lead us to the singularity
link |
03:59:00.420
and you get to talk to her and ask her one question,
link |
03:59:04.560
what would that question be?
link |
03:59:05.960
We're not allowed to ask,
link |
03:59:08.140
what is the question I should be asking?
link |
03:59:10.220
Yeah, that would be cheating,
link |
03:59:12.220
but I guess that's a good question.
link |
03:59:14.040
I'm thinking of a,
link |
03:59:15.700
I wrote a story with Stephan Bugaj once
link |
03:59:18.600
where these AI developers,
link |
03:59:23.380
they created a super smart AI
link |
03:59:25.900
aimed at answering all the philosophical questions
link |
03:59:31.220
that have been worrying them.
link |
03:59:32.060
Like what is the meaning of life?
link |
03:59:34.260
Is there free will?
link |
03:59:35.700
What is consciousness and so forth?
link |
03:59:37.980
So they got the super AGI built
link |
03:59:40.380
and it churned a while.
link |
03:59:43.300
It said, those are really stupid questions.
link |
03:59:46.580
And then it set off on a spaceship and left the Earth.
link |
03:59:51.420
So you'd be afraid of scaring it off.
link |
03:59:55.540
That's it, yeah.
link |
03:59:56.500
I mean, honestly, there is no one question
link |
04:00:01.500
that rises among all the others, really.
link |
04:00:08.540
I mean, what interests me more
link |
04:00:10.020
is upgrading my own intelligence
link |
04:00:13.500
so that I can absorb the whole world view of the super AGI.
link |
04:00:19.380
But I mean, of course, if the answer could be like,
link |
04:00:23.100
what is the chemical formula for the immortality pill?
link |
04:00:27.500
Like then I would do that or emit a bit string,
link |
04:00:33.340
which will be the code for a super AGI
link |
04:00:38.740
on the Intel i7 processor.
link |
04:00:41.220
So those would be good questions.
link |
04:00:42.860
So if your own mind was expanded
link |
04:00:46.260
to become super intelligent, like you're describing,
link |
04:00:49.340
I mean, there's kind of a notion
link |
04:00:53.500
that intelligence is a burden, that it's possible
link |
04:00:57.840
that with greater and greater intelligence,
link |
04:01:00.020
that other metric of joy that you mentioned
link |
04:01:03.020
becomes more and more difficult.
link |
04:01:04.740
What's your sense?
link |
04:01:05.900
Pretty stupid idea.
link |
04:01:08.260
So you think if you're super intelligent,
link |
04:01:09.860
you can also be super joyful?
link |
04:01:11.460
I think getting root access to your own brain
link |
04:01:15.460
will enable new forms of joy that we don't have now.
link |
04:01:19.220
And I think as I've said before,
link |
04:01:22.740
what I aim at is really to make multiple versions of myself.
link |
04:01:27.820
So I would like to keep one version,
link |
04:01:30.180
which is basically human like I am now,
link |
04:01:33.580
but keep the dial to turn pain up and down
link |
04:01:36.980
and get rid of death, right?
link |
04:01:38.580
And make another version which fuses its mind
link |
04:01:43.640
with superhuman AGI,
link |
04:01:46.600
and then will become massively transhuman.
link |
04:01:50.060
And whether it will send some messages back
link |
04:01:52.800
to the human me or not will be interesting to find out.
link |
04:01:55.580
The thing is, once you're a super AGI,
link |
04:01:58.500
like one subjective second to a human
link |
04:02:01.540
might be like a million subjective years
link |
04:02:03.620
to that super AGI, right?
link |
04:02:04.980
So it would be on a whole different basis.
link |
04:02:07.580
I mean, at very least those two copies will be good to have,
link |
04:02:10.940
but it could be interesting to put your mind
link |
04:02:13.980
into a dolphin or a space amoeba
link |
04:02:16.860
or all sorts of other things.
link |
04:02:18.520
You can imagine one version that doubled its intelligence
link |
04:02:21.060
every year and another version that just became
link |
04:02:24.140
a super AGI as fast as possible, right?
link |
04:02:26.140
So, I mean, now we're sort of constrained to think
link |
04:02:29.780
one mind, one self, one body, right?
link |
04:02:33.020
But I think we actually, we don't need to be that
link |
04:02:36.260
constrained in thinking about future intelligence
link |
04:02:40.820
after we've mastered AGI and nanotechnology
link |
04:02:44.280
and longevity biology.
link |
04:02:47.820
I mean, then each of our minds
link |
04:02:49.540
is a certain pattern of organization, right?
link |
04:02:52.020
And I know we haven't talked about consciousness,
link |
04:02:54.300
but I sort of, I'm panpsychist.
link |
04:02:56.860
I sort of view the universe as conscious.
link |
04:03:00.080
And so, you know, a light bulb or a quark
link |
04:03:03.860
or an ant or a worm or a monkey
link |
04:03:06.040
have their own manifestations of consciousness.
link |
04:03:08.780
And the human manifestation of consciousness,
link |
04:03:11.900
it's partly tied to the particular meat
link |
04:03:15.580
that we're manifested by, but it's largely tied
link |
04:03:19.380
to the pattern of organization in the brain, right?
link |
04:03:22.360
So, if you upload yourself into a computer
link |
04:03:25.040
or a robot or whatever else it is,
link |
04:03:28.640
some element of your human consciousness may not be there
link |
04:03:31.780
because it's just tied to the biological embodiment.
link |
04:03:34.260
But I think most of it will be there.
link |
04:03:36.300
And these will be incarnations of your consciousness
link |
04:03:40.020
in a slightly different flavor.
link |
04:03:42.500
And, you know, creating these different versions
link |
04:03:45.600
will be amazing, and each of them will discover
link |
04:03:48.500
meanings of life that have some overlap,
link |
04:03:52.020
but probably not total overlap
link |
04:03:54.300
with the human Ben's meaning of life.
link |
04:03:59.260
The thing is, to get to that future
link |
04:04:02.940
where we can explore different varieties of joy,
link |
04:04:06.500
different variations of human experience and values
link |
04:04:09.680
and transhuman experiences and values to get to that future,
link |
04:04:13.140
we need to navigate through a whole lot of human bullshit
link |
04:04:16.780
of companies and governments and killer drones
link |
04:04:21.480
and making and losing money and so forth, right?
link |
04:04:25.460
And that's the challenge we're facing now
link |
04:04:28.580
is if we do things right,
link |
04:04:30.740
we can get to a benevolent singularity,
link |
04:04:33.580
which is levels of joy, growth, and choice
link |
04:04:36.320
that are literally unimaginable to human beings.
link |
04:04:39.920
If we do things wrong,
link |
04:04:41.720
we could either annihilate all life on the planet,
link |
04:04:44.120
or we could lead to a scenario where, say,
link |
04:04:47.060
all humans are annihilated and there's some super AGI
link |
04:04:52.140
that goes on and does its own thing unrelated to us
link |
04:04:55.460
except via our role in originating it.
link |
04:04:58.380
And we may well be at a bifurcation point now, right?
link |
04:05:02.420
Where what we do now has significant causal impact
link |
04:05:05.820
on what comes about,
link |
04:05:06.720
and yet most people on the planet
link |
04:05:09.040
aren't thinking that way whatsoever,
link |
04:05:11.540
they're thinking only about their own narrow aims
link |
04:05:16.220
and goals, right?
link |
04:05:17.780
Now, of course, I'm thinking about my own narrow aims
link |
04:05:20.880
and goals to some extent also,
link |
04:05:24.260
but I'm trying to use as much of my energy and mind as I can
link |
04:05:29.480
to push toward this more benevolent alternative,
link |
04:05:33.200
which will be better for me,
link |
04:05:34.660
but also for everybody else.
link |
04:05:37.980
And it's weird that so few people understand
link |
04:05:42.540
what's going on.
link |
04:05:43.380
I know you interviewed Elon Musk,
link |
04:05:44.780
and he understands a lot of what's going on,
link |
04:05:47.380
but he's much more paranoid than I am, right?
link |
04:05:49.620
Because Elon gets that AGI
link |
04:05:52.040
is gonna be way, way smarter than people,
link |
04:05:54.260
and he gets that an AGI does not necessarily
link |
04:05:57.100
have to give a shit about people
link |
04:05:58.740
because we're a very elementary mode of organization
link |
04:06:01.660
of matter compared to many AGIs.
link |
04:06:04.700
But I don't think he has a clear vision
link |
04:06:06.340
of how infusing early stage AGIs
link |
04:06:10.140
with compassion and human warmth
link |
04:06:13.540
can lead to an AGI that loves and helps people
link |
04:06:18.020
rather than viewing us as a historical artifact
link |
04:06:22.860
and a waste of mass energy.
link |
04:06:26.200
But on the other hand,
link |
04:06:28.060
while I have some disagreements with him,
link |
04:06:29.600
like he understands way, way more of the story
link |
04:06:33.140
than almost anyone else
link |
04:06:34.820
in such a large scale corporate leadership position, right?
link |
04:06:38.180
It's terrible how little understanding
link |
04:06:40.740
of these fundamental issues exists out there now.
link |
04:06:45.060
That may be different five or 10 years from now though,
link |
04:06:47.220
because I can see understanding of AGI and longevity
link |
04:06:51.180
and other such issues is certainly much stronger
link |
04:06:54.620
and more prevalent now than 10 or 15 years ago, right?
link |
04:06:57.620
So I mean, humanity as a whole can be slow learners
link |
04:07:02.860
relative to what I would like,
link |
04:07:05.460
but on a historical sense, on the other hand,
link |
04:07:08.400
you could say the progress is astoundingly fast.
link |
04:07:11.220
But Elon also said, I think on the Joe Rogan podcast,
link |
04:07:15.640
that love is the answer.
link |
04:07:17.380
So maybe in that way, you and him are both on the same page
link |
04:07:21.820
of how we should proceed with AGI.
link |
04:07:24.420
I think there's no better place to end it.
link |
04:07:27.300
I hope we get to talk again about the hat
link |
04:07:30.860
and about consciousness
link |
04:07:32.020
and about a million topics we didn't cover.
link |
04:07:34.500
Ben, it's a huge honor to talk to you.
link |
04:07:36.340
Thank you for making it out.
link |
04:07:37.540
Thank you for talking today.
link |
04:07:39.540
Thanks for having me.
link |
04:07:40.440
This was really, really good fun
link |
04:07:44.380
and we dug deep into some very important things.
link |
04:07:47.420
So thanks for doing this.
link |
04:07:48.740
Thanks very much.
link |
04:07:49.820
Awesome.
link |
04:07:51.200
Thanks for listening to this conversation with Ben Goertzel
link |
04:07:53.860
and thank you to our sponsors,
link |
04:07:55.860
The Jordan Harbinger Show and Masterclass.
link |
04:07:59.380
Please consider supporting the podcast
link |
04:08:01.080
by going to jordanharbinger.com slash lex
link |
04:08:04.580
and signing up to Masterclass at masterclass.com slash lex.
link |
04:08:09.800
Click the links, buy the stuff.
link |
04:08:12.280
It's the best way to support this podcast
link |
04:08:14.220
and the journey I'm on in my research and startup.
link |
04:08:18.860
If you enjoy this thing, subscribe on YouTube,
link |
04:08:21.380
review it with five stars on Apple Podcast,
link |
04:08:23.720
support it on Patreon or connect with me on Twitter
link |
04:08:26.860
at lexfriedman spelled without the E, just F R I D M A N.
link |
04:08:32.400
I'm sure eventually you will figure it out.
link |
04:08:35.280
And now let me leave you with some words from Ben Goertzel.
link |
04:08:39.140
Our language for describing emotions is very crude.
link |
04:08:42.540
That's what music is for.
link |
04:08:43.940
Thank you for listening and hope to see you next time.