
Pamela McCorduck: Machines Who Think and the Early Days of AI | Lex Fridman Podcast #34



link |
00:00:00.000
The following is a conversation with Pamela McCorduck. She's an author who has written on
link |
00:00:04.800
the history and the philosophical significance of artificial intelligence. Her books include
link |
00:00:10.400
Machines Who Think in 1979, The Fifth Generation in 1983 with Ed Feigenbaum, who's considered to
link |
00:00:18.160
be the father of expert systems, The Edge of Chaos, The Futures of Women, and many more books.
link |
00:00:24.000
I came across her work in an unusual way by stumbling on a quote from Machines Who Think
link |
00:00:29.520
that is something like, artificial intelligence began with the ancient wish to forge the gods.
link |
00:00:37.040
That was a beautiful way to draw a connecting line between our societal relationship with AI
link |
00:00:42.960
from the grounded day to day science, math and engineering, to popular stories and science
link |
00:00:48.560
fiction and myths of automatons that go back for centuries. Through her literary work,
link |
00:00:54.800
she has spent a lot of time with the seminal figures of artificial intelligence, including
link |
00:01:00.480
the founding fathers of AI from the 1956 Dartmouth summer workshop where the field was launched.
link |
00:01:08.480
I reached out to Pamela for a conversation in hopes of getting a sense of what those early
link |
00:01:13.760
days were like, and how their dreams continue to reverberate through the work of our community
link |
00:01:19.200
today. I often don't know where the conversation may take us, but I jump in and see. Having no
link |
00:01:25.600
constraints, rules, or goals is a wonderful way to discover new ideas. This is the Artificial
link |
00:01:31.760
Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes,
link |
00:01:37.840
support it on Patreon, or simply connect with me on Twitter, at Lex Fridman, spelled F R I D M
link |
00:01:44.720
A N. And now, here's my conversation with Pamela McCorduck. In 1979, your book Machines Who Think
link |
00:01:55.040
was published. In it, you interview some of the early AI pioneers and explore the idea that
link |
00:02:00.720
AI was born not out of maybe math and computer science, but out of myth and legend. So, tell me
link |
00:02:10.400
if you could the story of how you first arrived at the book, the journey of beginning to write it.
link |
00:02:19.040
I had been a novelist. I'd published two novels, and I was sitting under the portal at Stanford
link |
00:02:29.120
one day, the house we were renting for the summer. And I thought, I should write a novel about these
link |
00:02:33.920
weird people in AI, I know. And then I thought, ah, don't write a novel, write a history. Simple.
link |
00:02:41.360
Just go around, interview them, splice it together, voila, instant book. Ha, ha, ha. It was
link |
00:02:48.240
much harder than that. But nobody else was doing it. And so, I thought, well, this is a great
link |
00:02:54.400
opportunity. And there were people who, John McCarthy, for example, thought it was a nutty
link |
00:03:03.760
idea. The field had not evolved yet, so on. And he had some mathematical thing he thought I should
link |
00:03:11.040
write instead. And I said, no, John, I am not a woman in search of a project. This is what I want
link |
00:03:17.840
to do. I hope you'll cooperate. And he said, oh, mutter, mutter, well, okay, it's your time.
link |
00:03:24.560
What was the pitch for the, I mean, such a young field at that point. How do you write
link |
00:03:30.800
a personal history of a field that's so young? I said, this is wonderful. The founders of the
link |
00:03:37.040
field are alive and kicking and able to talk about what they're doing. Did they sound or feel like
link |
00:03:42.720
founders at the time? Did they know that they have founded something?
link |
00:03:48.000
Oh, yeah. They knew what they were doing was very important. Very. What I now see in retrospect
link |
00:03:56.160
is that they were at the height of their research careers. And it's humbling to me that they took
link |
00:04:04.320
time out from all the things that they had to do as a consequence of being there. And to talk to
link |
00:04:11.440
this woman who said, I think I'm going to write a book about you. No, it was amazing. Just amazing.
link |
00:04:17.040
So who stands out to you? Maybe looking 63 years ago, the Dartmouth conference,
link |
00:04:26.480
so Marvin Minsky was there, McCarthy was there, Claude Shannon, Allen Newell, Herb Simon,
link |
00:04:32.960
some of the folks you've mentioned. Then there's other characters, right? One of your coauthors
link |
00:04:40.080
He wasn't at Dartmouth.
link |
00:04:43.120
He wasn't at Dartmouth.
link |
00:04:43.920
No. He was, I think, an undergraduate then.
link |
00:04:47.680
And of course, Joe Traub. All of these are players, not at Dartmouth, but in that era.
link |
00:04:56.000
Right.
link |
00:04:57.600
CMU and so on. So who are the characters, if you could paint a picture, that stand out to you
link |
00:05:02.960
from memory? Those people you've interviewed and maybe not, people that were just in the
link |
00:05:08.400
In the atmosphere.
link |
00:05:09.920
In the atmosphere.
link |
00:05:11.840
Of course, the four founding fathers were extraordinary guys. They really were.
link |
00:05:15.920
Who are the founding fathers?
link |
00:05:18.560
Allen Newell, Herbert Simon, Marvin Minsky, John McCarthy. They were the four who were not only
link |
00:05:24.800
at the Dartmouth conference, but Newell and Simon arrived there with a working program
link |
00:05:29.600
called The Logic Theorist. Everybody else had great ideas about how they might do it, but
link |
00:05:34.960
But they weren't going to do it yet.
link |
00:05:41.040
And you mentioned Joe Traub, my husband. I was immersed in AI before I met Joe
link |
00:05:50.080
because I had been Ed Feigenbaum's assistant at Stanford. And before that,
link |
00:05:55.040
I had worked on a book edited by Feigenbaum and Julian Feldman called Computers and Thought.
link |
00:06:04.320
It was the first textbook of readings of AI. And they only did it because they were trying to teach
link |
00:06:10.480
AI to people at Berkeley. And there was nothing, you'd have to send them to this journal and that
link |
00:06:15.040
journal. This was not the internet where you could go look at an article. So I was fascinated from
link |
00:06:22.240
the get go by AI. I was an English major. What did I know? And yet I was fascinated. And that's
link |
00:06:30.960
why you saw that historical, that literary background, which I think is very much a part
link |
00:06:38.080
of the continuum of AI, that AI grew out of that same impulse. That traditional, what was,
link |
00:06:47.600
what drew you to AI? How did you even think of it back then? What was the possibilities,
link |
00:06:54.880
the dreams? What was interesting to you? The idea of intelligence outside the human cranium,
link |
00:07:03.200
this was a phenomenal idea. And even when I finished Machines Who Think,
link |
00:07:08.960
I didn't know if they were going to succeed. In fact, the final chapter is very wishy washy,
link |
00:07:15.120
frankly. Succeed, the field did. Yeah. So was there the idea that AI began with the wish to
link |
00:07:25.760
forge the gods? So the spiritual component that we crave to create this other thing greater than
link |
00:07:33.760
ourselves. For those guys, I don't think so. Newell and Simon were cognitive psychologists.
link |
00:07:42.320
What they wanted was to simulate aspects of human intelligence,
link |
00:07:49.040
and they found they could do it on the computer. Minsky just thought it was a really cool thing
link |
00:07:57.280
to do. Likewise, McCarthy. McCarthy had got the idea in 1949 when he was a Caltech student.
link |
00:08:06.160
And he listened to somebody's lecture. It's in my book. I forget who it was. And he thought,
link |
00:08:15.520
oh, that would be fun to do. How do we do that? And he took a very mathematical approach.
link |
00:08:21.520
Minsky was hybrid, and Newell and Simon were very much cognitive psychology. How can we simulate
link |
00:08:29.440
various things about human cognition? What happened over the many years is, of course,
link |
00:08:37.120
our definition of intelligence expanded tremendously. These days, biologists are
link |
00:08:44.800
comfortable talking about the intelligence of the cell, the intelligence of the brain,
link |
00:08:49.240
not just human brain, but the intelligence of any kind of brain. Cephalopods, I mean, an octopus is
link |
00:09:00.560
really intelligent by any account. We wouldn't have thought of that in the 60s, even the 70s.
link |
00:09:06.880
So all these things have worked in. And I did hear one behavioral primatologist, Frans de Waal,
link |
00:09:16.320
say, AI taught us the questions to ask. Yeah, this is what happens, right? When you try to build it,
link |
00:09:26.240
is when you start to actually ask questions. It puts a mirror to ourselves. Yeah, right. So you
link |
00:09:32.400
were there in the middle of it. It seems like not many people were asking the questions that
link |
00:09:38.880
you were, or just trying to look at this field the way you were. I was solo. When I went to
link |
00:09:45.920
get funding for this because I needed somebody to transcribe the interviews and I needed travel
link |
00:09:53.800
expenses, I went to everything you could think of, the NSF, the DARPA. There was an Air Force
link |
00:10:07.160
place that doled out money. And each of them said, well, that's a very interesting idea.
link |
00:10:15.480
But we'll think about it. And the National Science Foundation actually said to me in plain English,
link |
00:10:23.960
hey, you're only a writer. You're not a historian of science. And I said, yeah, that's true. But
link |
00:10:30.480
the historians of science will be crawling all over this field. I'm writing for the general
link |
00:10:35.400
audience, so I thought. And they still wouldn't budge. I finally got a private grant without
link |
00:10:43.880
knowing who it was from, from Ed Fredkin at MIT. He was a wealthy man, and he liked what he called
link |
00:10:51.400
crackpot ideas. And he considered this a crackpot idea, and he was willing to support it. I am ever
link |
00:10:58.680
grateful, let me say that. Some would say that a history of science approach to AI, or even just a
link |
00:11:06.720
history, or anything like the book that you've written, hasn't been written since. Maybe I'm
link |
00:11:13.760
not familiar, but it's certainly not many. If we think about bigger than just these couple of
link |
00:11:20.240
decades, few decades, what are the roots of AI? Oh, they go back so far. Yes, of course, there's
link |
00:11:30.640
all the legendary stuff, the Golem and the early robots of the 20th century. But they go back much
link |
00:11:41.240
further than that. If you read Homer, Homer has robots in the Iliad. And a classical scholar was
link |
00:11:49.680
pointing out to me just a few months ago, well, you said you just read the Odyssey. The Odyssey
link |
00:11:54.120
is full of robots. It is? I said. Yeah. How do you think Odysseus's ship gets from one place to
link |
00:12:00.800
another? He doesn't have the crew people to do that, the crewmen. Yeah, it's magic. It's robots.
link |
00:12:07.320
Oh, I thought, how interesting. So we've had this notion of AI for a long time. And then toward the
link |
00:12:17.240
end of the 19th century, the beginning of the 20th century, there were scientists who actually
link |
00:12:23.080
tried to make this happen some way or another, not successfully. They didn't have the technology for
link |
00:12:29.520
it. And of course, Babbage in the 1850s and 60s, he saw that what he was building was capable of
link |
00:12:40.080
intelligent behavior. And when he ran out of funding, the British government finally said,
link |
00:12:47.080
that's enough. He and Lady Lovelace decided, oh, well, why don't we play the ponies with this? He
link |
00:12:55.880
had other ideas for raising money too. But if we actually reach back once again, I think people
link |
00:13:02.400
don't actually really know that robots do appear and ideas of robots. You talk about the Hellenic
link |
00:13:09.160
and the Hebraic points of view. Oh, yes. Can you tell me about each? I defined it this way. The
link |
00:13:16.760
Hellenic point of view is robots are great. They are party help. They help this guy Hephaestus,
link |
00:13:25.160
this god Hephaestus in his forge. I presume he made them to help him and so on and so forth.
link |
00:13:32.560
And they welcome the whole idea of robots. The Hebraic view has to do with, I think it's the
link |
00:13:40.120
second commandment, thou shalt not make any graven image. In other words, you better not
link |
00:13:47.280
start imitating humans because that's just forbidden. It's the second commandment. And
link |
00:13:55.200
a lot of the reaction to artificial intelligence has been a sense that this is somehow wicked,
link |
00:14:08.800
this is somehow blasphemous. We shouldn't be going there. Now, you can say, yeah, but there are going
link |
00:14:17.600
to be some downsides. And I say, yes, there are, but blasphemy is not one of them.
link |
00:14:21.840
You know, there is a kind of fear that feels to be almost primal. Is there religious roots to that?
link |
00:14:29.800
Because so much of our society has religious roots. And so there is a feeling of, like you
link |
00:14:36.280
said, blasphemy of creating the other, of creating something, you know, it doesn't have to be
link |
00:14:43.800
artificial intelligence. It's creating life in general. It's the Frankenstein idea.
link |
00:14:48.640
There's the annotated Frankenstein on my coffee table. It's a tremendous novel. It really is just
link |
00:14:56.080
beautifully perceptive. Yes, we do fear this and we have good reason to fear it,
link |
00:15:03.880
but because it can get out of hand. Maybe you can speak to that fear,
link |
00:15:08.760
the psychology, if you've thought about it. You know, there's a practical set of fears,
link |
00:15:12.960
concerns in the short term. You can think if we actually think about artificial intelligence
link |
00:15:17.800
systems, you can think about bias or discrimination in algorithms. You can think about how social
link |
00:15:29.160
networks have algorithms that recommend the content you see, thereby these algorithms control
link |
00:15:35.520
the behavior of the masses. There's these concerns. But to me, it feels like the fear
link |
00:15:40.320
that people have is deeper than that. So have you thought about the psychology of it?
link |
00:15:46.280
I think in a superficial way I have. There is this notion that if we produce a machine that
link |
00:15:57.240
can think, it will outthink us and therefore replace us.
link |
00:16:01.240
I guess that's a primal fear, almost a kind of mortality. So around the time you said
link |
00:16:11.960
you worked at Stanford with Ed Feigenbaum. So let's look at that one person. Throughout his
link |
00:16:21.760
history, clearly a key person, one of the many in the history of AI. How has he changed in general
link |
00:16:31.240
around him? How has Stanford changed in the last, how many years are we talking about here?
link |
00:16:36.440
Oh, since '65.
link |
00:16:38.400
'65. So maybe it doesn't have to be about him. It could be bigger. But because he was a key
link |
00:16:45.000
person in expert systems, for example, how is that, how are these folks who you've interviewed in the
link |
00:16:54.160
70s, 79 changed through the decades?
link |
00:16:58.360
In Ed's case, I know him well. We are dear friends. We see each other every month or so. He told me
link |
00:17:12.240
that when Machines Who Think first came out, he really thought all the front matter was kind of
link |
00:17:17.040
baloney. And 10 years later, he said, no, I see what you're getting at. Yes, this is an impulse
link |
00:17:27.040
that has been a human impulse for thousands of years to create something outside the human
link |
00:17:34.800
cranium that has intelligence. I think it's very hard when you're down at the algorithmic level,
link |
00:17:46.000
and you're just trying to make something work, which is hard enough to step back and think of
link |
00:17:53.000
the big picture. It reminds me of when I was in Santa Fe, I knew a lot of archaeologists,
link |
00:17:59.720
which was a hobby of mine. And I would say, yeah, yeah, well, you can look at the shards and say,
link |
00:18:07.920
oh, this came from this tribe and this came from this trade route and so on. But what about the big
link |
00:18:14.080
picture? And a very distinguished archaeologist said to me, they don't think that way. No,
link |
00:18:21.840
they're trying to match the shard to where it came from. Where did the remainder of this corn
link |
00:18:30.520
come from? Was it grown here? Was it grown elsewhere? And I think this is part of any
link |
00:18:37.360
scientific field. You're so busy doing the hard work, and it is hard work, that you don't step
link |
00:18:46.800
back and say, oh, well, now let's talk about the general meaning of all this. Yes.
link |
00:18:53.120
So none of the even Minsky and McCarthy, they...
link |
00:18:58.320
Oh, those guys did. Yeah. The founding fathers did.
link |
00:19:01.840
Early on or later?
link |
00:19:03.920
Pretty early on. But in a different way from how I looked at it. The two cognitive psychologists,
link |
00:19:11.200
Newell and Simon, they wanted to imagine reforming cognitive psychology so that we would really,
link |
00:19:20.960
really understand the brain. Minsky was more speculative. And John McCarthy saw it as,
link |
00:19:32.960
I think I'm doing him right by this, he really saw it as a great boon for human beings to have
link |
00:19:40.320
this technology. And that was reason enough to do it. And he had wonderful, wonderful
link |
00:19:48.880
fables about how if you do the mathematics, you will see that these things are really good for
link |
00:19:56.800
human beings. And if you had a technological objection, he had an answer, a technological
link |
00:20:03.440
answer. But here's how we could get over that and then blah, blah, blah. And one of his favorite things
link |
00:20:10.320
was what he called the literary problem, which of course he presented to me several times.
link |
00:20:16.400
That is everything in literature, there are conventions in literature. One of the conventions
link |
00:20:23.680
is that you have a villain and a hero. And the hero in most literature is human,
link |
00:20:36.160
and the villain in most literature is a machine. And he said, that's just not the way it's going
link |
00:20:41.680
to be. But that's the way we're used to it. So when we tell stories about AI, it's always
link |
00:20:47.600
with this paradigm. I thought, yeah, he's right. Looking back, the classic R.U.R. is certainly the
link |
00:20:57.040
machines trying to overthrow the humans. Frankenstein is different. Frankenstein is
link |
00:21:06.400
a creature. He never has a name. Frankenstein, of course, is the guy who created him, the human,
link |
00:21:13.440
Dr. Frankenstein. This creature wants to be loved, wants to be accepted. And it is only when
link |
00:21:22.320
Frankenstein turns his head, in fact, runs the other way. And the creature is without love,
link |
00:21:34.480
that he becomes the monster that he later becomes.
link |
00:21:38.560
So who's the villain in Frankenstein? It's unclear, right?
link |
00:21:43.840
Oh, it is unclear, yeah.
link |
00:21:45.520
It's really the people who drive him. By driving him away, they bring out the worst.
link |
00:21:54.240
That's right. They give him no human solace. And he is driven away, you're right.
link |
00:22:00.800
He becomes, at one point, the friend of a blind man. And he serves this blind man,
link |
00:22:08.160
and they become very friendly. But when the sighted people of the blind man's family come in,
link |
00:22:14.880
ah, you've got a monster here. So it's very didactic in its way. And what I didn't know
link |
00:22:23.040
is that Mary Shelley and Percy Shelley were great readers of the literature surrounding abolition
link |
00:22:31.120
in the United States, the abolition of slavery. And they picked that up wholesale. You are making
link |
00:22:38.720
monsters of these people because you won't give them the respect and love that they deserve.
link |
00:22:44.000
Do you have, if we get philosophical for a second, do you worry that once we create
link |
00:22:52.000
machines that are a little bit more intelligent, let's look at Roomba, the vacuums, the cleaner,
link |
00:22:58.080
that this darker part of human nature where we abuse the other, the somebody who's different,
link |
00:23:08.800
will come out?
link |
00:23:09.600
I don't worry about it. I could imagine it happening. But I think that what AI has to offer
link |
00:23:18.560
the human race will be so attractive that people will be won over.
link |
00:23:25.760
So you have looked deep into these people, had deep conversations, and it's interesting to get
link |
00:23:32.480
a sense of stories of the way they were thinking and the way it was changed, the way your own
link |
00:23:42.720
thinking about AI has changed. So you mentioned McCarthy. What about the years at CMU, Carnegie
link |
00:23:51.840
Mellon, with Joe? Sure. Joe was not in AI. He was in algorithmic complexity.
link |
00:24:03.440
Was there always a line between AI and computer science, for example?
link |
00:24:07.280
Is AI its own place of outcasts? Was that the feeling?
link |
00:24:10.880
There was a kind of outcast period for AI. For instance, in 1974, the new field was hardly 10
link |
00:24:24.560
years old. The new field of computer science was asked by the National Science Foundation,
link |
00:24:31.680
I believe, but it may have been the National Academies, I can't remember,
link |
00:24:34.400
to tell your fellow scientists where computer science is and what it means.
link |
00:24:44.160
And they wanted to leave out AI. And they only agreed to put it in because Don Knuth said,
link |
00:24:53.520
hey, this is important. You can't just leave that out.
link |
00:24:57.280
Really? Don Knuth?
link |
00:24:58.240
Don Knuth, yes.
link |
00:24:59.680
I talked to him recently, too. Out of all the people.
link |
00:25:02.960
Yes. But you see, an AI person couldn't have made that argument. He wouldn't have been believed.
link |
00:25:08.640
But Knuth was believed. Yes.
link |
00:25:10.800
So Joe Traub worked on the real stuff.
link |
00:25:15.200
Joe was working on algorithmic complexity. But he would say in plain English again and again,
link |
00:25:22.160
the smartest people I know are in AI.
link |
00:25:24.720
Really?
link |
00:25:25.280
Oh, yes. No question. Anyway, Joe loved these guys. What happened was that I guess it was
link |
00:25:35.760
as I started to write Machines Who Think, Herb Simon and I became very close friends.
link |
00:25:41.360
He would walk past our house on Northumberland Street every day after work. And I would just
link |
00:25:47.200
be putting my cover on my typewriter. And I would lean out the door and say,
link |
00:25:52.160
Herb, would you like a sherry? And Herb almost always would like a sherry. So he'd stop in
link |
00:25:59.440
and we'd talk for an hour, two hours. My journal says we talked this afternoon for three hours.
link |
00:26:06.720
What was on his mind at the time in terms of on the AI side of things?
link |
00:26:11.680
Oh, we didn't talk too much about AI. We talked about other things.
link |
00:26:14.640
Just life.
link |
00:26:15.680
We both love literature. And Herb had read Proust in the original French twice all the
link |
00:26:24.000
way through. I can't. I've read it in English in translation. So we talked about literature.
link |
00:26:30.480
We talked about languages. We talked about music because he loved music. We talked about
link |
00:26:36.240
art because he was actually enough of a painter that he had to give it up because he was afraid
link |
00:26:44.960
it was interfering with his research and so on. So no, it was really just chat, chat.
link |
00:26:51.520
But it was very warm. So one summer I said to Herb, my students have all the really
link |
00:26:59.360
interesting conversations. I was teaching at the University of Pittsburgh then in the English
link |
00:27:03.920
department. They get to talk about the meaning of life and that kind of thing. And what do I have?
link |
00:27:09.920
I have university meetings where we talk about the photocopying budget and whether the course
link |
00:27:17.040
on romantic poetry should be one semester or two. So Herb laughed. He said, yes, I know what you
link |
00:27:23.040
mean. He said, but you could do something about that. Dot, that was his wife, Dot and I used to
link |
00:27:30.640
have a salon at the University of Chicago every Sunday night. And we would have essentially an
link |
00:27:38.560
open house and people knew. It wasn't for a small talk. It was really for some topic of
link |
00:27:47.600
depth. He said, but my advice would be that you choose the topic ahead of time. Fine, I said.
link |
00:27:54.480
So we exchanged mail over the summer. That was US Post in those days because
link |
00:28:01.680
you didn't have personal email. And I decided I would organize it and there would be eight of us,
link |
00:28:12.000
Allen Newell and his wife Noël, Herb Simon and his wife Dorothea. There was a novelist in town,
link |
00:28:21.200
a man named Mark Harris. He had just arrived and his wife Josephine. Mark was most famous then for
link |
00:28:29.680
a novel called Bang the Drum Slowly, which was about baseball. And Joe and me, so eight people.
link |
00:28:36.720
And we met monthly and we just sank our teeth into really hard topics and it was great fun.
link |
00:28:45.760
How have your own views around artificial intelligence changed
link |
00:28:53.600
through the process of writing Machines Who Think and afterwards, the ripple effects?
link |
00:28:57.440
I was a little skeptical that this whole thing would work out. It didn't matter. To me,
link |
00:29:04.160
it was so audacious. AI generally. And in some ways, it hasn't worked out the way I expected
link |
00:29:16.800
so far. That is to say, there's this wonderful lot of apps, thanks to deep learning and so on.
link |
00:29:26.880
But those are algorithmic. And in the part of symbolic processing, there's very little yet.
link |
00:29:39.120
And that's a field that lies waiting for industrious graduate students.
link |
00:29:45.600
Maybe you can tell me about some figures that popped up in your life in the 80s with expert systems
link |
00:29:53.040
where there was the symbolic AI possibilities of what most people think of as AI,
link |
00:30:00.960
if you dream of the possibilities of AI, it's really expert systems. And those hit a few walls
link |
00:30:07.520
and there was challenges there. And I think, yes, they will reemerge again with some new
link |
00:30:12.080
breakthroughs and so on. But what did that feel like, both the possibility and the winter that
link |
00:30:17.760
followed the slowdown in research? Ah, you know, this whole thing about AI winter is to me
link |
00:30:25.040
a crock. So, no winters.
link |
00:30:26.960
Because I look at the basic research that was being done in the 80s, which is supposed to be,
link |
00:30:34.480
my God, it was really important. It was laying down things that nobody had thought about before,
link |
00:30:40.320
but it was basic research. You couldn't monetize it. Hence the winter.
link |
00:30:44.880
That's the winter. You know, research,
link |
00:30:49.120
scientific research goes and fits and starts. It isn't this nice smooth,
link |
00:30:54.240
oh, this follows this follows this. No, it just doesn't work that way.
link |
00:30:59.040
The interesting thing, the way winters happen, it's never the fault of the researchers.
link |
00:31:05.760
It's some source of hype, of overpromising. Well, no, let me take that back. Sometimes it
link |
00:31:12.000
is the fault of the researchers. Sometimes certain researchers might overpromise the
link |
00:31:17.200
possibilities. They themselves believe that we're just a few years away. I just recently
link |
00:31:23.520
talked to Elon Musk, and he believes we'll have autonomous vehicles
link |
00:31:28.160
in a year. And he believes it. A year?
link |
00:31:30.640
A year. Yeah. With mass deployment at that time.
link |
00:31:33.360
For the record, this is 2019 right now. So he's talking 2020.
link |
00:31:38.640
To do the impossible, you really have to believe it. And I think what's going to happen
link |
00:31:44.480
when you believe it, because there's a lot of really brilliant people around him,
link |
00:31:48.240
is some good stuff will come out of it. Some unexpected brilliant breakthroughs will come out
link |
00:31:53.840
of it when you really believe it, when you work that hard. I believe that. And I believe
link |
00:31:58.480
autonomous vehicles will come. I just don't believe it'll be in a year. I wish.
link |
00:32:02.640
But nevertheless, autonomous vehicles are a good example. There's a feeling
link |
00:32:09.120
many companies have promised by 2021, by 2022, Ford, GM, basically every single automotive
link |
00:32:16.640
company has promised they'll have autonomous vehicles. So that kind of over promise is what
link |
00:32:21.440
leads to the winter. Because we'll come to those dates, there won't be autonomous vehicles.
link |
00:32:26.720
And there'll be a feeling, well, wait a minute, if we took your word at that time,
link |
00:32:32.080
that means we just spent billions of dollars and made no money, and there's a counter response to
link |
00:32:39.680
where everybody gives up on it. Sort of intellectually, at every level, the hope just
link |
00:32:46.880
dies. And all that's left is a few basic researchers. So you're uncomfortable with
link |
00:32:52.960
some aspects of this idea. Well, it's the difference between science and commerce.
link |
00:32:58.400
So you think science goes on the way it does?
link |
00:33:04.160
Oh, science can really be killed by not getting proper funding or timely funding.
link |
00:33:14.160
I think Great Britain was a perfect example of that. The Lighthill report in,
link |
00:33:19.440
I can't remember the year, essentially said, there's no use Great Britain putting any money
link |
00:33:26.560
into this, it's going nowhere. And this was all about social factions in Great Britain.
link |
00:33:37.040
Edinburgh hated Cambridge and Cambridge hated Manchester. Somebody else can write that story.
link |
00:33:44.720
But it really did have a hard effect on research there. Now, they've come roaring back with Deep
link |
00:33:54.400
Mind. But that's one guy and his visionaries around him. But just to push on that,
link |
00:34:03.760
it's kind of interesting. You have this dislike of the idea of an AI winter.
link |
00:34:08.320
Where's that coming from? Where were you? Oh, because I just don't think it's true.
link |
00:34:15.440
There was a particular period of time. It's a romantic notion, certainly.
link |
00:34:21.280
Yeah, well. No, I admire science, perhaps more than I admire commerce. Commerce is fine. Hey,
link |
00:34:33.280
you know, we all gotta live. But science has a much longer view than commerce and continues
link |
00:34:46.720
almost regardless. It can't continue totally regardless, but almost regardless of what's
link |
00:34:56.400
saleable and what's not, what's monetizable and what's not. BG So the winter is just something
link |
00:35:01.680
that happens on the commerce side, and the science marches on. That's a beautifully optimistic
link |
00:35:10.960
and inspiring message. I agree with you. I think if we look at the key people that work in AI,
link |
00:35:16.400
the key scientists in most disciplines, they continue working out of love for science.
link |
00:35:22.160
You can always scrape up some funding to stay alive, and they continue working diligently.
link |
00:35:31.680
But there certainly is a huge amount of funding now, and there's a concern on the AI side and
link |
00:35:38.080
deep learning. There's a concern that we might, with overpromising, hit another slowdown in
link |
00:35:44.160
funding, which does affect the number of students, you know, that kind of thing.
link |
00:35:47.520
RG Yeah, it does. BG So the kind of ideas you had in Machines Who Think,
link |
00:35:52.640
did you continue that curiosity through the decades that followed?
link |
00:35:56.240
RG Yes, I did. BG And what was your view, historical view of how the AI community evolved,
link |
00:36:03.840
the conversations about it, the work? Has it persisted the same way from its birth?
link |
00:36:09.280
RG No, of course not. It's just as we were just talking, the symbolic AI really kind of dried up
link |
00:36:19.760
and it all became algorithmic. I remember a young AI student telling me what he was doing,
link |
00:36:27.200
and I had been away from the field long enough. I'd gotten involved with complexity at the Santa
link |
00:36:33.200
Fe Institute. I thought, algorithms, yeah, they're in the service of, but they're not the main event.
link |
00:36:41.680
No, they became the main event. That surprised me. And we all know the downside of this. We all
link |
00:36:49.440
know that if you're using an algorithm to make decisions based on a gazillion human decisions,
link |
00:36:58.240
baked into it are all the mistakes that humans make, the bigotries, the short sightedness,
link |
00:37:05.440
and so on and so on. BG So you mentioned Santa Fe Institute. So you've written the novel
link |
00:37:13.280
Edge of Chaos, but it's inspired by the ideas of complexity, a lot of which have been extensively
link |
00:37:20.720
explored at the Santa Fe Institute. It's another fascinating topic, just sort of emergent
link |
00:37:31.200
complexity from chaos. Nobody knows how it happens really, but it seems to be where all the interesting
link |
00:37:37.440
stuff does happen. So how did first, not your novel, but just complexity in general and the
link |
00:37:44.480
work at Santa Fe, fit into the bigger puzzle of the history of AI? Or maybe even your personal
link |
00:37:51.600
journey through that? RG One of the last projects I did
link |
00:37:57.760
concerning AI in particular was looking at the work of Harold Cohen, the painter. And Harold was
link |
00:38:06.080
deeply involved with AI. He was a painter first. And what his project, AARON, which was a lifelong
link |
00:38:17.920
project, did was reflect his own cognitive processes. Okay. Harold and I, even though I wrote
link |
00:38:30.480
a book about it, we had a lot of friction between us. And I went, I thought, this is it. The book
link |
00:38:39.120
died. It was published and fell into a ditch. This is it. I'm finished. It's time for me to
link |
00:38:47.760
do something different. By chance, this was a sabbatical year for my husband. And we spent two
link |
00:38:55.840
months at the Santa Fe Institute and two months at Caltech. And then the spring semester in Munich,
link |
00:39:03.120
Germany. Okay. Those two months at the Santa Fe Institute were so restorative for me. And I began
link |
00:39:15.040
to, the Institute was very small then. It was in some kind of office complex on Old Santa Fe Trail.
link |
00:39:22.560
Everybody kept their door open. So you could crack your head on a problem. And if you finally didn't
link |
00:39:29.840
get it, you could walk in to see Stuart Kauffman or any number of people and say, I don't get this.
link |
00:39:39.040
Can you explain? And one of the people that I was talking to about complex adaptive systems
link |
00:39:46.880
was Murray Gell-Mann. And I told Murray what Harold Cohen had done. And I said, you know,
link |
00:39:55.200
this sounds to me like a complex adaptive system. And he said, yeah, it is. Well, what do you know?
link |
00:40:02.240
Harold's AARON had all these kids and cousins all over the world in science and in economics and
link |
00:40:09.120
so on and so forth. I was so relieved. I thought, okay, your instincts are okay. You're doing the
link |
00:40:16.480
right thing. I didn't have the vocabulary. And that was one of the things that the Santa Fe
link |
00:40:21.760
Institute gave me. If I could have rewritten that book, no, it had just come out. I couldn't rewrite
link |
00:40:26.880
it. I would have had a vocabulary to explain what AARON was doing. Okay. So I got really interested
link |
00:40:34.480
in what was going on at the Institute. The people were, again, bright and funny and willing to
link |
00:40:44.080
explain anything to this amateur. George Cowan, who was then the head of the Institute, said he
link |
00:40:51.600
thought it might be a nice idea if I wrote a book about the Institute. And I thought about it and I
link |
00:40:58.800
had my eye on some other project, God knows what. And I said, I'm sorry, George. Yeah, I'd really
link |
00:41:05.920
love to do it, but just not going to work for me at this moment. He said, oh, too bad. I think it
link |
00:41:11.440
would make an interesting book. Well, he was right and I was wrong. I wish I'd done it. But that's
link |
00:41:17.120
interesting. I hadn't thought about that, that that was a road not taken that I wish I'd taken.
link |
00:41:22.080
Well, you know what? Just on that point, it's quite brave for you as a writer, as sort of
link |
00:41:31.680
coming from a world of literature and the literary thinking and historical thinking. I mean, just
link |
00:41:37.120
from that world and bravely talking to quite, I assume, large egos in AI or in complexity.
link |
00:41:49.600
Yeah, in AI or in complexity and so on. How'd you do it? I mean, I suppose they could be
link |
00:41:59.040
intimidated of you as well because it's two different worlds coming together.
link |
00:42:03.120
I never picked up that anybody was intimidated by me.
link |
00:42:06.080
But how were you brave enough? Where did you find the guts to sort of...
link |
00:42:08.640
God, just dumb luck. I mean, this is an interesting rock to turn over. I'm going
link |
00:42:14.000
to write a book about it. And you know, people have enough patience with writers
link |
00:42:18.880
if they think they're going to end up in a book that they let you flail around and so on.
link |
00:42:24.800
Well, but they also look if the writer has,
link |
00:42:28.320
if there's a sparkle in their eye, if they get it.
link |
00:42:31.120
Yeah, sure.
link |
00:42:32.640
When were you at the Santa Fe Institute?
link |
00:42:35.920
The time I'm talking about is 1990, 1991, 1992. But we then, because Joe was an external faculty
link |
00:42:46.240
member, were in Santa Fe every summer. We bought a house there and I didn't have that much to do
link |
00:42:52.640
with the Institute anymore. I was writing my novels. I was doing whatever I was doing.
link |
00:43:00.560
But I loved the Institute and I loved
link |
00:43:08.400
again, the audacity of the ideas. That really appeals to me.
link |
00:43:12.960
I think that there's this feeling, much like in great institutes of neuroscience, for example,
link |
00:43:23.040
that they're in it for the long game of understanding something fundamental about
link |
00:43:29.840
reality and nature. And that's really exciting. So if we start now to look a little bit more recently,
link |
00:43:36.800
how, you know, AI is really popular today. How is this world, you mentioned algorithmic,
link |
00:43:46.480
but in general, is the spirit of the people, the kind of conversations you hear through the
link |
00:43:51.680
grapevine and so on, is that different than the roots that you remember?
link |
00:43:55.360
No. The same kind of excitement, the same kind of, this is really going to make a difference
link |
00:44:01.200
in the world. And it will. It has. You know, a lot of folks, especially young, 20 years old or
link |
00:44:07.920
something, they think we've just found something special here. We're going to change the world
link |
00:44:14.000
tomorrow. Do you have a sense of the time scale at which breakthroughs
link |
00:44:24.240
in AI happen? I really don't. Because look at deep learning.
link |
00:44:32.240
That was, Geoffrey Hinton came up with the algorithm in '86. But it took all these years
link |
00:44:44.720
for the technology to be good enough to actually be applicable. So no, I can't predict that at all.
link |
00:44:56.400
I can't. I wouldn't even try. Well, let me ask you to, not to try to predict, but to speak to the,
link |
00:45:03.760
you know, I'm sure in the 60s, as it continues now, there's people that think, let's call it,
link |
00:45:09.440
we can call it this fun word, the singularity. When there's a phase shift, there's some profound
link |
00:45:16.160
feeling where we're all really surprised by what's able to be achieved. I'm sure those dreams are
link |
00:45:22.720
there. I remember reading quotes in the 60s and those continued. How have your own views,
link |
00:45:29.200
maybe if you look back, about the timeline of a singularity changed?
link |
00:45:34.960
Well, I'm not a big fan of the singularity as Ray Kurzweil has presented it.
link |
00:45:46.640
How would you define the Ray Kurzweil singularity? How do you think of the singularity in those terms?
link |
00:45:53.120
If I understand Kurzweil's view, it's sort of, there's going to be this moment when machines
link |
00:45:59.280
are smarter than humans and, you know, game over, however the game over is. I mean, do they put us
link |
00:46:07.120
on a reservation? Do they, et cetera, et cetera. And first of all, machines are smarter than humans
link |
00:46:15.680
in some ways all over the place. And they have been since adding machines were invented.
link |
00:46:21.440
So it's not, it's not going to come like some great Oedipal crossroads, you know, where
link |
00:46:29.440
they meet each other and our offspring, Oedipus, says, you're dead. It's just not going to happen.
link |
00:46:37.920
Yeah. So it's already game over with calculators, right? They're already able to do much better at
link |
00:46:44.000
basic arithmetic than us. But you know, there's human-like intelligence. And it's not the kind
link |
00:46:51.920
that destroys us, but, you know, somebody that you can have as a friend, you can have deep
link |
00:46:57.920
connections with, that kind of passing the Turing test and beyond. Those kinds of ideas. Have you
link |
00:47:04.640
dreamt of those? Oh yes, yes, yes. Those possibilities. In a book I wrote with Ed Feigenbaum,
link |
00:47:10.560
there's a little story called the geriatric robot.
link |
00:47:17.280
And how I came up with the geriatric robot is a story in itself. But here's what the geriatric
link |
00:47:24.880
robot does. It doesn't just clean you up and feed you and wheel you out into the sun.
link |
00:47:29.520
Its great advantage? It listens. It says, tell me again about the great coup of '73. Tell me again
link |
00:47:45.280
about how awful or how wonderful your grandchildren are and so on and so forth.
link |
00:47:52.960
And it isn't hanging around to inherit your money. It isn't hanging around because it can't get
link |
00:47:59.440
any other job. This is its job. And so on and so forth. Well, I would love something like that.
link |
00:48:09.120
Yeah. I mean, for me, that deeply excites me. So I think there's a lot of us.
link |
00:48:15.680
Lex, you gotta know, it was a joke. I dreamed it up because I needed to talk to college students
link |
00:48:20.880
and I needed to give them some idea of what AI might be. And they were rolling in the aisles as
link |
00:48:26.960
I elaborated and elaborated and elaborated. When it went into the book, they took my hide off
link |
00:48:36.320
in the New York Review of Books. This is just what we have thought about these people in AI.
link |
00:48:41.280
They're inhuman. Come on, get over it. Don't you think that's a good thing for
link |
00:48:47.280
the world that AI could potentially do? I do. Absolutely. And furthermore,
link |
00:48:52.000
I'm pushing 80 now. By the time I need help like that, I also want it to roll itself in a corner
link |
00:49:02.560
and shut the fuck up. Let me linger on that point. Do you really though?
link |
00:49:09.360
Yeah, I do. Here's why. Don't you want it to push back a little bit?
link |
00:49:13.360
A little. But I have watched my friends go through the whole issue around having help
link |
00:49:20.240
in the house. And some of them have been very lucky and had fabulous help. And some of them
link |
00:49:28.880
have had people in the house who want to keep the television going on all day, who want to talk on
link |
00:49:34.000
their phones all day. No. Just roll yourself in the corner and shut the fuck up. Unfortunately,
link |
00:49:41.360
us humans, when we're assistants, we're still, even when we're assisting others,
link |
00:49:47.040
we care about ourselves more. Of course. And so you create more frustration. And a robot AI
link |
00:49:54.800
assistant can really optimize the experience for you. I was just speaking to the point,
link |
00:50:01.520
you actually bring up a very, very good point. But I was speaking to the fact that
link |
00:50:05.360
us humans are a little complicated, that we don't necessarily want a perfect servant.
link |
00:50:11.120
I don't, maybe you disagree with that, but there's a, I think there's a push and pull with humans.
link |
00:50:20.800
You're right.
link |
00:50:21.360
A little tension, a little mystery that, of course, that's really difficult for AI to get right. But
link |
00:50:27.680
I do sense, especially today with social media, that people are getting more and more lonely,
link |
00:50:34.800
even young folks, and sometimes especially young folks, that loneliness, there's a longing for
link |
00:50:42.080
connection and AI can help alleviate some of that loneliness. Some, just somebody who listens,
link |
00:50:50.800
like in person. So to speak. So to speak, yeah. So to speak. Yeah, that to me is really exciting.
link |
00:51:03.200
That is really exciting. But so if we look at that, that level of intelligence, which is
link |
00:51:08.880
exceptionally difficult to achieve actually, as the singularity or whatever, that's the human level
link |
00:51:15.520
bar, that people have dreamt of that too. Turing dreamt of it. He had a date, a timeline.
link |
00:47:23.920
How has your own timeline evolved?
link |
00:51:27.840
I don't even think about it.
link |
00:51:28.960
You don't even think?
link |
00:51:29.680
No. Just this field has been so full of surprises for me.
link |
00:51:38.080
You're just taking it in and seeing the fun of the basic science.
link |
00:51:42.080
Yeah. I just can't. Maybe that's because I've been around the field long enough to think,
link |
00:51:48.960
you know, don't go that way. Herb Simon was terrible about making these predictions of
link |
00:51:54.720
when this and that would happen. And he was a sensible guy.
link |
00:52:00.640
His quotes are often used, right?
link |
00:52:03.360
As a legend, yeah.
link |
00:52:04.880
Yeah. Do you have concerns about AI, the existential threats that many people
link |
00:52:14.800
like Elon Musk and Sam Harris and others are thinking about?
link |
00:52:18.800
Yeah. That takes up half a chapter in my book. I call it the male gaze.
link |
00:52:29.600
Well, hear me out. The male gaze is actually a term from film criticism.
link |
00:52:36.240
And I'm blocking on the woman who dreamed this up. But she pointed out how most movies were
link |
00:52:44.240
made from the male point of view, that women were objects, not subjects. They didn't have any
link |
00:52:53.760
agency and so on and so forth. So when Elon and his pals, Hawking and so on, came out with,
link |
00:53:01.520
AI is going to eat our lunch and our dinner and our midnight snack too, I thought, what?
link |
00:53:08.000
And I said to Ed Feigenbaum, oh, I get it. These guys have always been
link |
00:53:13.120
the smartest guys on the block. And here comes something that might be smarter. Oh, let's stamp
link |
00:53:18.800
it out before it takes over. And Ed laughed. He said, I didn't think about it that way.
link |
00:53:24.080
But I did. I did. And it is the male gaze. Okay, suppose these things do have agency.
link |
00:53:34.480
Well, let's wait and see what happens. Can we imbue them with ethics? Can we imbue them
link |
00:53:43.920
with a sense of empathy? Or are they just going to be, I don't know, we've had centuries of guys
link |
00:53:54.480
like that. That's interesting that the ego, the male gaze is immediately threatened. And so you
link |
00:54:05.280
can't think in a patient, calm way of how the tech could evolve. Speaking of which, your '96 book,
link |
00:54:16.240
The Futures of Women, I think at the time and now, certainly now, I mean, I'm sorry, maybe at the
link |
00:54:23.840
time, but I'm more cognizant of now, is extremely relevant. You and Nancy Ramsey talk about four
link |
00:54:30.800
possible futures of women in science and tech. So if we look at the decades before and after
link |
00:54:38.960
the book was released, can you tell a history, sorry, of women in science and tech and how it
link |
00:54:46.800
has evolved? How have things changed? Where do we stand? Not enough. They have not changed enough.
link |
00:54:54.320
The way that women are ground down in computing is simply unbelievable. But what are the four
link |
00:55:05.840
possible futures for women in tech from the book? What you're really looking at are various aspects
link |
00:55:13.520
of the present. So for each of those, you could say, oh yeah, we do have backlash. Look at what's
link |
00:55:20.880
happening with abortion and so on and so forth. We have one step forward, one step back.
link |
00:55:28.400
The golden age of equality was the hardest chapter to write. And I used something from
link |
00:55:33.440
the Santa Fe Institute, which is the sandpile effect, that you drop sand very slowly onto a pile
link |
00:55:41.760
and it grows and it grows and it grows until suddenly it just breaks apart. And
link |
00:55:50.240
in a way, Me Too has done that. That was the last drop of sand that broke everything apart.
link |
00:55:58.240
That was a perfect example of the sandpile effect. And that made me feel good. It didn't
link |
00:56:03.760
change all of society, but it really woke a lot of people up. But are you in general optimistic
link |
00:56:10.480
about maybe after Me Too? I mean, Me Too is about a very specific kind of thing.
link |
00:56:17.120
Boy, solve that and you solve everything.
link |
00:56:19.920
But are you in general optimistic about the future?
link |
00:56:23.200
Yes. I'm a congenital optimist. I can't help it.
link |
00:56:28.400
What about AI? What are your thoughts about the future of AI?
link |
00:56:34.560
Of course, I get asked, what do you worry about? And the one thing I worry about is the things
link |
00:56:40.080
we can't anticipate. There's going to be something out of left field that we will just say,
link |
00:56:47.440
we weren't prepared for that. I am generally optimistic. When I first took up
link |
00:56:58.240
being interested in AI, like most people in the field, more intelligence was like more virtue.
link |
00:57:05.760
You know, what could be bad? And in a way, I still believe that. But I realize that my
link |
00:57:13.520
notion of intelligence has broadened. There are many kinds of intelligence,
link |
00:57:19.440
and we need to imbue our machines with those many kinds.
link |
00:57:24.720
So you've now just finished or in the process of finishing the book that you've been working
link |
00:57:32.560
on, the memoir, how have you changed? I know it's just writing, but how have you changed
link |
00:57:39.440
through the process? If you look back, what kind of stuff did it bring up that surprised you,
link |
00:57:47.600
looking at the entirety of it all? The biggest thing, and it really wasn't a surprise,
link |
00:57:55.840
is how lucky I was. Oh, my. To have access to the beginning of a scientific field that is going to
link |
00:58:07.520
change the world. How did I luck out? And yes, of course, my view of things has widened a lot.
link |
00:58:20.240
If I can get back to one feminist part of our conversation. Without knowing it,
link |
00:58:28.640
it really was subconscious. I wanted AI to succeed because I was so tired of hearing
link |
00:58:36.320
that intelligence was inside the male cranium. And I thought if there was something out there
link |
00:58:43.280
that wasn't a male thinking and doing well, then that would put the lie to this whole notion of
link |
00:58:53.040
intelligence resides in the male cranium. I did not know that until one night Harold Cohen and I
link |
00:59:01.600
were having a glass of wine, maybe two, and he said, what drew you to AI? And I said, oh,
link |
00:59:09.600
you know, smartest people I knew, great project, blah, blah, blah. And I said, and I wanted
link |
00:59:14.720
something besides male smarts. And it just bubbled up out of me like, what?
link |
00:59:24.160
It's kind of brilliant, actually. So AI really humbles all of us and humbles the people that
link |
00:59:32.000
need to be humbled the most. Let's hope.
link |
00:59:35.360
Wow. That is so beautiful. Pamela, thank you so much for talking to me. It's really a huge honor.
link |
00:59:40.800
It's been a great pleasure.
link |
00:59:41.840
Thank you.