
Pamela McCorduck: Machines Who Think and the Early Days of AI | Lex Fridman Podcast #34



link |
00:00:00.000
The following is a conversation with Pamela McCorduck. She's an author who has written
link |
00:00:04.640
on the history and the philosophical significance of artificial intelligence.
link |
00:00:09.040
Her books include Machines Who Think in 1979, The Fifth Generation in 1983 with Ed Feigenbaum,
link |
00:00:17.440
who's considered to be the father of expert systems, The Edge of Chaos, The Futures of Women,
link |
00:00:22.960
and many more books. I came across her work in an unusual way by stumbling upon a quote from
link |
00:00:28.960
Machines Who Think that is something like, artificial intelligence began with the ancient
link |
00:00:35.280
wish to forge the gods. That was a beautiful way to draw a connecting line between our societal
link |
00:00:41.920
relationship with AI from the grounded day to day science, math, and engineering to popular stories
link |
00:00:48.720
and science fiction and myths of automatons that go back for centuries. Through her literary work,
link |
00:00:55.520
she has spent a lot of time with the seminal figures of artificial intelligence,
link |
00:01:00.400
including the founding fathers of AI from the 1956 Dartmouth summer workshop where the field
link |
00:01:07.760
was launched. I reached out to Pamela for a conversation in hopes of getting a sense of
link |
00:01:13.600
what those early days were like and how their dreams continued to reverberate
link |
00:01:18.400
through the work of our community today. I often don't know where the conversation may take us,
link |
00:01:23.840
but I jump in and see. Having no constraints, rules, or goals is a wonderful way to discover new
link |
00:01:29.600
ideas. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,
link |
00:01:36.320
give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter
link |
00:01:41.600
at Lex Fridman, spelled F R I D M A N. And now, here's my conversation with Pamela McCorduck.
link |
00:01:49.680
In 1979, your book, Machines Who Think, was published. In it, you interview some of the early
link |
00:01:58.640
AI pioneers and explore the idea that AI was born not out of maybe math and computer science,
link |
00:02:06.320
but out of myth and legend. So tell me if you could the story of how you first arrived at the
link |
00:02:14.960
book, the journey of beginning to write it. I had been a novelist. I'd published two novels.
link |
00:02:23.120
And I was sitting under the portal at Stanford one day in the house we were renting for the
link |
00:02:31.760
summer. And I thought, I should write a novel about these weird people in AI, I know. And then I
link |
00:02:37.760
thought, ah, don't write a novel, write a history. Simple. Just go around, you know, interview them,
link |
00:02:44.240
splice it together. Voila, instant book. Ha, ha, ha. It was much harder than that.
link |
00:02:50.400
But nobody else was doing it. And so I thought, well, this is a great opportunity. And there were
link |
00:02:59.760
people who, John McCarthy, for example, thought it was a nutty idea. There were much, you know,
link |
00:03:06.240
the field had not evolved yet, so on. And he had some mathematical thing he thought I should write
link |
00:03:11.920
instead. And I said, no, John, I am not a woman in search of a project. I'm, this is what I want
link |
00:03:18.480
to do. I hope you'll cooperate. And he said, oh, mother, mother, well, okay, it's your, your time.
link |
00:03:24.960
What was the pitch? I mean, it was such a young field at that point. How do you write
link |
00:03:31.280
a personal history of a field that's so young? I said, this is wonderful. The founders of the
link |
00:03:37.520
field are alive and kicking and able to talk about what they're doing. Did they sound or feel like
link |
00:03:43.120
founders at the time? Did they know that they had founded something? Oh,
link |
00:03:48.560
yeah, they knew what they were doing was very important, very. What I now see in
link |
00:03:55.120
retrospect is that they were at the height of their research careers. And it's humbling to me
link |
00:04:04.080
that they took time out from all the things that they had to do as a consequence of being there.
link |
00:04:10.320
And to talk to this woman who said, I think I'm going to write a book about you.
link |
00:04:14.560
No, it was amazing, just amazing. So who stands out to you? Maybe looking back 63 years to
link |
00:04:23.840
the Dartmouth conference. So Marvin Minsky was there. McCarthy was there. Claude Shannon,
link |
00:04:31.040
Allen Newell, Herb Simon, some of the folks you've mentioned. Right. Then there's other characters,
link |
00:04:36.960
right? One of your coauthors. He wasn't at Dartmouth. He wasn't at Dartmouth, but I mean.
link |
00:04:44.720
He was, I think, an undergraduate then. And, of course, Joe Traub. I mean,
link |
00:04:50.880
all of these are players, not at Dartmouth then, but in that era. Right.
link |
00:04:58.720
So who are the characters, if you could paint a picture that stand out to you from memory,
link |
00:05:03.680
those people you've interviewed and maybe not people that were just in the,
link |
00:05:08.400
in the atmosphere. In the atmosphere. Of course, the four founding fathers were
link |
00:05:13.760
extraordinary guys. They really were. Who are the founding fathers?
link |
00:05:18.560
Allen Newell, Herbert Simon, Marvin Minsky, John McCarthy,
link |
00:05:22.480
they were the four who were not only at the Dartmouth conference,
link |
00:05:26.240
but Newell and Simon arrived there with a working program called the Logic Theorist.
link |
00:05:31.200
Everybody else had great ideas about how they might do it, but they weren't going to do it yet.
link |
00:05:41.040
And you mentioned Joe Traub, my husband. I was immersed in AI before I met Joe,
link |
00:05:50.080
because I had been Ed Feigenbaum's assistant at Stanford. And before that,
link |
00:05:54.960
I had worked on a book edited by Feigenbaum and Julian Feldman called
link |
00:06:02.800
Computers and Thought. It was the first textbook of readings in AI. And they only did it
link |
00:06:09.200
because they were trying to teach AI to people at Berkeley. And there was nothing, you know,
link |
00:06:13.120
you'd have to send them to this journal and that journal. This was not the internet where you could
link |
00:06:17.600
go look at an article. So I was fascinated from the get go by AI. I was an English major, you know,
link |
00:06:26.080
what did I know? And yet I was fascinated. And that's why you saw that historical,
link |
00:06:33.200
that literary background, which I think is very much a part of the continuum of AI that
link |
00:06:40.320
the AI grew out of that same impulse. What drew
link |
00:06:48.000
you to AI? How did you even think of it back then? What were the possibilities,
link |
00:06:54.800
the dreams? What was interesting to you? The idea of intelligence outside the human cranium,
link |
00:07:03.200
this was a phenomenal idea. And even when I finished Machines Who Think,
link |
00:07:08.000
I didn't know if they were going to succeed. In fact, the final chapter is very wishy washy,
link |
00:07:15.040
frankly. But the field did succeed. Yeah. Yeah. So was there the idea that AI began with
link |
00:07:25.200
the wish to forge the gods? So the spiritual component that we crave to create this other
link |
00:07:32.000
thing greater than ourselves? For those guys, I don't think so. Newell and Simon were cognitive
link |
00:07:40.880
psychologists. What they wanted was to simulate aspects of human intelligence. And they found
link |
00:07:49.840
they could do it on the computer. Minsky just thought it was a really cool thing to do.
link |
00:07:57.600
Likewise, McCarthy. McCarthy had gotten the idea in 1949, when he was a Caltech student. And
link |
00:08:08.560
he listened to somebody's lecture. It's in my book, I forget who it was. And he thought,
link |
00:08:15.520
oh, that would be fun to do. How do we do that? And he took a very mathematical approach.
link |
00:08:20.480
Minsky was hybrid. And Newell and Simon were very much cognitive psychology. How can we
link |
00:08:28.800
simulate various things about human cognition? What happened over the many years is, of course,
link |
00:08:37.280
our definition of intelligence expanded tremendously. I mean, these days,
link |
00:08:43.920
biologists are comfortable talking about the intelligence of the cell, the intelligence of the
link |
00:08:48.960
brain, not just the human brain, but the intelligence of any kind of brain. Cephalopods, I mean,
link |
00:08:59.520
an octopus is really intelligent by any measure. We wouldn't have thought of that in the 60s,
link |
00:09:05.840
even the 70s. So all these things have worked in. And I did hear one behavioral
link |
00:09:12.560
primatologist, Frans de Waal, say AI taught us the questions to ask.
link |
00:09:22.800
Yeah, this is what happens, right? It's when you try to build it, is when you start to actually
link |
00:09:27.760
ask questions. It puts a mirror to ourselves. So you were there in the middle of it. It seems
link |
00:09:35.360
like not many people were asking the questions that you were trying to look at this field,
link |
00:09:41.920
the way you were. I was solo. When I went to get funding for this, because I needed somebody to
link |
00:09:48.480
transcribe the interviews and I needed travel expenses, I went to everything you could think of,
link |
00:09:59.840
the NSF, DARPA. There was an Air Force place that doled out money. And each of them said,
link |
00:10:11.840
well, that was very interesting. That's a very interesting idea. But we'll think about it.
link |
00:10:19.200
And the National Science Foundation actually said to me in plain English,
link |
00:10:24.320
hey, you're only a writer. You're not an historian of science. And I said, yeah, that's true. But
link |
00:10:30.960
the historians of science will be crawling all over this field. I'm writing for the general
link |
00:10:35.360
audience. So I thought, and they still wouldn't budge. I finally got a private grant without
link |
00:10:44.000
knowing who it was from, from Ed Fredkin at MIT. He was a wealthy man, and he liked what he called
link |
00:10:51.440
crackpot ideas. And he considered this a crackpot idea. And he was willing to
link |
00:10:56.880
support it. I am ever grateful. Let me say that. You know, some would say that a history of science
link |
00:11:04.240
approach to AI, or even just a history or anything like the book that you've written,
link |
00:11:09.360
hasn't been written since. Maybe I'm not familiar, but there certainly aren't many.
link |
00:11:16.640
If we think about bigger than just these couple of decades, a few decades, what are the roots
link |
00:11:25.120
of AI? Oh, they go back so far. Yes, of course, there's all the legendary stuff, the
link |
00:11:32.800
Golem and the early robots of the 20th century. But they go back much further than that. If
link |
00:11:42.160
you read Homer, Homer has robots in the Iliad. And a classical scholar was pointing out to me
link |
00:11:50.320
just a few months ago. Well, he said, you just read the Odyssey. The Odyssey is full of robots.
link |
00:11:55.440
It is? I said. Yeah, how do you think Odysseus's ship gets from one place to another? He
link |
00:12:01.520
doesn't have the crewmen to do that. Yeah, it's magic. It's robots. Oh, I thought.
link |
00:12:09.040
How interesting. So we've had this notion of AI for a long time. And then toward the end of the
link |
00:12:17.680
19th century, the beginning of the 20th century, there were scientists who actually tried to
link |
00:12:24.000
make this happen some way or another, not successfully, they didn't have the technology
link |
00:12:29.280
for it. And of course, Babbage, in the 1850s and 60s, he saw that what he was building was capable
link |
00:12:40.160
of intelligent behavior. And when he ran out of funding, the British government finally said,
link |
00:12:47.040
that's enough. He and Lady Lovelace decided, oh, well, why don't we make, you know, why don't we
link |
00:12:53.360
play the ponies with this? He had other ideas for raising money too. But if we actually reach back
link |
00:13:00.560
once again, I think people don't actually know that robots, or ideas of robots, appear long before.
link |
00:13:07.280
You talk about the Hellenic and the Hebraic points of view. Oh, yes. Can you tell me about each?
link |
00:13:15.040
I defined it this way, the Hellenic point of view is robots are great. You know, they're party help,
link |
00:13:22.480
they help this god, Hephaestus, in his forge. I presume he made them to help him,
link |
00:13:31.200
and so on and so forth. And they welcome the whole idea of robots. The Hebraic view has to do with,
link |
00:13:39.360
I think it's the second commandment, thou shalt not make any graven image. In other words, you
link |
00:13:46.800
better not start imitating humans, because that's just forbidden. It's the second commandment.
link |
00:13:55.520
And a lot of the reaction to artificial intelligence has been a sense that this is
link |
00:14:05.840
somehow wicked, somehow blasphemous. We shouldn't be going there. Now, you can say,
link |
00:14:16.800
yeah, but there are going to be some downsides. And I say, yes, there are. But blasphemy is not one of
link |
00:14:21.600
them. You know, there's a kind of fear that feels almost primal. Are there religious roots to
link |
00:14:29.520
that? Because so much of our society has religious roots. And so there is a feeling of, like you
link |
00:14:36.160
said, blasphemy of creating the other, of creating something, you know, it doesn't have to be artificial
link |
00:14:44.080
intelligence. It's creating life in general. It's the Frankenstein idea. There's The Annotated
link |
00:14:50.480
Frankenstein on my coffee table. It's a tremendous novel. It really is just beautifully perceptive.
link |
00:14:58.080
Yes, we do fear this, and we have good reason to fear it, because it can get out of hand.
link |
00:15:06.800
Maybe you can speak to that fear, the psychology, if you've thought about it. You know,
link |
00:15:11.360
there's a practical set of fears, concerns in the short term, you can think of, if we actually
link |
00:15:16.160
think about artificial intelligence systems, you can think about bias or discrimination in
link |
00:15:22.720
algorithms, or you can think about social networks that have algorithms recommending the
link |
00:15:22.720
content you see, whereby these algorithms control the behavior of the masses. There are these concerns.
link |
00:15:38.240
But to me, it feels like the fear that people have is deeper than that. So have you thought about
link |
00:15:45.040
the psychology of it? I think in a superficial way I have. There is this notion that if we
link |
00:15:55.680
produce a machine that can think, it will outthink us and therefore replace us.
link |
00:16:02.000
I guess that's a primal fear, almost, of a kind of mortality. So around that time, you said
link |
00:16:11.840
you worked at Stanford with Ed Feigenbaum. So let's look at that one person throughout his
link |
00:16:21.920
history, clearly a key person, one of the many in the history of AI. How has he changed in general
link |
00:16:31.600
around him? How has Stanford changed in the last, how many years are we talking about here?
link |
00:16:36.480
Oh, since '65. So maybe it doesn't have to be about him, it could be bigger. But because he was a
link |
00:16:44.720
key person in expert systems, for example, how are these folks who you've interviewed
link |
00:16:53.040
in the 70s, 79, changed through the decades?
link |
00:16:58.400
In Ed's case, I know him well. We are dear friends. We see each other every month or so.
link |
00:17:11.520
He told me that when Machines Who Think first came out, he really thought all the front
link |
00:17:16.160
matter was kind of baloney. And 10 years later, he said, no, I see what you're getting at. Yes,
link |
00:17:26.000
this has been a human impulse for thousands of years,
link |
00:17:32.160
to create something outside the human cranium that has intelligence.
link |
00:17:41.120
I think it's very hard when you're down at the algorithmic level, and you're just trying to
link |
00:17:47.520
make something work, which is hard enough to step back and think of the big picture.
link |
00:17:53.840
It reminds me of when I was in Santa Fe. I knew a lot of archaeologists; archaeology was a hobby of mine,
link |
00:18:02.800
and I would say, yeah, yeah, well, you can look at the shards and say, oh,
link |
00:18:08.000
this came from this tribe and this came from this trade route and so on. But what about the big
link |
00:18:14.160
picture? And a very distinguished archaeologist said to me, they don't think that way. You do
link |
00:18:21.600
know, they're trying to match the shard to where it came from. That's, you know, where did
link |
00:18:27.920
this corn, the remainder of this corn come from? Was it grown here? Was it grown elsewhere? And I
link |
00:18:34.480
think this is part of AI, of any scientific field: you're so busy doing the hard work. And it is
link |
00:18:44.560
hard work that you don't step back and say, oh, well, now let's talk about the, you know,
link |
00:18:49.920
the general meaning of all this. Yes. So none of them did? Even Minsky and McCarthy,
link |
00:18:58.080
oh, those guys did. Yeah. The founding fathers did, early on, or pretty early on. Well,
link |
00:19:04.880
they had, but in a different way from how I looked at it, the two cognitive psychologists,
link |
00:19:11.200
Newell and Simon, they wanted to imagine reforming cognitive psychology so that we would really,
link |
00:19:20.960
really understand the brain. Yeah. Minsky was more speculative. And John McCarthy saw it as,
link |
00:19:32.960
I think I'm doing him right by this. He really saw it as a great boon for human beings to
link |
00:19:40.080
have this technology. And that was reason enough to do it. And he had wonderful, wonderful fables
link |
00:19:50.240
about how if you do the mathematics, you will see that these things are really good for human beings.
link |
00:19:57.920
And if you had a technological objection, he had an answer, a technological answer. But here's how
link |
00:20:05.280
we could get over that. And then blah, blah, blah, blah. And one of his favorite things was
link |
00:20:10.480
what he called the literary problem, which of course, he presented to me several times.
link |
00:20:16.400
That is, there are conventions in literature. One of the conventions
link |
00:20:23.680
is that you have a villain and a hero. And the hero in most literature is human. And the villain
link |
00:20:37.040
in most literature is a machine. And he said, no, that's just not the way it's going to be.
link |
00:20:42.560
But that's the way we're used to it. So when we tell stories about AI, it's always with this
link |
00:20:48.000
paradigm. I thought, yeah, he's right. Looking back at the classics, R.U.R. is certainly the machines
link |
00:20:57.760
trying to overthrow the humans. Frankenstein is different. Frankenstein is a creature.
link |
00:21:08.480
He never has a name. Frankenstein, of course, is the guy who created him, the human Dr. Frankenstein.
link |
00:21:14.560
And this creature wants to be loved, wants to be accepted. And it is only when Frankenstein
link |
00:21:24.720
turns his head, in fact, runs the other way. And the creature is without love
link |
00:21:34.400
that he becomes the monster that he later becomes.
link |
00:21:39.680
So who's the villain in Frankenstein? It's unclear, right?
link |
00:21:43.840
Oh, it is unclear. Yeah.
link |
00:21:45.520
It's really the people who drive him, by driving him away, they bring out the worst.
link |
00:21:54.320
That's right. They give him no human solace. And he is driven away, you're right.
link |
00:22:03.040
He becomes, at one point, the friend of a blind man. And he serves this blind man,
link |
00:22:10.160
and they become very friendly. But when the sighted people of the blind man's family come in,
link |
00:22:18.640
it's, you've got a monster here. So it's very didactic in its way. And what I didn't know is that Mary Shelley
link |
00:22:26.000
and Percy Shelley were great readers of the literature surrounding abolition in the United
link |
00:22:33.440
States, the abolition of slavery. And they picked that up wholesale. You are making monsters of
link |
00:22:41.200
these people because you won't give them the respect and love that they deserve.
link |
00:22:46.800
Do you have, if we get philosophical for a second, do you worry that once we create
link |
00:22:54.880
machines that are a little bit more intelligent? Let's look at Roomba, the vacuum cleaner,
link |
00:22:59.840
that this darker part of human nature where we abuse
link |
00:23:07.760
the other, somebody who's different, will come out?
link |
00:23:13.520
I don't worry about it. I could imagine it happening. But I think that what AI has to offer
link |
00:23:22.640
the human race will be so attractive that people will be won over. So you have looked deep into
link |
00:23:32.480
these people, had deep conversations, and it's interesting to get a sense of stories of the
link |
00:23:40.080
way they were thinking and the way it changed, the way your own thinking about AI has changed.
link |
00:23:44.480
As you mentioned, McCarthy, what about the years at CMU, Carnegie Mellon, with Joe?
link |
00:23:53.360
Sure. Joe was not in AI. He was in algorithmic complexity.
link |
00:24:03.440
Was there always a line between AI and computer science, for example? Is AI its own place of
link |
00:24:09.040
outcasts? Was that the feeling? There was a kind of outcast period for AI.
link |
00:24:15.920
For instance, in 1974, the new field was hardly 10 years old. The new field of computer science
link |
00:24:28.720
was asked by the National Science Foundation, I believe, but it may have been the National
link |
00:24:33.200
Academies, I can't remember, to tell their fellow scientists what computer science is and what
link |
00:24:42.720
it means. And they wanted to leave out AI. And they only agreed to put it in because Don Knuth
link |
00:24:52.880
said, hey, this is important. You can't just leave that out. Really? Don? Don Knuth, yes.
link |
00:24:59.760
I talked to Mr. Knuth. Out of all the people. Yes. But you see, an AI person couldn't have made
link |
00:25:06.480
that argument. He wouldn't have been believed, but Knuth was believed. Yes.
link |
00:25:10.880
So Joe Traub worked on the real stuff. Joe was working on algorithmic complexity,
link |
00:25:18.160
but he would say in plain English again and again, the smartest people I know are in AI.
link |
00:25:24.800
Really? Oh, yes. No question. Anyway, Joe loved these guys. What happened was that
link |
00:25:34.080
I guess it was as I started to write Machines Who Think, Herb Simon and I became very close
link |
00:25:40.160
friends. He would walk past our house on Northumberland Street every day after work.
link |
00:25:46.560
And I would just be putting my cover on my typewriter and I would lean out the door and say,
link |
00:25:52.160
Herb, would you like a sherry? And Herb almost always would like a sherry. So he'd stop in
link |
00:25:59.440
and we'd talk for an hour, two hours. My journal says we talked this afternoon for three hours.
link |
00:26:06.720
What was on his mind at the time in terms of on the AI side of things?
link |
00:26:12.160
We didn't talk too much about AI. We talked about other things. Just life.
link |
00:26:15.120
We both loved literature, and Herb had read Proust in the original French, twice, all the way through.
link |
00:26:25.280
I can't. I read it in English in translation. So we talked about literature. We talked about
link |
00:26:31.280
languages. We talked about music because he loved music. We talked about art because he was
link |
00:26:37.120
actually enough of a painter that he had to give it up, because he was afraid it was interfering
link |
00:26:45.840
with his research and so on. So no, it was really just chitchat, but it was very warm.
link |
00:26:54.000
So one summer I said to Herb, you know, my students have all the really interesting
link |
00:26:59.840
conversations. I was teaching at the University of Pittsburgh then in the English department.
link |
00:27:04.480
And, you know, they get to talk about the meaning of life and that kind of thing.
link |
00:27:08.880
And what do I have? I have university meetings where we talk about the photocopying budget and,
link |
00:27:15.200
you know, whether the course on romantic poetry should be one semester or two.
link |
00:27:21.200
So Herb laughed. He said, yes, I know what you mean. He said, but, you know, you could do something
link |
00:27:25.760
about that. Dot, that was his wife, Dot and I used to have a salon at the University of Chicago every
link |
00:27:33.920
Sunday night. And we would have essentially an open house. And people knew it wasn't for small
link |
00:27:42.400
talk. It was really for some topic of depth. He said, but my advice would be that you choose
link |
00:27:51.440
the topic ahead of time. Fine, I said. So we exchanged mail over the summer.
link |
00:27:59.200
That was US post in those days because you didn't have personal email. And I decided I would organize
link |
00:28:09.120
it. And there would be eight of us: Allen Newell and his wife, Herb Simon and his wife, Dorothea.
link |
00:28:16.880
There was a novelist in town, a man named Mark Harris. He had just arrived and his wife, Josephine.
link |
00:28:27.840
Mark was most famous then for a novel called Bang the Drum Slowly, which was about baseball.
link |
00:28:34.160
And Joe and me, so eight people. And we met monthly and we just sank our teeth into really
link |
00:28:43.360
hard topics. And it was great fun. How have your own views around artificial intelligence changed
link |
00:28:53.200
through the process of writing Machines Who Think, and afterwards, the ripple effects?
link |
00:28:58.240
I was a little skeptical that this whole thing would work out. It didn't matter. To me, it was
link |
00:29:04.400
so audacious. This whole thing being AI generally. And in some ways, it hasn't worked out the way I
link |
00:29:16.160
expected so far. That is to say, there are all these wonderful apps, thanks to deep learning
link |
00:29:25.760
and so on. But those are algorithmic. And on the symbolic processing side,
link |
00:29:36.640
there is very little yet. And that's a field that lies waiting for industrious graduate students.
link |
00:29:46.800
Maybe you can tell me about some figures that popped up in your life in the 80s with expert systems,
link |
00:29:53.040
where there was the symbolic AI possibilities of what most people think of as AI. If you dream
link |
00:30:01.840
of the possibilities of AI, it's really expert systems. And those hit a few walls and there
link |
00:30:08.000
were challenges there. And I think, yes, they will reemerge again with some new breakthroughs and so
link |
00:30:12.960
on. But what did that feel like, both the possibility and the winter that followed, the
link |
00:30:18.640
slowdown in research? This whole thing about AI winter is, to me, a crock.
link |
00:30:26.160
There were no winters. Because I look at the basic research that was being done in the 80s, which was
link |
00:30:33.200
supposed to be a winter, and my God, it was really important. It was laying down things that nobody had thought
link |
00:30:39.520
about before. But it was basic research. You couldn't monetize it. Hence the winter.
link |
00:30:44.880
Science research goes in fits and starts. It isn't this nice, smooth,
link |
00:30:54.240
oh, this follows this, follows this. No, it just doesn't work that way.
link |
00:30:59.200
Well, the interesting thing, the way winters happen, it's never the fault of the researchers.
link |
00:31:04.480
It's some source of hype, overpromising. Well, no, let me take that back. Sometimes it
link |
00:31:11.920
is the fault of the researchers. Sometimes certain researchers might overpromise the
link |
00:31:17.200
possibilities. They themselves believe that we're just a few years away. I just recently talked
link |
00:31:23.760
to Elon Musk and he believes he'll have an autonomous vehicle in a year and he believes it.
link |
00:31:30.240
A year? A year, yeah, he'd have mass deployment by that time.
link |
00:31:33.520
For the record, this is 2019 right now. So he's talking 2020.
link |
00:31:38.640
To do the impossible, you really have to believe it. And I think what's going to happen when you
link |
00:31:44.800
believe it, because there's a lot of really brilliant people around him, is some good stuff
link |
00:31:49.520
will come out of it. Some unexpected brilliant breakthroughs will come out of it. When you
link |
00:31:54.640
really believe it, when you work that hard. I believe that and I believe autonomous vehicles
link |
00:31:59.520
will come. I just don't believe it'll be in a year. I wish. But nevertheless, there's
link |
00:32:05.280
autonomous vehicles as a good example. There's a feeling, many companies have promised by 2021,
link |
00:32:11.680
by 2022 for GM. Basically, every single automotive company has promised they'll
link |
00:32:18.000
have autonomous vehicles. So that kind of overpromise is what leads to the winter.
link |
00:32:23.040
Because when we come to those dates, there won't be autonomous vehicles, and there'll be a feeling,
link |
00:32:28.960
well, wait a minute, if we took your word at that time, that means we just spent billions of
link |
00:32:34.160
dollars and made no money. And there's a counter response where everybody gives up on it.
link |
00:32:41.600
Sort of intellectually, at every level, the hope just dies. And all that's left is a few basic
link |
00:32:49.600
researchers. So you're uncomfortable with some aspects of this idea. Well, it's the difference
link |
00:32:56.800
between science and commerce. So you think science goes on the way it does?
link |
00:33:06.480
Science can really be killed by not getting proper funding or timely funding. I think
link |
00:33:14.800
Great Britain was a perfect example of that. The Lighthill Report in the early 1970s
link |
00:33:22.080
essentially said, there's no use in Great Britain putting any money into this.
link |
00:33:27.360
It's going nowhere. And this was all about social factions in Great Britain.
link |
00:33:36.960
Edinburgh hated Cambridge, and Cambridge hated Manchester, and somebody else can write that
link |
00:33:44.400
story. But it really did have a hard effect on research there. Now, they've come roaring back
link |
00:33:53.760
with DeepMind. But that's one guy and the visionaries around him.
link |
00:34:01.360
But just to push on that, it's kind of interesting, you have this dislike of the idea of an AI winter.
link |
00:34:08.320
Where's that coming from? Where were you? Oh, because I just don't think it's true.
link |
00:34:16.560
There was a particular period of time. It's a romantic notion, certainly.
link |
00:34:21.360
Yeah, well, I admire science, perhaps more than I admire commerce. Commerce is fine. Hey,
link |
00:34:32.960
you know, we all got to live. But science has a much longer view than commerce,
link |
00:34:44.080
and continues almost regardless. It can't continue totally regardless, but it almost
link |
00:34:54.000
regardless of what's saleable and what's not, what's monetizable and what's not.
link |
00:34:59.600
So the winter is just something that happens on the commerce side, and the science marches on.
link |
00:35:07.200
That's a beautifully optimistic inspired message. I agree with you. I think
link |
00:35:13.760
if we look at the key people who work in AI, like key scientists in most disciplines,
link |
00:35:19.440
they continue working out of the love for science. You can always scrape up some funding
link |
00:35:25.360
to stay alive, and they continue working diligently. But there certainly is a huge
link |
00:35:33.120
amount of funding now, and there's a concern on the AI side and deep learning. There's a concern
link |
00:35:39.840
that we might, with over promising, hit another slowdown in funding, which does affect the number
link |
00:35:46.160
of students, you know, that kind of thing. Yeah, it does. So the kind of ideas you had
link |
00:35:51.280
in Machines Who Think, did you continue that curiosity through the decades that followed?
link |
00:35:56.400
Yes, I did. And what was your view, historical view of how AI community evolved, the conversations
link |
00:36:04.800
about it, the work? Has it persisted the same way from its birth? No, of course not. It's just
link |
00:36:11.520
we were just talking. The symbolic AI really kind of dried up and it all became algorithmic.
link |
00:36:22.400
I remember a young AI student telling me what he was doing, and I had been away from the field
link |
00:36:29.520
long enough. I'd gotten involved with complexity at the Santa Fe Institute.
link |
00:36:34.080
I thought, algorithms, yeah, they're in the service of, but they're not the main event.
link |
00:36:41.680
No, they became the main event. That surprised me. And we all know the downside of this. We
link |
00:36:49.200
all know that if you're using an algorithm to make decisions based on a gazillion human decisions
link |
00:36:58.240
baked into it, you get all the mistakes that humans make, the bigotries, the shortsightedness,
link |
00:37:06.000
so on and so on. So you mentioned Santa Fe Institute. So you've written the novel Edge
link |
00:37:14.000
of Chaos, but it's inspired by the ideas of complexity, a lot of which have been extensively
link |
00:37:21.200
explored at the Santa Fe Institute. It's another fascinating topic of just sort of
link |
00:37:31.040
emergent complexity from chaos. Nobody really knows how it happens, but it seems to be where
link |
00:37:37.600
all the interesting stuff happens. So how does, first, not your novel, but just
link |
00:37:44.160
complexity in general and the work at Santa Fe fit into the bigger puzzle of the history of AI?
link |
00:37:49.520
Or it may be even your personal journey through that.
link |
00:37:54.480
One of the last projects I did concerning AI in particular was looking at the work of
link |
00:38:03.040
Harold Cohen, the painter. And Harold was deeply involved with AI. He was a painter first.
link |
00:38:12.960
And what his project, Aaron, which was a lifelong project, did, was reflect his own cognitive
link |
00:38:27.600
processes. Okay. Harold and I, even though I wrote a book about it, we had a lot of friction between
link |
00:38:34.800
us. And I went, I thought, this is it, you know, the book died. It was published and fell into a
link |
00:38:44.560
ditch. This is it. I'm finished. It's time for me to do something different. By chance,
link |
00:38:53.040
this was a sabbatical year for my husband. And we spent two months at the Santa Fe Institute
link |
00:38:59.280
and two months at Caltech. And then the spring semester in Munich, Germany. Okay. Those two
link |
00:39:09.200
months at the Santa Fe Institute were so restorative for me. And I began to, the institute was very
link |
00:39:19.520
small then. It was in some kind of office complex on Old Santa Fe Trail. Everybody kept their door
link |
00:39:26.240
open. So you could crack your head on a problem. And if you finally didn't get it, you could walk
link |
00:39:33.440
in to see Stuart Kaufman or any number of people and say, I don't get this. Can you explain?
link |
00:39:43.680
And one of the people that I was talking to about complex adaptive systems was Murray Gell-Mann.
link |
00:39:51.120
And I told Murray what Harold Cohen had done. And I said, you know, this sounds to me
link |
00:39:58.960
like a complex adaptive system. And he said, yeah, it is. Well, what do you know? Harold's
link |
00:40:06.080
Aaron had all these kissing cousins all over the world in science and in economics and so on and
link |
00:40:12.560
so forth. I was so relieved. I thought, okay, your instincts are okay. You're doing the right thing. I
link |
00:40:21.200
didn't have the vocabulary. And that was one of the things that the Santa Fe Institute gave me.
link |
00:40:25.920
If I could have rewritten that book, no, it had just come out. I couldn't rewrite it. I would have
link |
00:40:31.040
had a vocabulary to explain what Aaron was doing. Okay. So I got really interested in
link |
00:40:37.680
what was going on at the Institute. The people were again, bright and funny and willing to explain
link |
00:40:47.440
anything to this amateur. George Cowan, who was then the head of the Institute, said he thought it
link |
00:40:54.800
might be a nice idea if I wrote a book about the Institute. And I thought about it. And I had my
link |
00:41:02.160
eye on some other project. God knows what. And I said, oh, I'm sorry, George. Yeah, I'd really love
link |
00:41:08.960
to do it. But, you know, just not going to work for me at this moment. And he said, oh, too bad.
link |
00:41:13.840
I think it would make an interesting book. Well, he was right and I was wrong. I wish I'd done it.
link |
00:41:18.560
But that's interesting. I hadn't thought about that, that that was a road not taken that I wish
link |
00:41:24.080
I'd taken. Well, you know what? That's just on that point. It's quite brave for you as a writer,
link |
00:41:32.400
as sort of coming from a world of literature, the literary thinking and historical thinking. I mean,
link |
00:41:39.680
just from that world and bravely talking to quite, I assume, large egos in AI or in complexity and so
link |
00:41:52.640
on. How'd you do it? Like, where did you? I mean, I suppose they could be intimidated of you as well,
link |
00:42:00.560
because it's two different worlds. I never picked up that anybody was intimidated by me.
link |
00:42:06.160
But how were you brave enough? Where did you find the guts? Just dumb, dumb luck. I mean,
link |
00:42:11.680
this is an interesting rock to turn over. I'm going to write a book about it. And you know,
link |
00:42:16.160
people have enough patience with writers, if they think they're going to end up in a book,
link |
00:42:21.840
that they let you flail around and so on. Well, but they also look at whether the writer has...
link |
00:42:27.840
There's like, if there's a sparkle in their eye, if they get it. Yeah, sure. Right. When were you
link |
00:42:33.440
at the Santa Fe Institute? The time I'm talking about is 1990. Yeah, 1990, 1991, 1992. But we then,
link |
00:42:44.480
because Joe was an external faculty member, we were in Santa Fe every summer, we bought a house there.
link |
00:42:49.920
And I didn't have that much to do with the Institute anymore. I was writing my novels,
link |
00:42:55.600
I was doing whatever I was doing. But I loved the Institute and I loved
link |
00:43:06.720
the, again, the audacity of the ideas. That really appeals to me.
link |
00:43:12.960
I think that there's this feeling, much like in great institutes of neuroscience, for example,
link |
00:43:23.040
that they're in it for the long game of understanding something fundamental about
link |
00:43:29.840
reality and nature. And that's really exciting. So if we start to look a little bit more recently,
link |
00:43:36.800
how AI is really popular today. How is this world, you mentioned algorithmic, but in general,
link |
00:43:50.080
is the spirit of the people, the kind of conversations you hear through the grapevine
link |
00:43:54.320
and so on, is that different than the roots that you remember? No, the same kind of excitement,
link |
00:44:00.160
the same kind of, this is really going to make a difference in the world. And it will, it has.
link |
00:44:07.120
A lot of folks, especially young, 20 years old or something, they think we've just found something
link |
00:44:14.080
special here. We're going to change the world tomorrow. On a time scale, do you have
link |
00:44:22.000
a sense of what, of the time scale at which breakthroughs in AI happen?
link |
00:44:27.120
I really don't, because look at deep learning. That was, Geoffrey Hinton came up with the algorithm
link |
00:44:39.920
in 86. But it took all these years for the technology to be good enough to actually
link |
00:44:48.960
be applicable. So no, I can't predict that at all. I can't, I wouldn't even try.
link |
00:44:58.320
Well, let me ask you to, not to try to predict, but to speak to the, I'm sure in the 60s,
link |
00:45:06.000
as it continues now, there's people that think, let's call it, we can call it this
link |
00:45:11.040
fun word, the singularity. When there's a phase shift, there's some profound feeling where
link |
00:45:17.120
we're all really surprised by what's able to be achieved. I'm sure those dreams are there.
link |
00:45:23.040
I remember reading quotes in the 60s and those continued. How have your own views,
link |
00:45:29.200
maybe if you look back, about the timeline of a singularity changed?
link |
00:45:37.040
Well, I'm not a big fan of the singularity as Ray Kurzweil has presented it.
link |
00:45:45.760
But how would you define it differently from Ray Kurzweil? How do you think of the singularity in
link |
00:45:52.480
those terms? If I understand Kurzweil's view, it's sort of, there's going to be this moment when
link |
00:45:58.880
machines are smarter than humans and, you know, game over. However, the game over is,
link |
00:46:06.320
I mean, do they put us on a reservation? Do they, et cetera, et cetera. And
link |
00:46:10.800
first of all, machines are smarter than humans in some ways, all over the place. And they have been
link |
00:46:19.840
since adding machines were invented. So it's not, it's not going to come like some great
link |
00:46:27.280
Oedipal crossroads, you know, where they meet each other and our offspring, Oedipus, says,
link |
00:46:34.320
you're dead. It's just not going to happen. Yeah. So it's already game over with calculators,
link |
00:46:41.040
right? They already do much better at basic arithmetic than us. But, you know,
link |
00:46:47.920
there's human-like intelligence. And I don't mean the kind that destroys us. But, you know,
link |
00:46:55.840
somebody that you can have as a, as a friend, you can have deep connections with that kind of
link |
00:47:01.520
passing the Turing test and beyond, those kinds of ideas. Have you dreamt of those?
link |
00:47:07.680
Oh, yes, yes, yes.
link |
00:47:08.880
Those possibilities.
link |
00:47:10.160
In a book I wrote with Ed Feigenbaum, there's a little story called the geriatric robot. And
link |
00:47:18.880
how I came up with the geriatric robot is a story in itself. But here's, here's what the
link |
00:47:24.240
geriatric robot does. It doesn't just clean you up and feed you and wheel you out into the sun.
link |
00:47:29.520
Its great advantage is that it listens. It says, tell me again about the great coup of '73.
link |
00:47:43.520
Tell me again about how awful or how wonderful your grandchildren are and so on and so forth.
link |
00:47:53.040
And it isn't hanging around to inherit your money. It isn't hanging around because it can't get
link |
00:47:59.440
any other job. This is its job and so on and so forth. Well, I would love something like that.
link |
00:48:09.120
Yeah. I mean, for me, that deeply excites me. So I think there's a lot of us.
link |
00:48:15.600
Lex, you got to know it was a joke. I dreamed it up because I needed to talk to college students
link |
00:48:20.880
and I needed to give them some idea of what AI might be. And they were rolling in the aisles
link |
00:48:26.880
as I elaborated and elaborated and elaborated. When it went into the book,
link |
00:48:34.320
they took my hide off in the New York Review of Books. This is just what we've thought about
link |
00:48:40.240
these people in AI. They're inhuman. Oh, come on. Get over it.
link |
00:48:45.120
Don't you think that's a good thing for the world that AI could potentially...
link |
00:48:49.120
Why? I do. Absolutely. And furthermore, I want... I'm pushing 80 now. By the time I need help
link |
00:48:59.360
like that, I also want it to roll itself in a corner and shut the fuck up.
link |
00:49:06.960
Let me linger on that point. Do you really, though?
link |
00:49:09.920
Yeah, I do. Here's what.
link |
00:49:11.040
But you wanted to push back a little bit.
link |
00:49:15.120
But I have watched my friends go through the whole issue around having help in the house.
link |
00:49:22.480
And some of them have been very lucky and had fabulous help. And some of them have had people
link |
00:49:29.760
in the house who want to keep the television going on all day, who want to talk on their phones all day.
link |
00:49:35.760
No. So basically... Just roll yourself in the corner.
link |
00:49:38.960
Unfortunately, us humans, when we're assistants, we care... We're still...
link |
00:49:45.760
Even when we're assisting others, we care about ourselves more.
link |
00:49:48.400
Of course.
link |
00:49:49.280
And so you create more frustration. And a robot AI assistant can really optimize
link |
00:49:57.360
the experience for you. I was just speaking to the point... You actually bring up a very,
link |
00:50:03.040
very good point. But I was speaking to the fact that us humans are a little complicated,
link |
00:50:07.120
that we don't necessarily want a perfect servant. I don't... Maybe you disagree with that.
link |
00:50:14.560
But there's... I think there's a push and pull with humans.
link |
00:50:22.240
A little tension, a little mystery that, of course, that's really difficult for you to get right.
link |
00:50:28.080
But I do sense, especially in today with social media, that people are getting more and more
link |
00:50:35.120
lonely, even young folks. And sometimes, especially young folks, that loneliness,
link |
00:50:42.000
there's a longing for connection. And AI can help alleviate some of that loneliness.
link |
00:50:48.560
Some. Just somebody who listens. Like in person.
link |
00:50:54.640
That...
link |
00:50:54.960
So to speak.
link |
00:50:56.240
So to speak, yeah. So to speak.
link |
00:51:00.080
Yeah, that to me is really exciting. But so if we look at that level of intelligence,
link |
00:51:07.120
which is exceptionally difficult to achieve, actually, as the singularity, or whatever,
link |
00:51:12.560
that's the human-level bar that people have dreamt of too. Turing dreamt of it.
link |
00:51:19.520
He had a timeline. How has your own timeline evolved over time?
link |
00:51:26.320
I don't even think about it.
link |
00:51:27.520
You don't even think?
link |
00:51:28.240
No. Just this field has been so full of surprises for me.
link |
00:51:35.520
So you're just taking it in and seeing it as a fun bunch of basic science?
link |
00:51:39.120
Yeah, I just can't. Maybe that's because I've been around the field long enough to think,
link |
00:51:45.840
you know, don't go that way. Herb Simon was terrible about making these predictions of
link |
00:51:52.240
when this and that would happen.
link |
00:51:53.840
Right.
link |
00:51:54.320
And he was a sensible guy.
link |
00:51:58.400
Yeah. And his quotes are often used, right, as a...
link |
00:52:01.600
That's a legend, yeah.
link |
00:52:02.880
Yeah. Do you have concerns about AI, the existential threats that many people like Elon Musk and
link |
00:52:13.840
Sam Harris and others that are thinking about?
link |
00:52:16.160
Oh, yeah, yeah. That takes up a half a chapter in my book.
link |
00:52:21.200
I call it the male gaze.
link |
00:52:27.120
Well, you hear me out. The male gaze is actually a term from film criticism.
link |
00:52:36.240
And I'm blocking on the woman who dreamed this up.
link |
00:52:41.280
But she pointed out how most movies were made from the male point of view, that women were
link |
00:52:48.880
objects, not subjects. They didn't have any agency and so on and so forth.
link |
00:52:56.080
So when Elon and his pals, Hawking and so on,
link |
00:53:00.560
okay, AI is going to eat our lunch and our dinner and our midnight snack too,
link |
00:53:06.640
I thought, what? And I said to Ed Feigenbaum, oh, this is the first time.
link |
00:53:11.600
These guys have always been the smartest guys on the block.
link |
00:53:14.880
And here comes something that might be smarter. Ooh, let's stamp it out before it takes over.
link |
00:53:20.880
And Ed laughed. He said, I didn't think about it that way.
link |
00:53:24.080
But I did. I did. And it is the male gaze.
link |
00:53:32.000
Okay, suppose these things do have agency. Well, let's wait and see what happens.
link |
00:53:37.120
Can we imbue them with ethics? Can we imbue them with a sense of empathy?
link |
00:53:48.560
Or are they just going to be, I don't know, we've had centuries of guys like that?
link |
00:53:55.760
That's interesting that the ego, the male gaze is immediately threatened.
link |
00:54:03.600
And so you can't think in a patient, calm way of how the tech could evolve.
link |
00:54:13.280
Speaking of which, your 1996 book, The Futures of Women, I think at the time and now, certainly now,
link |
00:54:21.520
I mean, maybe at the time, but I'm more cognizant of it now, is extremely relevant.
link |
00:54:27.760
And you and Nancy Ramsey talk about four possible futures of women in science and tech.
link |
00:54:35.120
So if we look at the decades before and after the book was released, can you tell a history,
link |
00:54:43.120
sorry, of women in science and tech and how it has evolved? How have things changed? Where do we
link |
00:54:50.560
stand? Not enough. They have not changed enough. The way that women are ground down in computing is
link |
00:55:02.320
simply unbelievable. But what are the four possible futures for women in tech from the book?
link |
00:55:10.800
What you're really looking at are various aspects of the present. So for each of those,
link |
00:55:16.720
you could say, oh, yeah, we do have backlash. Look at what's happening with abortion and so on and so
link |
00:55:22.800
forth. We have one step forward, one step back. The golden age of equality was the hardest chapter
link |
00:55:31.280
to write. And I used something from the Santa Fe Institute, which is the sand pile effect,
link |
00:55:37.760
that you drop sand very slowly onto a pile, and it grows and it grows and it grows until
link |
00:55:44.480
suddenly it just breaks apart. And in a way, MeToo has done that. That was the last drop of sand
link |
00:55:56.640
that broke everything apart. That was a perfect example of the sand pile effect. And that made
link |
00:56:02.800
me feel good. It didn't change all of society, but it really woke a lot of people up.
link |
00:56:07.440
But are you in general optimistic about maybe after MeToo? MeToo is about a very specific kind
link |
00:56:15.760
of thing. Boy, solve that and you'll solve everything. But are you in general optimistic
link |
00:56:21.920
about the future? Yes, I'm a congenital optimist. I can't help it.
link |
00:56:28.400
What about AI? What are your thoughts about the future of AI? Of course, I get asked,
link |
00:56:35.600
what do you worry about? And the one thing I worry about is the things we can't anticipate.
link |
00:56:44.320
There's going to be something out of that field that we will just say,
link |
00:56:47.440
we weren't prepared for that. I am generally optimistic. When I first took up being interested
link |
00:56:59.040
in AI, like most people in the field, more intelligence was like more virtue. What could be
link |
00:57:06.560
bad? And in a way, I still believe that, but I realize that my notion of intelligence has
link |
00:57:15.760
broadened. There are many kinds of intelligence, and we need to imbue our machines with those many
link |
00:57:22.240
kinds. So you've now just finished or in the process of finishing the book, even working on
link |
00:57:32.720
the memoir. How have you changed? I know it's just writing, but how have you changed through the process?
link |
00:57:40.800
If you look back, what kind of stuff did it bring up to you that surprised you looking at the entirety
link |
00:57:48.880
of it all? The biggest thing, and it really wasn't a surprise, is how lucky I was, oh my,
link |
00:58:03.680
to have access to the beginning of a scientific field that is going to change the world.
link |
00:58:08.880
How did I luck out? And yes, of course, my view of things has widened a lot.
link |
00:58:23.040
If I can get back to one feminist part of our conversation, without knowing it, it really
link |
00:58:32.080
was subconscious. I wanted AI to succeed because I was so tired of hearing that intelligence was
link |
00:58:40.400
inside the male cranium. And I thought if there was something out there that wasn't a male
link |
00:58:49.360
thinking and doing well, then that would put the lie to this whole notion that intelligence resides
link |
00:58:57.120
in the male cranium. I did not know that until one night, Harold Cohen and I were
link |
00:59:05.760
having a glass of wine, maybe two, and he said, what drew you to AI? And I said, oh, you know,
link |
00:59:12.560
smartest people I knew, great project, blah, blah, blah. And I said, and I wanted something
link |
00:59:18.080
besides male smarts. And it just bubbled up out of me, Lex. It's brilliant, actually. So AI really
link |
00:59:32.160
humbles all of us and humbles the people that need to be humbled the most.
link |
00:59:37.040
Let's hope. Oh, wow, that is so beautiful. Pamela, thank you so much for talking to
link |
00:59:43.440
us. Oh, it's been a great pleasure. Thank you.