
Peter Singer: Suffering in Humans, Animals, and AI | Lex Fridman Podcast #107



link |
00:00:00.000
The following is a conversation with Peter Singer,
link |
00:00:03.440
professor of bioethics at Princeton University,
link |
00:00:06.200
best known for his 1975 book, Animal Liberation,
link |
00:00:10.280
that makes an ethical case against eating meat.
link |
00:00:14.240
He has written brilliantly from an ethical perspective
link |
00:00:17.680
on extreme poverty, euthanasia, human genetic selection,
link |
00:00:21.480
sports doping, the sale of kidneys,
link |
00:00:23.720
and generally happiness,
link |
00:00:26.880
including in his books, Ethics in the Real World,
link |
00:00:30.200
and The Life You Can Save.
link |
00:00:32.960
He was a key popularizer of the effective altruism movement
link |
00:00:36.320
and is generally considered
link |
00:00:37.800
one of the most influential philosophers in the world.
link |
00:00:42.200
Quick summary of the ads.
link |
00:00:43.760
Two sponsors, Cash App and Masterclass.
link |
00:00:47.080
Please consider supporting the podcast
link |
00:00:48.840
by downloading Cash App and using code LEX Podcast
link |
00:00:52.240
and signing up at masterclass.com slash LEX.
link |
00:00:55.920
Click the links, buy the stuff.
link |
00:00:57.840
It really is the best way to support the podcast
link |
00:01:00.080
and the journey I'm on.
link |
00:01:02.480
As you may know, I primarily eat a ketogenic
link |
00:01:05.920
or carnivore diet,
link |
00:01:07.520
which means that most of my diet is made up of meat.
link |
00:01:10.400
I do not hunt the food I eat,
link |
00:01:12.600
though one day I hope to.
link |
00:01:15.360
I love fishing, for example.
link |
00:01:17.800
Fishing and eating the fish I catch has always felt
link |
00:01:21.000
much more honest than participating
link |
00:01:23.640
in the supply chain of factory farming.
link |
00:01:26.440
From an ethics perspective,
link |
00:01:28.400
this part of my life has always had a cloud over it.
link |
00:01:31.960
It makes me think.
link |
00:01:33.680
I've tried a few times in my life
link |
00:01:35.960
to reduce the amount of meat I eat,
link |
00:01:37.960
but for some reason, whatever the makeup of my body,
link |
00:01:41.280
whatever the way I practice the dieting I have,
link |
00:01:44.120
I get a lot of mental and physical energy
link |
00:01:48.040
and performance from eating meat.
link |
00:01:50.640
So both intellectually and physically,
link |
00:01:54.040
it's a continued journey for me.
link |
00:01:56.200
I return to Peter's work often to reevaluate the ethics
link |
00:02:00.400
of how I live this aspect of my life.
link |
00:02:03.400
Let me also say that you may be a vegan
link |
00:02:06.200
or you may be a meat eater
link |
00:02:07.840
and may be upset by the words I say or Peter says,
link |
00:02:11.440
but I ask for this podcast
link |
00:02:13.800
and other episodes of this podcast
link |
00:02:16.080
that you keep an open mind.
link |
00:02:18.280
I may and probably will talk with people you disagree with.
link |
00:02:21.640
Please try to really listen,
link |
00:02:24.800
especially to people you disagree with
link |
00:02:27.400
and give me and the world the gift
link |
00:02:29.800
of being a participant in a patient, intelligent
link |
00:02:33.080
and nuanced discourse.
link |
00:02:34.840
If your instinct and desire is to be a voice of mockery
link |
00:02:38.640
towards those you disagree with, please unsubscribe.
link |
00:02:42.520
My source of joy and inspiration here
link |
00:02:44.840
has been to be a part of a community
link |
00:02:46.840
that thinks deeply and speaks with empathy and compassion.
link |
00:02:51.000
That is what I hope to continue being a part of
link |
00:02:53.840
and I hope you join as well.
link |
00:02:56.160
If you enjoy this podcast, subscribe on YouTube,
link |
00:02:58.920
review it with five stars on Apple Podcasts,
link |
00:03:01.320
follow on Spotify, support on Patreon
link |
00:03:04.240
or connect with me on Twitter at Lex Fridman.
link |
00:03:07.880
As usual, I'll do a few minutes of ads now
link |
00:03:09.920
and never any ads in the middle
link |
00:03:11.280
that can break the flow of the conversation.
link |
00:03:14.000
This show is presented by Cash App,
link |
00:03:16.520
the number one finance app in the App Store.
link |
00:03:18.920
When you get it, use code LEX Podcast.
link |
00:03:21.960
Cash App lets you send money to friends by Bitcoin
link |
00:03:25.200
and invest in the stock market with as little as $1.
link |
00:03:29.480
Since Cash App allows you to buy Bitcoin,
link |
00:03:31.760
let me mention that cryptocurrency in the context
link |
00:03:34.560
of the history of money is fascinating.
link |
00:03:37.360
I recommend Ascent of Money as a great book in this history.
link |
00:03:41.440
Debits and credits on ledgers started around 30,000 years ago.
link |
00:03:45.920
The US dollar was created over 200 years ago,
link |
00:03:48.520
and the first decentralized cryptocurrency
link |
00:03:51.040
released just over 10 years ago.
link |
00:03:53.720
So given that history,
link |
00:03:55.000
cryptocurrency is still very much
link |
00:03:56.960
in its early days of development,
link |
00:03:58.680
but it's still aiming to and just might redefine
link |
00:04:02.080
the nature of money.
link |
00:04:04.280
So again, if you get Cash App from the App Store
link |
00:04:06.960
or Google Play and use the code LEX Podcast,
link |
00:04:10.440
you get $10, and Cash App will also donate $10 to FIRST,
link |
00:04:14.840
an organization that is helping to advance robotic system
link |
00:04:17.520
education for young people around the world.
link |
00:04:20.840
This show is sponsored by Masterclass.
link |
00:04:23.400
Sign up at masterclass.com slash LEX to get a discount
link |
00:04:27.080
and to support this podcast.
link |
00:04:29.600
When I first heard about Masterclass,
link |
00:04:31.280
I thought it was too good to be true.
link |
00:04:33.120
For $180 a year, you get an all access pass
link |
00:04:36.640
to watch courses from to list some of my favorites,
link |
00:04:40.360
Chris Hadfield on Space Exploration,
link |
00:04:42.880
Neil deGrasse Tyson on Scientific Thinking and Communication,
link |
00:04:46.160
Will Wright, creator of SimCity and The Sims, on Game Design.
link |
00:04:50.360
I promise I'll start streaming games at some point soon.
link |
00:04:53.840
Carlos Santana on Guitar,
link |
00:04:55.800
Garry Kasparov on Chess, Daniel Negreanu on Poker,
link |
00:04:59.720
and many more.
link |
00:05:01.560
Chris Hadfield explaining how rockets work
link |
00:05:04.200
and the experience of being launched into space alone
link |
00:05:07.240
is worth the money.
link |
00:05:08.680
By the way, you can watch it on basically any device.
link |
00:05:12.760
Once again, sign up at masterclass.com slash LEX
link |
00:05:16.560
to get a discount and to support this podcast.
link |
00:05:20.200
And now here's my conversation with Peter Singer.
link |
00:05:25.000
When did you first become conscious of the fact
link |
00:05:27.560
that there is much suffering in the world?
link |
00:05:32.200
I think I was conscious of the fact
link |
00:05:33.680
that there's a lot of suffering in the world
link |
00:05:35.680
pretty much as soon as I was able to understand
link |
00:05:38.440
anything about my family and its background
link |
00:05:40.880
because I lost three of my four grandparents
link |
00:05:44.640
in the Holocaust.
link |
00:05:45.640
And obviously I knew why I only had one grandparent
link |
00:05:52.040
and she herself had been in the camps and survived.
link |
00:05:54.480
So I think I knew a lot about that pretty early.
link |
00:05:58.000
My entire family comes from the Soviet Union.
link |
00:06:01.120
I was born in the Soviet Union.
link |
00:06:03.600
Sort of World War II has deep roots in the culture
link |
00:06:07.800
and the suffering that the war brought the millions
link |
00:06:11.080
of people who died is in the music,
link |
00:06:13.920
is in the literature, is in the culture.
link |
00:06:16.800
What do you think was the impact
link |
00:06:18.880
of the war broadly on our society?
link |
00:06:24.960
The war had many impacts.
link |
00:06:28.080
I think one of them, a beneficial impact
link |
00:06:31.360
is that it showed what racism
link |
00:06:34.240
and authoritarian government can do.
link |
00:06:37.920
And at least as far as the West was concerned,
link |
00:06:41.040
I think that meant that I grew up in an era
link |
00:06:43.160
in which there wasn't the kind of overt racism
link |
00:06:48.000
and antisemitism that had existed for my parents
link |
00:06:51.760
in Europe; I was growing up in Australia.
link |
00:06:53.800
And certainly that was clearly seen
link |
00:06:57.560
as something completely unacceptable.
link |
00:06:59.400
There was also a fear of a further outbreak of war,
link |
00:07:04.520
which this time we expected would be nuclear
link |
00:07:07.720
because of the way the Second World War had ended.
link |
00:07:10.400
So there was this overshadowing of my childhood
link |
00:07:15.400
about the possibility that I would not live to grow up
link |
00:07:18.680
and be an adult because of a catastrophic nuclear war.
link |
00:07:23.680
The film On the Beach was made, in which the city
link |
00:07:27.680
that I was living in, Melbourne, was the last place on earth
link |
00:07:30.840
to have living human beings because of the nuclear cloud
link |
00:07:35.040
that was spreading from the North.
link |
00:07:37.800
So that certainly gave us a bit of that sense.
link |
00:07:41.800
There were many, there were clearly many other legacies
link |
00:07:44.280
that we got of the war as well
link |
00:07:46.280
and the whole setup of the world
link |
00:07:48.240
and the Cold War that followed.
link |
00:07:50.480
All of that has its roots in the Second World War.
link |
00:07:53.840
You know, there is much beauty that comes from war.
link |
00:07:56.400
Sort of, I had a conversation with Eric Weinstein.
link |
00:08:00.080
He said, everything is great about war
link |
00:08:02.640
except all the death and suffering.
link |
00:08:06.840
Do you think there's something positive
link |
00:08:09.840
that came from the war,
link |
00:08:12.480
the mirror that it put to our society,
link |
00:08:15.480
sort of the ripple effects on it, ethically speaking,
link |
00:08:18.840
do you think there are positive aspects to war?
link |
00:08:22.640
I find it hard to see positive aspects in war
link |
00:08:26.520
and some of the things that other people think of
link |
00:08:29.280
as positive and beautiful may be questionable.
link |
00:08:34.280
So there's a certain kind of patriotism.
link |
00:08:37.040
People say, you know, during wartime, we all pull together,
link |
00:08:39.680
we all work together against the common enemy.
link |
00:08:42.480
And that's true.
link |
00:08:43.920
An outside enemy does unite a country
link |
00:08:46.040
and in general, it's good for countries to be united
link |
00:08:48.600
and have common purposes.
link |
00:08:49.800
But it also engenders a kind of a nationalism
link |
00:08:54.000
and a patriotism that can't be questioned
link |
00:08:56.800
and that I'm more skeptical about.
link |
00:09:00.480
What about the brotherhood that people talk about
link |
00:09:04.440
from soldiers, the sort of counterintuitive sad idea
link |
00:09:11.440
that the closest that people feel to each other
link |
00:09:15.000
is in those moments of suffering,
link |
00:09:16.600
of being at the sort of the edge
link |
00:09:18.560
of seeing your comrades dying in your arms.
link |
00:09:23.400
That somehow brings people extremely closely together.
link |
00:09:25.840
Suffering brings people closer together.
link |
00:09:28.000
How do you make sense of that?
link |
00:09:30.200
It may bring people close together,
link |
00:09:31.800
but there are other ways of bonding
link |
00:09:34.800
and being close to people, I think,
link |
00:09:36.160
without the suffering and death that war entails.
link |
00:09:40.200
Perhaps you could see,
link |
00:09:42.000
you could already hear the romanticized Russian in me.
link |
00:09:46.000
We tend to romanticize suffering just a little bit
link |
00:09:49.000
in our literature and culture and so on.
link |
00:09:52.000
Could you take a step back?
link |
00:09:54.000
I apologize if it's a ridiculous question,
link |
00:09:56.000
but what is suffering?
link |
00:09:58.000
If you would try to define what suffering is,
link |
00:10:02.000
how would you go about it?
link |
00:10:04.000
Suffering is a conscious state.
link |
00:10:08.000
There can be no suffering for a being
link |
00:10:10.000
who is completely unconscious.
link |
00:10:13.000
And it's distinguished from other conscious states
link |
00:10:18.000
in terms of being one that,
link |
00:10:20.000
considered just in itself,
link |
00:10:23.000
we would rather be without.
link |
00:10:25.000
It's a conscious state that we want to stop
link |
00:10:27.000
if we're experiencing
link |
00:10:29.000
or we want to avoid having again
link |
00:10:31.000
if we've experienced it in the past.
link |
00:10:34.000
And that's, I emphasize, for its own sake,
link |
00:10:37.000
because, of course, people will say,
link |
00:10:39.000
well, suffering strengthens the spirit.
link |
00:10:41.000
It has good consequences.
link |
00:10:44.000
And sometimes it does have those consequences.
link |
00:10:47.000
And of course, sometimes we might undergo suffering.
link |
00:10:50.000
We set ourselves a challenge to run a marathon
link |
00:10:53.000
or climb a mountain,
link |
00:10:55.000
or even just to go to the dentist
link |
00:10:57.000
so that the toothache doesn't get worse,
link |
00:10:59.000
even though we know the dentist is going to hurt us
link |
00:11:01.000
to some extent.
link |
00:11:02.000
So I'm not saying that we never choose suffering,
link |
00:11:04.000
but I am saying that other things being equal,
link |
00:11:07.000
we would rather not be in that state of consciousness.
link |
00:11:10.000
Is the ultimate goal, sort of,
link |
00:11:12.000
you have the new 10 year anniversary release
link |
00:11:15.000
of the Life You Can Save Book,
link |
00:11:17.000
really influential book.
link |
00:11:19.000
We'll talk about it a bunch of times
link |
00:11:21.000
throughout this conversation.
link |
00:11:22.000
But do you think it's possible
link |
00:11:25.000
to eradicate suffering?
link |
00:11:28.000
Or is that the goal?
link |
00:11:30.000
Or do we want to achieve
link |
00:11:33.000
a kind of minimum threshold of suffering
link |
00:11:37.000
and then keeping a little drop of poison
link |
00:11:42.000
to keep things interesting in the world?
link |
00:11:47.000
In practice, I don't think we ever will eliminate suffering.
link |
00:11:50.000
So I think that little drop of poison, as you put it,
link |
00:11:53.000
or if you like, the contrasting dash
link |
00:11:56.000
of an unpleasant color, perhaps something like that,
link |
00:11:59.000
in an otherwise harmonious and beautiful composition,
link |
00:12:03.000
that is going to always be there.
link |
00:12:06.000
If you ask me whether, in theory,
link |
00:12:09.000
if we could get rid of it, we should.
link |
00:12:12.000
I think the answer is whether, in fact,
link |
00:12:15.000
we would be better off,
link |
00:12:18.000
or whether in terms of, by eliminating the suffering,
link |
00:12:20.000
we would also eliminate some of the highs,
link |
00:12:22.000
the positive highs.
link |
00:12:24.000
And if that's so, then we might be prepared to say
link |
00:12:27.000
it's worth having a minimum of suffering
link |
00:12:30.000
in order to have the best possible experiences as well.
link |
00:12:34.000
Is there a relative aspect to suffering?
link |
00:12:38.000
When you talk about eradicating poverty in the world,
link |
00:12:44.000
is this the more you succeed,
link |
00:12:47.000
the more the bar of what defines poverty raises,
link |
00:12:50.000
or is there, at the basic human ethical level,
link |
00:12:53.000
a bar that's absolute, that once you get above it,
link |
00:12:57.000
then we can morally converge to feeling
link |
00:13:02.000
like we have eradicated poverty?
link |
00:13:06.000
I think they're both,
link |
00:13:08.000
and I think this is true for poverty as well as suffering.
link |
00:13:11.000
There's an objective level of suffering,
link |
00:13:15.000
or of poverty, where we're talking about objective indicators,
link |
00:13:19.000
like you're constantly hungry,
link |
00:13:23.000
you can't get enough food,
link |
00:13:26.000
you're constantly cold, you can't get warm,
link |
00:13:30.000
you have some physical pains that you're never rid of.
link |
00:13:34.000
I think those things are objective.
link |
00:13:37.000
But it may also be true that if you do get rid of that
link |
00:13:40.000
and you get to the stage where all of those basic needs
link |
00:13:43.000
have been met,
link |
00:13:45.000
there may still be then new forms of suffering that develop.
link |
00:13:49.000
And perhaps that's what we're seeing in the affluent societies we have,
link |
00:13:53.000
that people get bored, for example.
link |
00:13:56.000
They don't need to spend so many hours a day
link |
00:13:58.000
earning money to get enough to eat and shelter.
link |
00:14:01.000
So now they're bored, they lack a sense of purpose.
link |
00:14:04.000
That can happen.
link |
00:14:06.000
And that then is a kind of a relative suffering
link |
00:14:10.000
that is distinct from the objective forms of suffering.
link |
00:14:14.000
But in your focus on eradicating suffering,
link |
00:14:17.000
you don't think about that kind of...
link |
00:14:19.000
the kind of interesting challenges and suffering
link |
00:14:22.000
that emerges in affluent societies.
link |
00:14:24.000
That's just not...in your ethical, philosophical brain,
link |
00:14:28.000
is that of interest at all?
link |
00:14:31.000
It would be of interest to me if we had eliminated
link |
00:14:34.000
all of the objective forms of suffering,
link |
00:14:36.000
which I think are generally more severe
link |
00:14:40.000
and also perhaps easier at this stage anyway to know how to eliminate.
link |
00:14:45.000
So, yes, in some future state,
link |
00:14:48.000
when we've eliminated those objective forms of suffering,
link |
00:14:50.000
I would be interested in trying to eliminate
link |
00:14:53.000
the relative forms as well.
link |
00:14:56.000
But that's not a practical need for me at the moment.
link |
00:15:00.000
Sorry to linger on it because you kind of said it,
link |
00:15:02.000
but just to...
link |
00:15:05.000
Is elimination the goal for the affluent society?
link |
00:15:08.000
So, is there a...
link |
00:15:11.000
Do you see a suffering as a creative force?
link |
00:15:14.000
Suffering can be a creative force.
link |
00:15:17.000
I think I'm repeating what I said about the highs
link |
00:15:20.000
and whether we need some of the lows to experience the highs.
link |
00:15:24.000
So, it may be that suffering makes us more creative
link |
00:15:26.000
and we regard that as worthwhile.
link |
00:15:29.000
Maybe that brings some of those highs with it
link |
00:15:32.000
that we would not have had if we'd had no suffering.
link |
00:15:36.000
I don't really know.
link |
00:15:38.000
Many people have suggested that
link |
00:15:40.000
and I certainly have no basis for denying it.
link |
00:15:44.000
And if it's true,
link |
00:15:46.000
I would not want to eliminate suffering completely.
link |
00:15:50.000
But the focus is on the absolute,
link |
00:15:53.000
not to be cold, not to be hungry.
link |
00:15:56.000
Yes.
link |
00:15:58.000
At the present stage of where the world's population is,
link |
00:16:01.000
that's the focus.
link |
00:16:03.000
Talking about human nature for a second,
link |
00:16:06.000
do you think people are inherently good
link |
00:16:08.000
or do we all have good and evil in us
link |
00:16:11.000
that basically everyone is capable of evil
link |
00:16:14.000
based on the environment?
link |
00:16:17.000
Certainly most of us have potential for both good and evil.
link |
00:16:21.000
I'm not prepared to say that everyone is capable of evil.
link |
00:16:24.000
Maybe some people who even in the worst of circumstances
link |
00:16:27.000
would not be capable of it.
link |
00:16:29.000
But most of us are very susceptible
link |
00:16:32.000
to environmental influences.
link |
00:16:34.000
So, when we look at things that we were talking about previously,
link |
00:16:38.000
let's say, what the Nazis did during the Holocaust,
link |
00:16:43.000
I think it's quite difficult to say,
link |
00:16:46.000
I know that I would not have done those things,
link |
00:16:50.000
even if I were in the same circumstances as those who did them.
link |
00:16:54.000
Even if, let's say, I had grown up under the Nazi regime
link |
00:16:58.000
and had been indoctrinated with racist ideas,
link |
00:17:02.000
had also had the idea that I must obey orders,
link |
00:17:07.000
follow the commands of the Fuhrer.
link |
00:17:10.000
Plus, of course, perhaps the threat that if I didn't do certain things,
link |
00:17:14.000
I might get sent to the Russian front,
link |
00:17:16.000
and that would be a pretty grim fate.
link |
00:17:19.000
I think it's really hard for anybody to say,
link |
00:17:22.000
nevertheless, I know I would not have killed those Jews or whatever else it was.
link |
00:17:28.000
What's your intuition? How many people will be able to say that?
link |
00:17:32.000
Truly to be able to say it.
link |
00:17:34.000
I think very few, less than 10%.
link |
00:17:37.000
To me, it seems a very interesting and powerful thing to meditate on.
link |
00:17:41.000
So I've read a lot about the war, the World War II,
link |
00:17:45.000
and I can't escape the thought that I would have not been one of the 10%.
link |
00:17:51.000
Right. I have to say, I simply don't know.
link |
00:17:55.000
I would like to hope that I would have been one of the 10%,
link |
00:17:59.000
but I don't really have any basis for claiming that I would have been different from the majority.
link |
00:18:05.000
Is it a worthwhile thing to contemplate?
link |
00:18:09.000
It would be interesting if we could find a way of really finding these answers.
link |
00:18:13.000
There obviously is quite a bit of research on people during the Holocaust,
link |
00:18:19.000
on how ordinary Germans got led to do terrible things,
link |
00:18:25.000
and there are also studies of the resistance.
link |
00:18:28.000
Some heroic people in the White Rose group, for example,
link |
00:18:32.000
who resisted even though they knew they were likely to die for it.
link |
00:18:37.000
But I don't know whether these studies really can answer your larger question
link |
00:18:43.000
of how many people would have been capable of doing that.
link |
00:18:47.000
Well, the reason I think it's interesting is in the world,
link |
00:18:52.000
as you described, when there are things that you'd like to do that are good,
link |
00:19:00.000
that are objectively good,
link |
00:19:02.000
it's useful to think about whether I'm not willing to do something,
link |
00:19:07.000
or I'm not willing to acknowledge something as good and the right thing to do
link |
00:19:11.000
because I'm simply scared of damaging my life in some kind of way.
link |
00:19:19.000
And that kind of thought exercise is helpful to understand
link |
00:19:22.000
what is the right thing in my current skill set and the capacity to do.
link |
00:19:27.000
There are things that are convenient,
link |
00:19:30.000
and I wonder if there are things that are highly inconvenient,
link |
00:19:33.000
where I would have to experience derision, or hatred, or death,
link |
00:19:38.000
or all those kinds of things, but it's truly the right thing to do.
link |
00:19:41.000
And that kind of balance is, I feel like in America,
link |
00:19:45.000
it's difficult to think in the current times,
link |
00:19:50.000
it seems easier to put yourself back in history,
link |
00:19:53.000
where you can sort of objectively contemplate whether,
link |
00:19:57.000
how willing you are to do the right thing when the cost is high.
link |
00:20:03.000
True, but I think we do face those challenges today,
link |
00:20:06.000
and I think we can still ask ourselves those questions.
link |
00:20:10.000
So one stand that I took more than 40 years ago now was to stop eating meat
link |
00:20:15.000
and become a vegetarian at a time when you hardly met anybody who was a vegetarian,
link |
00:20:21.000
or if you did, they might have been a Hindu,
link |
00:20:24.000
or they might have had some weird theories about meat and health.
link |
00:20:30.000
And I know thinking about making that decision,
link |
00:20:33.000
I was convinced that it was the right thing to do,
link |
00:20:35.000
but I still did have to think,
link |
00:20:37.000
are all my friends going to think that I'm a crank,
link |
00:20:40.000
because I'm now refusing to eat meat?
link |
00:20:44.000
So I'm not saying there were any terrible sanctions, obviously,
link |
00:20:48.000
but I thought about that, and I guess I decided,
link |
00:20:51.000
well, I still think this is the right thing to do,
link |
00:20:54.000
and I'll put up with that if it happens.
link |
00:20:56.000
And one or two friends were clearly uncomfortable with that decision,
link |
00:21:00.000
but that was pretty minor compared to the historical examples that we've been talking about.
link |
00:21:07.000
But other issues that we have around too, like global poverty
link |
00:21:12.000
and what we ought to be doing about that is another question
link |
00:21:15.000
where people, I think, can have the opportunity to take a stand
link |
00:21:19.000
on what's the right thing to do now.
link |
00:21:21.000
Climate change would be a third question
link |
00:21:23.000
where, again, people are taking a stand.
link |
00:21:26.000
I can look at Greta Thunberg there and say,
link |
00:21:29.000
well, I think it must have taken a lot of courage for a schoolgirl
link |
00:21:34.000
to say, I'm going to go on strike about climate change
link |
00:21:37.000
and see what happened.
link |
00:21:41.000
Yeah, especially in this divisive world,
link |
00:21:43.000
she gets exceptionally huge amounts of support and hatred both.
link |
00:21:47.000
Which is very difficult for a teenager to operate in.
link |
00:21:54.000
In your book, Ethics in the Real World,
link |
00:21:56.000
an amazing book, people should check it out.
link |
00:21:58.000
Very easy read.
link |
00:22:00.000
82 brief essays on things that matter.
link |
00:22:03.000
One of the essays asks, should robots have rights?
link |
00:22:07.000
You've written about this, so let me ask, should robots have rights?
link |
00:22:11.000
If we ever develop robots capable of consciousness,
link |
00:22:16.000
capable of having their own internal perspective
link |
00:22:20.000
on what's happening to them so that their lives can go well
link |
00:22:24.000
or badly for them, then robots should have rights.
link |
00:22:27.000
Until that happens, they shouldn't.
link |
00:22:30.000
So, is consciousness essentially a prerequisite to suffering?
link |
00:22:36.000
So, everything that possesses consciousness
link |
00:22:41.000
is capable of suffering, put another way?
link |
00:22:44.000
And if so, what is consciousness?
link |
00:22:48.000
I certainly think that consciousness is a prerequisite for suffering.
link |
00:22:53.000
You can't suffer if you're not conscious.
link |
00:22:58.000
But is it true that every being that is conscious
link |
00:23:02.000
will suffer or has to be capable of suffering?
link |
00:23:05.000
I suppose you could imagine a kind of consciousness,
link |
00:23:08.000
especially if we can construct it artificially,
link |
00:23:11.000
that's capable of experiencing pleasure,
link |
00:23:14.000
but just automatically cuts out the consciousness
link |
00:23:17.000
when they're suffering, sort of like an instant anesthesia
link |
00:23:20.000
as soon as something is going to cause you suffering.
link |
00:23:22.000
So, that's possible, but doesn't exist
link |
00:23:26.000
as far as we know on this planet yet.
link |
00:23:31.000
You asked what is consciousness?
link |
00:23:34.000
Philosophers often talk about it as there being a subject of experiences.
link |
00:23:39.000
So, you and I and everybody listening to this is a subject of experience.
link |
00:23:44.000
There is a conscious subject who is taking things in,
link |
00:23:48.000
responding to it in various ways,
link |
00:23:51.000
feeling good about it, feeling bad about it.
link |
00:23:54.000
And that's different from the kinds of artificial intelligence we have now.
link |
00:24:00.000
I take out my phone, I ask Google directions to where I'm going,
link |
00:24:06.000
Google gives me the directions, and I choose to take a different way.
link |
00:24:10.000
Google doesn't care. It's not like I'm offending Google or anything like that.
link |
00:24:14.000
There is no subject of experiences there.
link |
00:24:16.000
And I think that's the indication that Google AI we have now
link |
00:24:23.000
is not conscious or at least that level of AI is not conscious.
link |
00:24:27.000
And that's the way to think about it.
link |
00:24:29.000
It may be difficult to tell, of course, whether a certain AI is or isn't conscious.
link |
00:24:34.000
It may mimic consciousness, and we can't tell if it's only mimicking it
link |
00:24:37.000
or if it's the real thing.
link |
00:24:39.000
But that's what we're looking for.
link |
00:24:41.000
Is there a subject of experience, a perspective on the world
link |
00:24:45.000
from which things can go well or badly from that perspective?
link |
00:24:50.000
So, our idea of what suffering looks like
link |
00:24:54.000
comes from just watching ourselves when we're in pain.
link |
00:25:01.000
Or when we're experiencing pleasure. It's not only...
link |
00:25:03.000
Pleasure and pain.
link |
00:25:05.000
And then you could actually push back on this,
link |
00:25:09.000
but I would say that's how we kind of build an intuition about animals
link |
00:25:14.000
is we can infer the similarities between humans and animals
link |
00:25:18.000
and so infer that they're suffering or not based on certain things
link |
00:25:22.000
and they're conscious or not.
link |
00:25:24.000
So, what if robots...
link |
00:25:28.000
You mentioned Google Maps.
link |
00:25:30.000
And I've done this experiment, so I work in robotics just for my own self.
link |
00:25:35.000
I have several Roomba robots
link |
00:25:37.000
and I play with different speech interaction, voice based interaction.
link |
00:25:42.000
And if the Roomba or the robot or Google Maps
link |
00:25:46.000
shows any signs of pain, like screaming or moaning
link |
00:25:50.000
or being displeased by something you've done,
link |
00:25:54.000
that, in my mind, I can't help but immediately upgrade it.
link |
00:25:59.000
And even when I myself programmed it in,
link |
00:26:02.000
just having another entity that's now, for the moment, disjoint from me,
link |
00:26:07.000
showing signs of pain, makes me feel like it is conscious.
link |
00:26:11.000
Like, I immediately...
link |
00:26:13.000
Whatever, I immediately realize that it's not, obviously,
link |
00:26:18.000
that feeling is there.
link |
00:26:20.000
So, sort of, I guess...
link |
00:26:23.000
I guess, what do you think about a world
link |
00:26:26.000
where Google Maps and Roombas are pretending to be conscious
link |
00:26:32.000
and the descendants of apes are not smart enough to realize they're not
link |
00:26:37.000
or whatever, whether or not they're conscious, they appear to be conscious
link |
00:26:41.000
and so you then have to give them rights.
link |
00:26:44.000
The reason I'm asking that is that kind of capability may be closer than we realize.
link |
00:26:52.000
Yes, that kind of capability may be closer,
link |
00:26:58.000
but I don't think it follows that we have to give them rights.
link |
00:27:01.000
I suppose the argument for saying that in those circumstances
link |
00:27:05.000
we should give them rights is that if we don't,
link |
00:27:08.000
we'll harden ourselves against other beings who are not robots
link |
00:27:13.000
and who really do suffer.
link |
00:27:15.000
That's a possibility that, you know,
link |
00:27:18.000
if we get used to looking at a being suffering
link |
00:27:21.000
and saying, yeah, we don't have to do anything about that,
link |
00:27:23.000
that being doesn't have any rights,
link |
00:27:25.000
maybe we'll feel the same about animals, for instance.
link |
00:27:29.000
And interestingly, among philosophers and thinkers
link |
00:27:35.000
who denied that we have any direct duties to animals,
link |
00:27:40.000
and this includes people like Thomas Aquinas and Immanuel Kant,
link |
00:27:44.000
they did say, yes, but still it's better not to be cruel to them,
link |
00:27:49.000
not because of the suffering we're inflicting on the animals,
link |
00:27:52.000
but because if we are, we may develop a cruel disposition
link |
00:27:57.000
and this will be bad for humans, you know,
link |
00:28:00.000
because we're more likely to be cruel to other humans
link |
00:28:02.000
and that would be wrong.
link |
00:28:05.000
But you don't accept that kind of...
link |
00:28:07.000
I don't accept that as the basis of the argument
link |
00:28:10.000
for why we shouldn't be cruel to animals.
link |
00:28:12.000
I think the basis of the argument for why we shouldn't be cruel to animals
link |
00:28:14.000
is just that we're inflicting suffering on them
link |
00:28:16.000
and the suffering is a bad thing.
link |
00:28:18.000
But possibly I might accept some sort of parallel of that argument
link |
00:28:23.000
as a reason why you shouldn't be cruel to these robots
link |
00:28:28.000
that mimic the symptoms of pain
link |
00:28:30.000
if it's going to be harder for us to distinguish.
link |
00:28:33.000
I would venture to say, I'd like to disagree with you
link |
00:28:36.000
and with most people, I think.
link |
00:28:39.000
At the risk of sounding crazy,
link |
00:28:42.000
I would like to say that if that Roomba is dedicated
link |
00:28:47.000
to faking the consciousness and the suffering,
link |
00:28:50.000
I think it would be impossible for us...
link |
00:28:55.000
I would like to apply the same argument as with animals to robots
link |
00:29:00.000
that they deserve rights in that sense.
link |
00:29:02.000
Now, we might outlaw the addition of those kinds of features into Roombas,
link |
00:29:07.000
but once you do, I think...
link |
00:29:11.000
I'm quite surprised by the upgrade in consciousness
link |
00:29:16.000
that the display of suffering creates.
link |
00:29:20.000
It's a totally open world,
link |
00:29:22.000
but I'd like to just observe that the difference between animals and these robots
link |
00:29:27.000
is that in the robot case, we've added it in ourselves.
link |
00:29:32.000
Therefore, we can say something about how real it is.
link |
00:29:37.000
But I would like to say that the display of it is what makes it real.
link |
00:29:41.000
And I'm not a philosopher, I'm not making that argument,
link |
00:29:45.000
but I'd at least like to add that as a possibility.
link |
00:29:48.000
And I've been surprised by it.
link |
00:29:51.000
That's what I'm trying to sort of articulate, poorly, I suppose.
link |
00:29:55.000
So, there is a philosophical view has been held about humans,
link |
00:30:00.000
which is rather like what you're talking about, and that's behaviorism.
link |
00:30:04.000
So, behaviorism was employed both in psychology,
link |
00:30:07.000
people like B.F. Skinner, who was a famous behaviorist,
link |
00:30:10.000
but in psychology, it was more a kind of claim about
link |
00:30:14.000
what makes it a science: you need to study behavior,
link |
00:30:17.000
because that's what you can observe; you can't observe consciousness.
link |
00:30:20.000
But in philosophy, the view is defended by people like Gilbert Ryle,
link |
00:30:24.000
who was a professor of philosophy at Oxford,
link |
00:30:26.000
wrote a book called The Concept of Mind,
link |
00:30:29.000
in which, you know, in this phase,
link |
00:30:32.000
this was in the 1940s, the era of linguistic philosophy,
link |
00:30:35.000
he said, well, the meaning of a term is its use,
link |
00:30:39.000
and we use terms like so and so is in pain
link |
00:30:42.000
when we see somebody writhing or screaming
link |
00:30:45.000
or trying to escape some stimulus,
link |
00:30:47.000
and that's the meaning of the term.
link |
00:30:49.000
So, that's what it is to be in pain, and you point to the behavior.
link |
00:30:53.000
And Norman Malcolm, who was another philosopher in that school, from Cornell,
link |
00:31:00.000
had the view that, you know, so, what is it to dream?
link |
00:31:04.000
After all, we can't see other people's dreams.
link |
00:31:07.000
Well, when people wake up and say,
link |
00:31:10.000
I've just had a dream of, you know, here I was,
link |
00:31:13.000
undressed, walking down the main street or whatever it is you've dreamt,
link |
00:31:17.000
that's what it is to have a dream,
link |
00:31:19.000
it's basically to wake up and recall something.
link |
00:31:22.000
So, you could apply this to what you're talking about and say,
link |
00:31:27.000
so, what it is to be in pain is to exhibit these symptoms of pain behavior,
link |
00:31:31.000
and therefore, these robots are in pain, that's what the word means.
link |
00:31:36.000
But nowadays, not many people think that
link |
00:31:39.000
Ryle's kind of philosophical behaviorism is really very plausible.
link |
00:31:42.000
So, I think they would say the same about your view.
link |
00:31:45.000
So, yes, I just spoke with Noam Chomsky,
link |
00:31:48.000
who basically was part of dismantling the behaviorist movement.
link |
00:31:54.000
And I'm with that 100% for studying human behavior,
link |
00:32:00.000
but I am one of the few people in the world
link |
00:32:03.000
who has made Roombas scream in pain,
link |
00:32:08.000
and I just don't know what to do with that empirical evidence,
link |
00:32:14.000
because it's hard, sort of philosophically I agree,
link |
00:32:19.000
but the only reason I philosophically agree in that case
link |
00:32:23.000
is because I was the programmer,
link |
00:32:25.000
but if somebody else was a programmer,
link |
00:32:27.000
I'm not sure I would be able to interpret that well.
link |
00:32:29.000
So, I think it's a new world that I was just curious what your thoughts are.
link |
00:32:37.000
For now, you feel that the display of what we can kind of intellectually say
link |
00:32:46.000
is a fake display of suffering is not suffering.
link |
00:32:50.000
That's right. That would be my view.
link |
00:32:53.000
But that's consistent, of course, with the idea that it's part of our nature
link |
00:32:57.000
to respond to this display if it's reasonably authentically done.
link |
00:33:01.000
And therefore, it's understandable that people would feel this
link |
00:33:06.000
and maybe, as I said, it's even a good thing that they do feel it,
link |
00:33:11.000
and you wouldn't want to harden yourself against it
link |
00:33:13.000
because then you might harden yourself against beings who are really suffering.
link |
00:33:17.000
But there's this line, you know, so you said,
link |
00:33:20.000
once an artificial general intelligence system,
link |
00:33:23.000
a human level intelligence system, becomes conscious,
link |
00:33:26.000
I guess if I could just linger on it.
link |
00:33:28.000
Now, I've written really dumb programs that just say things that I told them to say,
link |
00:33:34.000
but how do you know when a system like Alexa, which is sufficiently complex
link |
00:33:40.000
that you can't introspect how it works,
link |
00:33:42.000
starts giving you signs of consciousness through natural language?
link |
00:33:48.000
There's a feeling there's another entity there that's self aware,
link |
00:33:52.000
that has a fear of death, of mortality,
link |
00:33:55.000
that has awareness of itself that we kind of associate with other living creatures.
link |
00:34:03.000
I guess I'm sort of trying to do the slippery slope from the very naive thing
link |
00:34:07.000
where I started into something where it's sufficiently a black box
link |
00:34:12.000
to where it's starting to feel like it's conscious.
link |
00:34:16.000
Where's that threshold where you would start getting uncomfortable
link |
00:34:20.000
with the idea of robot suffering, do you think?
link |
00:34:25.000
I don't know enough about the programming that would be going into this, really,
link |
00:34:29.000
to answer this question.
link |
00:34:31.000
But I presume that somebody who does know more about this could look at the program
link |
00:34:37.000
and see whether we can explain the behaviors in a parsimonious way
link |
00:34:43.000
that doesn't require us to suggest that some sort of consciousness has emerged
link |
00:34:49.000
or alternatively whether you're in a situation where you say,
link |
00:34:53.000
I don't know how this is happening.
link |
00:34:56.000
The program does generate a kind of artificial general intelligence
link |
00:35:01.000
which starts to do things itself and is autonomous of the basic programming
link |
00:35:08.000
that set it up.
link |
00:35:10.000
And so it's quite possible that actually we have achieved consciousness
link |
00:35:15.000
in a system of artificial intelligence.
link |
00:35:18.000
The approach that I work with, that most of the community is really excited about now,
link |
00:35:22.000
is learning methods, so machine learning.
link |
00:35:26.000
And the learning methods, unfortunately, are not capable of revealing their inner workings,
link |
00:35:31.000
which is why somebody like Noam Chomsky criticizes them.
link |
00:35:34.000
You've created powerful systems that are able to do certain things
link |
00:35:37.000
without understanding the theory, the physics, the science of how it works.
link |
00:35:42.000
And so it's possible if those are the kinds of methods that succeed
link |
00:35:46.000
we won't be able to know exactly, sort of try to reduce,
link |
00:35:52.000
try to find whether this thing is conscious or not,
link |
00:35:56.000
this thing is intelligent or not.
link |
00:35:58.000
It simply, when we talk to it, displays wit and humor
link |
00:36:04.000
and cleverness and emotion and fear
link |
00:36:09.000
and then we won't be able to say where in the billions of nodes,
link |
00:36:14.000
neurons in this artificial neural network is the fear coming from.
link |
00:36:20.000
So in that case that's a really interesting place where we do now start
link |
00:36:24.000
to return to behaviorism and say...
link |
00:36:28.000
Yeah, that is an interesting issue.
link |
00:36:34.000
I would say that if we have serious doubts and think it might be conscious
link |
00:36:39.000
then we ought to try to give it the benefit of the doubt.
link |
00:36:43.000
Just as I would say with animals, I think we can be highly confident
link |
00:36:47.000
that vertebrates are conscious, but when we get down
link |
00:36:52.000
to invertebrates, some, like the octopus, yes, but with insects
link |
00:36:57.000
it's much harder to be confident of that.
link |
00:37:01.000
I think we should give them the benefit of the doubt where we can
link |
00:37:04.000
which means I think it would be wrong to torture an insect
link |
00:37:09.000
but doesn't necessarily mean it's wrong to slap a mosquito
link |
00:37:13.000
that's about to bite you and stop you getting to sleep.
link |
00:37:16.000
So I think you try to achieve some balance in these circumstances of uncertainty.
link |
00:37:22.000
If it's okay with you, if you can go back just briefly.
link |
00:37:26.000
So 44 years ago, like you mentioned, 40 plus years ago,
link |
00:37:29.000
you wrote Animal Liberation, the classic book
link |
00:37:33.000
that was a foundation of the animal liberation movement.
link |
00:37:39.000
Can you summarize the key set of ideas that are underpinning that book?
link |
00:37:44.000
Certainly, the key idea that underlies that book is the concept of speciesism
link |
00:37:52.000
though I did not invent that term.
link |
00:37:55.000
I took it from a man called Richard Ryder, who was in Oxford when I was, and who had written
link |
00:37:59.000
a pamphlet about experiments on chimpanzees that used that term.
link |
00:38:05.000
But I think I contributed to making it philosophically more precise
link |
00:38:09.000
and to getting it into a broader audience.
link |
00:38:12.000
And the idea is that we have a bias or a prejudice
link |
00:38:17.000
against taking seriously the interests of beings who are not members of our species.
link |
00:38:23.000
Just as in the past, Europeans, for example,
link |
00:38:27.000
have had a bias against taking seriously the interests of Africans, racism.
link |
00:38:31.000
And men have had a bias against taking seriously the interests of women, sexism.
link |
00:38:37.000
So I think something analogous, not completely identical,
link |
00:38:41.000
but something analogous, goes on and has gone on for a very long time
link |
00:38:46.000
with the way humans see themselves vis a vis animals.
link |
00:38:50.000
We see ourselves as more important.
link |
00:38:54.000
We see animals as existing to serve our needs in various ways.
link |
00:38:59.000
And you can find this very explicit in earlier philosophers
link |
00:39:03.000
from Aristotle through to Kant and others.
link |
00:39:06.000
And either we don't need to take their interests into account at all
link |
00:39:13.000
or we can discount it because they're not humans.
link |
00:39:18.000
They count a little bit, but they don't count nearly as much as humans do.
link |
00:39:23.000
My book argues that that attitude is responsible for a lot of the things
link |
00:39:28.000
that we do to animals that are wrong, confining them indoors
link |
00:39:32.000
in very crowded cramped conditions in factory farms
link |
00:39:36.000
to produce meat or eggs or milk more cheaply,
link |
00:39:39.000
using them in some research that's by no means essential for our survival or well being
link |
00:39:47.000
and a whole lot, you know, some of the sports and things that we do to animals.
link |
00:39:52.000
So I think that's unjustified because I think the significance of pain and suffering
link |
00:40:01.000
does not depend on the species of the being who is in pain or suffering any more
link |
00:40:05.000
than it depends on the race or sex of the being who is in pain or suffering.
link |
00:40:10.000
And I think we ought to rethink our treatment of animals along the lines of saying
link |
00:40:16.000
if the pain is just as great in an animal, then it's just as bad that it happens as if it were a human.
link |
00:40:24.000
Maybe if I could ask, I apologize, hopefully it's not a ridiculous question,
link |
00:40:29.000
but so as far as we know, we cannot communicate with animals through natural language,
link |
00:40:36.000
but we would be able to communicate with robots.
link |
00:40:40.000
So returning to sort of a small parallel between perhaps animals in the future of AI,
link |
00:40:46.000
if we do create an AGI system or as we approach creating that AGI system,
link |
00:40:53.000
what kind of questions would you ask her to try to intuit whether there is consciousness
link |
00:41:05.000
or, more importantly, whether there's capacity to suffer?
link |
00:41:12.000
I might ask the AGI what she was feeling.
link |
00:41:18.000
Well, does she have feelings?
link |
00:41:20.000
And if she says yes to describe those feelings, to describe what they were like,
link |
00:41:25.000
to see what the phenomenal account of consciousness is like, that's one question.
link |
00:41:33.000
I might also try to find out if the AGI has a sense of itself.
link |
00:41:41.000
So for example, the idea, we often ask people,
link |
00:41:46.000
suppose you're in a car accident and your brain were transplanted into someone else's body,
link |
00:41:51.000
do you think you would survive or would it be the person whose body was still surviving,
link |
00:41:56.000
your body having been destroyed?
link |
00:41:58.000
And most people say, I think if my brain was transplanted along with my memories and so on, I would survive.
link |
00:42:04.000
So we could ask AGI those kinds of questions.
link |
00:42:08.000
If they were transferred to a different piece of hardware, would they survive?
link |
00:42:13.000
What would survive?
link |
00:42:15.000
Sort of on that line, another perhaps absurd question,
link |
00:42:19.000
but do you think having a body is necessary for consciousness?
link |
00:42:25.000
So do you think digital beings can suffer?
link |
00:42:31.000
Presumably digital beings need to be running on some kind of hardware, right?
link |
00:42:37.000
Yes, it ultimately boils down to hardware, but this is exactly what you just said about moving the brain.
link |
00:42:42.000
So you could move it to a different kind of hardware, you know, and they could say, look,
link |
00:42:46.000
your hardware is getting worn out, we're going to transfer you to a fresh piece of hardware,
link |
00:42:52.000
so we're going to shut you down for a time.
link |
00:42:55.000
But don't worry, you know, you'll be running very soon on a nice fresh piece of hardware.
link |
00:43:00.000
And you could imagine this conscious AGI saying, that's fine, I don't mind having a little rest.
link |
00:43:05.000
Just make sure you don't lose me or something like that.
link |
00:43:08.000
Yeah, I mean, that's an interesting thought that even with us humans, the suffering is in the software.
link |
00:43:15.000
We right now don't know how to repair the hardware.
link |
00:43:19.000
But we're getting better and better at it.
link |
00:43:23.000
I mean, some people dream about one day being able to transfer certain aspects of the software to another piece of hardware.
link |
00:43:33.000
What do you think?
link |
00:43:34.000
Just on that topic, there's been a lot of exciting innovation in brain computer interfaces.
link |
00:43:42.000
I don't know if you're familiar with the companies like Neuralink with Elon Musk,
link |
00:43:46.000
communicating both ways from a computer, being able to send signals that activate neurons,
link |
00:43:51.000
and being able to read spikes from neurons with the dream of being able to expand,
link |
00:43:58.000
or increase the bandwidth at which your brain can, like, look up articles on Wikipedia, kind of thing,
link |
00:44:05.000
to expand the knowledge capacity of the brain.
link |
00:44:08.000
Do you find that notion interesting, the expansion of the human mind?
link |
00:44:15.000
Yes, that's very interesting.
link |
00:44:17.000
I'd love to be able to have that increased bandwidth.
link |
00:44:20.000
And I, you know, I want better access to my memory, I have to say too.
link |
00:44:24.000
As I get older, you know, I talk to my wife about things that we did 20 years ago or something.
link |
00:44:30.000
Her memory is often better about particular events.
link |
00:44:33.000
Where were we? Who was at that event?
link |
00:44:35.000
What did he or she wear even?
link |
00:44:37.000
She may know and I have not the faintest idea about this, but perhaps it's somewhere in my memory.
link |
00:44:41.000
And if I had this extended memory, I could search that particular year and rerun those things.
link |
00:44:47.000
I think that would be great.
link |
00:44:49.000
In some sense, we already have that by storing so much of our data online, like pictures of different events.
link |
00:44:55.000
Yes. Well, Gmail is fantastic for that because, you know, people email me as if they know me well.
link |
00:45:00.000
And I haven't got a clue who they are, but then I searched for their name.
link |
00:45:03.000
They emailed me in 2007, and I know who they are now.
link |
00:45:07.000
Yeah, so we're already taking the first steps already.
link |
00:45:11.000
So on the flip side of AI, people like Stuart Russell and others focus on the control problem, value alignment in AI,
link |
00:45:19.000
which is the problem of making sure we build systems that align with our own values, our ethics.
link |
00:45:25.000
Do you think, sort of at a high level, about how we go about building systems?
link |
00:45:31.000
Do you think it's possible to build systems that align with our values, align with our human ethics?
link |
00:45:36.000
Or living being ethics?
link |
00:45:39.000
Presumably, it's possible to do that.
link |
00:45:43.000
I know that a lot of people think that there's a real danger that we won't, that we'll more or less accidentally lose control of AI.
link |
00:45:51.000
Do you have that fear yourself personally?
link |
00:45:56.000
I'm not quite sure what to think.
link |
00:45:58.000
I talked to philosophers like Nick Bostrom and Toby Ord and they think that this is a real problem.
link |
00:46:04.000
We need to worry about.
link |
00:46:07.000
Then I talked to people who work for Microsoft or DeepMind or somebody and they say,
link |
00:46:14.000
no, we're not really that close to producing superintelligent AI.
link |
00:46:19.000
So if you look at Nick Bostrom's argument, it's very hard to defend.
link |
00:46:25.000
Of course, I am myself an engineer of AI systems, so I'm more with the DeepMind folks, where it seems that we're really far away.
link |
00:46:32.000
But then the counter argument is, is there any fundamental reason that we'll never achieve it?
link |
00:46:39.000
And if not, then eventually there'll be a dire existential risk.
link |
00:46:44.000
So we should be concerned about it.
link |
00:46:46.000
And do you find that argument at all appealing, in this domain or any domain?
link |
00:46:52.000
That eventually this will be a problem, so we should be worried about it?
link |
00:46:56.000
Yes, I think it's a problem.
link |
00:46:58.000
I think that's a valid point.
link |
00:47:03.000
Of course, when you say eventually, that raises the question, how far off is that?
link |
00:47:11.000
And is there something that we can do about it now?
link |
00:47:13.000
Because if we're talking about this is going to be 100 years in the future and you consider how rapidly our knowledge of artificial intelligence has grown in the last 10 or 20 years,
link |
00:47:23.000
it seems unlikely that there's anything much we could do now that would influence whether this is going to happen 100 years in the future.
link |
00:47:33.000
People 80 years in the future would be in a much better position to say, this is what we need to do to prevent this happening than we are now.
link |
00:47:41.000
So to some extent, I find that reassuring.
link |
00:47:44.000
But I'm all in favor of some people doing research into this to see if indeed it is that far off or if we are in a position to do something about it sooner.
link |
00:47:55.000
I'm very much of the view that extinction is a terrible thing.
link |
00:48:00.000
And therefore, even if the risk of extinction is very small, if we can reduce that risk, that's something that we ought to do.
link |
00:48:11.000
My disagreement with some of these people who talk about long term risks, extinction risks, is only about how much priority that should have as compared to present questions.
link |
00:48:20.000
So essentially, if you look at the math of it from a utilitarian perspective, if it's an existential risk, so everybody dies,
link |
00:48:28.000
it feels like an infinity in the math equation, which makes the math of the priorities difficult to do.
link |
00:48:39.000
And if we don't know the time scale, and you can legitimately argue that it's not zero probability that it'll happen tomorrow,
link |
00:48:48.000
how do you deal with these kinds of existential risks, like from nuclear war, from nuclear weapons, from biological weapons,
link |
00:48:57.000
from... I'm not sure global warming falls into that category, because global warming is a lot more gradual.
link |
00:49:04.000
And people say it's not an existential risk because there'll always be possibilities of some humans existing,
link |
00:49:10.000
farming Antarctica or Northern Siberia or something of that sort.
link |
00:49:14.000
But you don't find complete existential risks a fundamental, like an overriding, part of the equations of ethics?
link |
00:49:25.000
No, certainly if you treat it as an infinity, then it plays havoc with any calculations.
link |
00:49:32.000
But arguably we shouldn't.
link |
00:49:35.000
One of the ethical assumptions that goes into this is that the loss of future lives, that is of merely possible lives of beings who may never exist at all,
link |
00:49:46.000
is in some way comparable to the sufferings or deaths of people who do exist at some point.
link |
00:49:54.000
And that's not clear to me.
link |
00:49:57.000
I think there's a case for saying that, but I also think there's a case for taking the other view.
link |
00:50:01.000
So that has some impact on it.
link |
00:50:04.000
Of course, you might say, ah, yes, but still if there's some uncertainty about this and the costs of extinction are infinite,
link |
00:50:12.000
then still it's going to overwhelm everything else.
link |
00:50:15.000
But I suppose I'm not convinced of that.
link |
00:50:20.000
I'm not convinced that it's really infinite here.
link |
00:50:23.000
And even Nick Bostrom in his discussion of this doesn't claim that there'll be an infinite number of lives lived.
link |
00:50:31.000
What is it, 10 to the 56th or something?
link |
00:50:33.000
It's a vast number that I think he calculates.
link |
00:50:36.000
This is assuming we can upload consciousness onto these, you know, digital forms, and therefore they'll be much more energy efficient.
link |
00:50:45.000
But he calculates the amount of energy in the universe or something like that.
link |
00:50:48.000
So the numbers are vast, but not infinite, which gives you some prospect maybe of resisting some of the argument.
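The vast-but-finite point can be seen in a two-line expected-value sketch; the probability and payoff numbers below are invented purely for illustration:

```python
# Why "vast but finite" matters: with Bostrom-style finite numbers, the
# expected value of reducing extinction risk is huge but can still be
# compared against other goods; with a literal infinity, any nonzero
# probability swamps every other term. All numbers are invented.
import math

future_lives = 1e56      # a vast but finite estimate of possible future lives
risk_reduction = 1e-30   # tiny probability improvement from some intervention

finite_ev = risk_reduction * future_lives   # about 1e26: enormous, yet comparable
infinite_ev = risk_reduction * math.inf     # infinite: dominates all finite values

if __name__ == "__main__":
    print(finite_ev < math.inf)      # a finite payoff can still be traded off
    print(math.isinf(infinite_ev))   # an infinite payoff cannot
```

This is the arithmetic behind Singer's remark: a finite number, however vast, leaves some prospect of weighing extinction risk against present suffering, while an infinity "plays havoc with any calculations."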
link |
00:50:55.000
The beautiful thing with Nick's arguments is he quickly jumps from the individual scale to the universal scale,
link |
00:51:01.000
which is just awe inspiring to think of when you think about the entirety of the span of time of the universe.
link |
00:51:08.000
It's both interesting from a computer science perspective, AI perspective and from an ethical perspective, the idea of utilitarianism.
link |
00:51:15.000
Could you say what is utilitarianism?
link |
00:51:19.000
Utilitarianism is the ethical view that the right thing to do is the act that has the greatest expected utility,
link |
00:51:28.000
where what that means is it's the act that will produce the best consequences,
link |
00:51:34.000
discounted by the odds that you won't be able to produce those consequences that something will go wrong.
link |
00:51:40.000
But in a simple case, let's assume we have certainty about what the consequences of actions will be,
link |
00:51:46.000
then the right action is the action that will produce the best consequences.
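Singer's definition can be rendered as a minimal sketch; the actions and all the probability and utility numbers below are invented, purely illustrative:

```python
# Expected-utility choice: each action has a list of (probability, utility)
# outcomes, and the "right" action is the one with the greatest expected
# utility. The actions and numbers are made up for illustration.

def expected_utility(outcomes):
    """Probability-weighted sum of utilities for one action."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """Pick the action whose expected utility is highest."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

actions = {
    "donate": [(0.9, 10.0), (0.1, -1.0)],  # usually helps, small chance it backfires
    "abstain": [(1.0, 0.0)],               # certain, neutral outcome
}

if __name__ == "__main__":
    print(best_action(actions))  # donate: expected utility 8.9 versus 0.0
```

The "discounted by the odds" clause in the definition is exactly the probability weighting here; the practical difficulty Singer raises next is that in real life neither the probabilities nor the utilities are known.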
link |
00:51:50.000
Is that always... and by the way, there's a bunch of nuanced stuff that you talked about with Sam Harris on his podcast that people should go listen to.
link |
00:51:58.000
It's great.
link |
00:52:00.000
It's like two hours of moral philosophy discussion.
link |
00:52:03.000
But is that an easy calculation?
link |
00:52:05.000
No, it's a difficult calculation and actually there's one thing that I need to add and that is utilitarians,
link |
00:52:13.000
certainly the classical utilitarians think that by best consequences, we're talking about happiness and the absence of pain and suffering.
link |
00:52:21.000
There are other consequentialists who are not really utilitarians who say there are different things that could be good consequences.
link |
00:52:29.000
Justice, freedom, human dignity, knowledge, they all count as good consequences too.
link |
00:52:35.000
And that makes the calculations even more difficult because then you need to know how to balance these things off.
link |
00:52:40.000
If you are just talking about well being using that term to express happiness and the absence of suffering,
link |
00:52:48.000
I think that the calculation becomes more manageable in a philosophical sense.
link |
00:52:56.000
Still, in practice, we don't know how to do it.
link |
00:52:59.000
We don't know how to measure quantities of happiness and misery.
link |
00:53:02.000
We don't know how to calculate the probabilities that different actions will produce this or that.
link |
00:53:08.000
So at best we can use it as a rough guide to different actions and one where we have to focus on the short term consequences because we just can't really predict all of the longer term ramifications.
link |
00:53:25.000
So what about the extreme suffering of very small groups?
link |
00:53:32.000
Utilitarianism is focused on the overall aggregate, right?
link |
00:53:37.000
Would you say you yourself are utilitarian?
link |
00:53:41.000
Yes, I'm utilitarian.
link |
00:53:43.000
What do you make of the difficult, ethical, maybe poetic suffering of very few individuals?
link |
00:53:55.000
I think it's possible that that gets overridden by benefits to very large numbers of individuals.
link |
00:54:00.000
I think that can be the right answer.
link |
00:54:03.000
But before we conclude that it is the right answer, we have to know how severe the suffering is and how that compares with the benefits.
link |
00:54:12.000
So I tend to think that extreme suffering is worse, or is further, if you like, below the neutral level than extreme happiness or bliss is above it.
link |
00:54:27.000
So when I think about the worst experiences possible and the best experiences possible, I don't think of them as equidistant from neutral.
link |
00:54:36.000
So it's not like a scale that goes from minus 100 through zero, as a neutral level, to plus 100.
link |
00:54:43.000
Because I know that I would not exchange an hour of my most pleasurable experiences for an hour of my most painful experiences.
link |
00:54:52.000
I wouldn't have an hour of my most painful experiences even for two hours or 10 hours of my most pleasurable experiences.
link |
00:55:01.000
Did I say that correctly?
link |
00:55:03.000
Yeah, maybe 20 hours then. Is it 21? What's the exchange rate?
link |
00:55:07.000
So that's the question. What is the exchange rate? But I think it can be quite high.
link |
00:55:11.000
So that's why you shouldn't just assume that it's okay to make one person suffer extremely in order to make two people much better off.
link |
00:55:21.000
It might be a much larger number.
link |
00:55:23.000
But at some point, I do think you should aggregate, and the result will be right, even though it violates our intuitions of justice and fairness, of giving priority to those who are worse off, whatever it might be.
link |
00:55:39.000
At some point, I still think that will be the right thing to do.
link |
00:55:43.000
Yeah, some complicated nonlinear function.
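One way to sketch that "complicated nonlinear function", under my own simplifying assumption (not anything Singer endorses) that suffering just gets a steeper weight than happiness:

```python
# Asymmetric aggregation on a -100..+100 welfare scale: suffering is
# weighted more heavily than happiness, so one person in agony is not
# simply cancelled by one person in bliss. The weighting factor k is a
# made-up illustrative knob standing in for the unknown "exchange rate".

def moral_weight(welfare: float, k: float = 2.0) -> float:
    """Scale suffering (negative welfare) by k; leave happiness as-is."""
    return welfare * k if welfare < 0 else welfare

def aggregate(welfares, k: float = 2.0) -> float:
    """Sum of morally weighted welfare across individuals."""
    return sum(moral_weight(w, k) for w in welfares)

if __name__ == "__main__":
    # One person in agony vs one in bliss: a linear sum says they cancel;
    # the weighted sum says the suffering dominates.
    print(aggregate([100, -100]))                 # -100.0
    # But with enough beneficiaries, aggregation still wins out,
    # matching Singer's point that at some point you should aggregate.
    print(aggregate([-100, 60, 60, 60, 60, 60]))  # 100.0
```

The open question in the conversation, what the exchange rate is, corresponds to choosing k; the sketch only shows that any finite k preserves both of Singer's claims at once.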
link |
00:55:46.000
Can I ask a sort of out-there question? The more and more we put our data out there, the more we're able to measure a bunch of factors of each of our individual human lives.
link |
00:55:55.000
And I can foresee the ability to estimate well being using whatever we
link |
00:56:02.000
together collectively agree is a good objective function from a utilitarian perspective.
link |
00:56:08.000
Do you think it will be possible, and is it a good idea, to push that kind of analysis to then make public decisions, perhaps with the help of AI, that say, here's a tax rate at which well being will be optimized?
link |
00:56:28.000
Yeah, that would be great if we really knew that, if we really could calculate that.
link |
00:56:32.000
No, but do you think it's possible to converge towards an agreement amongst humans towards an objective function or is it just a hopeless pursuit?
link |
00:56:41.000
I don't think it's hopeless.
link |
00:56:43.000
I think it would be difficult to converge towards agreement, at least at present, because some people would say, you know, I've got different views about justice and I think you ought to give priority to those who are worse off.
link |
00:56:55.000
Even though I acknowledge that the gains that the worse off are making are less than the gains that those who are sort of medium badly off could be making.
link |
00:57:05.000
So we still have all of these intuitions that we argue about.
link |
00:57:09.000
So I don't think we would get agreement, but the fact that we wouldn't get agreement doesn't show that there isn't a right answer there.
link |
00:57:17.000
Who do you think gets to say what is right and wrong?
link |
00:57:21.000
Do you think there's a place for ethics oversight from the government?
link |
00:57:26.000
So I'm thinking, in the case of AI, of overseeing what kind of decisions it can and cannot make.
link |
00:57:33.000
But also, if you look at animal rights, or rather not rights, or perhaps rights, the ideas you've explored in Animal Liberation, who gets to decide? You eloquently and beautifully write in your book that, you know, we shouldn't do this.
link |
00:57:50.000
But are there some harder rules that should be imposed?
link |
00:57:53.000
Or is this a collective thing we converge towards as a society, and thereby make better and better ethical decisions?
link |
00:58:01.000
Politically, I'm still a democrat, despite looking at the flaws in democracy and the way it doesn't always work very well.
link |
00:58:10.000
So I don't see a better option than allowing the public to vote for governments in accordance with their policies.
link |
00:58:20.000
And I hope that they will vote for policies that reduce the suffering of animals and reduce the suffering of distant humans, whether geographically distant or distant because they're future humans.
link |
00:58:34.000
But I recognize that democracy isn't really well set up to do that.
link |
00:58:40.000
And in a sense, you could imagine a wise and benevolent, you know, omnibenevolent leader who would do that better than democracies could.
link |
00:58:51.000
But in the world in which we live, it's difficult to imagine that this leader isn't going to be corrupted by a variety of influences.
link |
00:59:01.000
You know, we've had so many examples of people who've taken power with good intentions and then have ended up being corrupt and favoring themselves.
link |
00:59:12.000
So I don't know, you know, that's why as I say, I don't know that we have a better system than democracy to make these decisions.
link |
00:59:20.000
Well, so you also discussed effective altruism, which is a mechanism for going around government, for putting the power in the hands of the people to donate money towards causes, to, you know, remove the middleman and give it directly to the causes that they care about.
link |
00:59:41.000
Maybe this is a good time to ask: 10 years ago you wrote The Life You Can Save, which is now, I think, available for free online.
link |
00:59:51.000
That's right.
link |
00:59:52.000
You can download either the ebook or the audiobook free from thelifeyoucansave.org.
link |
00:59:58.000
And what are the key ideas that you present in the book?
link |
01:00:03.000
The main thing I want to do in the book is to make people realize that it's not difficult to help people in extreme poverty.
link |
01:00:13.000
That there are highly effective organizations now that are doing this that they've been independently assessed and verified by research teams that are expert in this area.
link |
01:00:25.000
And that it's a fulfilling thing to do for at least part of your life. You know, we can't all be saints, but at least one of your goals should be to really make a positive contribution to the world and to do something to help people who, through no fault of their own, are in very dire circumstances,
link |
01:00:44.000
living a life that is barely, or perhaps not at all, a decent life for a human being to live.
link |
01:00:52.000
So you describe a minimum ethical standard of giving. What advice would you give to people who want to be effectively altruistic in their life, like live an effective altruist life?
link |
01:01:09.000
There are many different kinds of ways of living as an effective altruist.
link |
01:01:14.000
And if you're at the point where you're thinking about your long term career, I'd recommend you take a look at a website called 80,000 Hours, 80000hours.org, which looks at ethical career choices.
link |
01:01:27.000
And they range from, for example, going to work on Wall Street so that you can earn a huge amount of money and then donate most of it to effective charities to going to work for a really good nonprofit organization so that you can directly use your
link |
01:01:43.000
skills and ability and hard work to further a good cause, or perhaps going into politics, where maybe the chances are small but the payoffs are big.
link |
01:01:55.000
Go to work in the public service where if you're talented you might rise to a high level where you can influence decisions.
link |
01:02:01.000
Do research in an area where the payoffs could be great.
link |
01:02:05.000
There are a lot of different opportunities but too few people are even thinking about those questions. They're just going along in some sort of preordained rut to particular careers.
link |
01:02:16.000
Maybe they think they'll earn a lot of money and have a comfortable life but they may not find that as fulfilling as actually knowing that they're making a positive difference to the world.
link |
01:02:25.000
What about in terms of, so that's like the long term, 80,000 hours; shorter term, giving part of your income, well, actually that's a part of that, the go to work on Wall Street option.
link |
01:02:37.000
The giving a percentage of your income that you talk about in The Life You Can Save, I mean, I was looking through it, it's quite compelling. I mean, I'm just a dumb engineer, so I like that there are simple rules.
link |
01:02:52.000
So I do actually set out suggested levels of giving because people often ask me about this.
link |
01:02:59.000
A popular answer is give 10%, the traditional tithe that's recommended in Christianity and also Judaism.
link |
01:03:08.000
But why should it be the same percentage irrespective of your income?
link |
01:03:13.000
Tax scales reflect the idea that the more income you have, the more tax you can pay, and I think the same is true of what you can give.
link |
01:03:23.000
So I do set out a progressive donor scale which starts at 1% for people on modest incomes and rises to 33 and a third percent for people who are really earning a lot.
link |
01:03:34.000
And my idea is that I don't think any of these amounts really impose real hardship on people, because they are progressive and geared to income.
link |
01:03:45.000
So I think anybody can do this and can know that they're doing something significant to play their part in reducing the huge gap between people in extreme poverty in the world and people living affluent lives.
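For readers who, like the host, prefer simple rules: a progressive donor scale of the kind Singer describes, starting at 1% for modest incomes and rising to 33 and a third percent at the top, works like marginal tax brackets, where each rate applies only to the income falling within its bracket. The sketch below is only an illustration; the bracket thresholds and intermediate rates are hypothetical placeholders, not the actual table from The Life You Can Save.

```python
def suggested_giving(income):
    """Suggested annual donation under a progressive scale.

    The brackets here are illustrative placeholders, not the
    actual figures from The Life You Can Save. As with progressive
    taxation, each rate applies only to the slice of income that
    falls within its bracket.
    """
    # (upper bound of bracket, rate applied within that bracket)
    brackets = [
        (100_000, 0.01),        # modest incomes: 1%
        (250_000, 0.05),        # hypothetical intermediate rate
        (500_000, 0.10),        # hypothetical intermediate rate
        (1_000_000, 0.20),      # hypothetical intermediate rate
        (float("inf"), 1 / 3),  # very high earners: 33 1/3%
    ]
    donation = 0.0
    lower = 0
    for upper, rate in brackets:
        if income <= lower:
            break
        # Only the slice of income inside this bracket is charged at `rate`.
        donation += (min(income, upper) - lower) * rate
        lower = upper
    return donation
```

For example, under these placeholder brackets an income of 200,000 would give 1,000 on the first 100,000 plus 5,000 on the next 100,000, so the marginal structure keeps the scheme gentle at the bottom while rising steeply at the top.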
link |
01:04:01.000
And aside from it being an ethical life, it's one they find more fulfilling, because there's something about our human nature, or some of our human natures, maybe most of our human nature, that enjoys doing the ethical thing.
link |
01:04:21.000
Yes I make both those arguments that it is an ethical requirement in the kind of world we live in today to help people in great need when we can easily do so.
link |
01:04:30.000
But also that it is a rewarding thing and there's good psychological research showing that people who give more tend to be more satisfied with their lives.
link |
01:04:40.000
And I think this has something to do with having a purpose that's larger than yourself, and therefore never being, if you like, never being bored, sitting around thinking, oh, you know, what will I do next, I've got nothing to do.
link |
01:04:53.000
In a world like this there are many good things that you can do and enjoy doing them.
link |
01:04:59.000
Plus you're working with other people in the effective altruism movement who are forming a community of other people with similar ideas and they tend to be interesting thoughtful and good people as well and having friends of that sort is another big contribution to having a good life.
link |
01:05:15.000
So we talked about
link |
01:05:18.000
big things that are beyond ourselves, but we're
link |
01:05:22.000
also just human and mortal. Do you ponder your own mortality?
link |
01:05:27.000
Are there insights about your philosophy, the ethics, that you gain from
link |
01:05:31.000
pondering your own mortality?
link |
01:05:35.000
Clearly you know as you get into your seventies you can't help thinking about your own mortality.
link |
01:05:42.000
But I don't know that I have great insights into that from my philosophy.
link |
01:05:46.000
I don't think there's anything after the death of my body assuming that we won't be able to upload my mind into anything at the time when I die.
link |
01:05:56.000
So I don't think there's any afterlife or anything to look forward to in that sense.
link |
01:06:00.000
Do you fear death? So if you look at Ernest Becker and
link |
01:06:04.000
his describing of the motivating aspects
link |
01:06:08.000
of our ability to be cognizant of our mortality,
link |
01:06:14.000
do you have any of those elements driving your motivation in life?
link |
01:06:20.000
I suppose the fact that you have only a limited time to achieve the things that you want to achieve gives you some sort of motivation to.
link |
01:06:28.000
get going and achieving them, and if we thought we were immortal, we might say, I can put that off for another decade or two.
link |
01:06:35.000
So there's that about it, but otherwise, you know, I'd rather have more time to do more. I'd also like to be able to see
link |
01:06:44.000
how things go that I'm interested in. Is climate change going to turn out to be as dire as a lot of scientists say it is going to be?
link |
01:06:53.000
Will we somehow scrape through with less damage than we thought? I'd really like to know the answers to those questions, but I guess I'm not going to.
link |
01:07:02.000
Well, you said there's nothing afterwards, so let me ask the even more absurd question: what do you think is the meaning of it all?
link |
01:07:10.000
I think the meaning of life is the meaning we give to it. I don't think that we were brought into the universe for any kind of larger purpose.
link |
01:07:21.000
But given that we exist I think we can recognize that some things are objectively bad.
link |
01:07:30.000
Extreme suffering is an example, and other things are objectively good, like having a rich, fulfilling, enjoyable, pleasurable life.
link |
01:07:40.000
And we can try to do our part in reducing the bad things and increasing the good things.
link |
01:07:47.000
So one way to put it: the meaning is to do a little bit more of the good things, the objectively good things, and a little bit less of the bad things.
link |
01:07:55.000
Yes, do as much of the good things as you can, and as little of the bad things. Beautifully put. I don't think there's a better place to end it. Thank you so much for talking today. Thanks very much, Lex, it's been really interesting talking to you.
link |
01:08:08.000
Thanks for listening to this conversation with Peter Singer, and thank you to our sponsors, Cash App and Masterclass.
link |
01:08:15.000
Please consider supporting the podcast by downloading Cash App and using code LEX Podcast, and signing up at masterclass.com slash LEX.
link |
01:08:25.000
Click the links, buy all the stuff. It's the best way to support this podcast and the journey I'm on, my research and startup.
link |
01:08:34.000
If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or connect with me on Twitter at Lex Fridman, spelled without the E, just F R I D M A N.
link |
01:08:48.000
And now, let me leave you with some words from Peter Singer.
link |
01:08:52.000
What one generation finds ridiculous, the next accepts,
link |
01:08:57.000
and the third shudders when it looks back on what the first did.
link |
01:09:02.000
Thank you for listening and hope to see you next time.