
Peter Singer: Suffering in Humans, Animals, and AI | Lex Fridman Podcast #107



link |
00:00:00.000
The following is a conversation with Peter Singer,
link |
00:00:03.440
professor of bioethics at Princeton University,
link |
00:00:06.200
best known for his 1975 book, Animal Liberation,
link |
00:00:10.280
that makes an ethical case against eating meat.
link |
00:00:14.240
He has written brilliantly from an ethical perspective
link |
00:00:17.680
on extreme poverty, euthanasia, human genetic selection,
link |
00:00:21.480
sports doping, the sale of kidneys,
link |
00:00:23.720
and generally happiness, including in his books,
link |
00:00:28.520
Ethics in the Real World, and The Life You Can Save.
link |
00:00:32.920
He was a key popularizer of the effective altruism movement
link |
00:00:36.320
and is generally considered one of the most influential
link |
00:00:39.240
philosophers in the world.
link |
00:00:42.200
Quick summary of the ads.
link |
00:00:43.760
Two sponsors, Cash App and Masterclass.
link |
00:00:47.080
Please consider supporting the podcast
link |
00:00:48.840
by downloading Cash App and using code LexPodcast
link |
00:00:52.200
and signing up at masterclass.com slash Lex.
link |
00:00:55.920
Click the links, buy the stuff.
link |
00:00:57.800
It really is the best way to support the podcast
link |
00:01:00.080
and the journey I'm on.
link |
00:01:02.400
As you may know, I primarily eat a ketogenic or carnivore diet,
link |
00:01:07.480
which means that most of my diet is made up of meat.
link |
00:01:10.320
I do not hunt the food I eat, though one day I hope to.
link |
00:01:15.280
I love fishing, for example.
link |
00:01:17.800
Fishing and eating the fish I catch
link |
00:01:19.680
has always felt much more honest than participating
link |
00:01:23.640
in the supply chain of factory farming.
link |
00:01:26.400
From an ethics perspective, this part of my life
link |
00:01:29.360
has always had a cloud over it.
link |
00:01:31.920
It makes me think.
link |
00:01:33.600
I've tried a few times in my life
link |
00:01:35.960
to reduce the amount of meat I eat.
link |
00:01:37.920
But for some reason, whatever the makeup of my body,
link |
00:01:41.240
whatever the way I practice the dieting I have,
link |
00:01:44.040
I get a lot of mental and physical energy
link |
00:01:48.040
and performance from eating meat.
link |
00:01:50.600
So both intellectually and physically,
link |
00:01:53.960
it's a continued journey for me.
link |
00:01:56.080
I return to Peter's work often to reevaluate the ethics
link |
00:02:00.320
of how I live this aspect of my life.
link |
00:02:03.360
Let me also say that you may be a vegan
link |
00:02:06.160
or you may be a meat eater and may be upset by the words I say
link |
00:02:09.840
or Peter says, but I ask for this podcast
link |
00:02:13.680
and other episodes of this podcast
link |
00:02:16.000
that you keep an open mind.
link |
00:02:18.240
I may and probably will talk with people you disagree with.
link |
00:02:21.640
Please try to really listen, especially
link |
00:02:25.360
to people you disagree with.
link |
00:02:27.400
And give me and the world the gift
link |
00:02:29.800
of being a participant in a patient, intelligent,
link |
00:02:33.080
and nuanced discourse.
link |
00:02:34.840
If your instinct and desire is to be a voice of mockery
link |
00:02:38.640
towards those you disagree with, please unsubscribe.
link |
00:02:42.520
My source of joy and inspiration here
link |
00:02:44.840
has been to be a part of a community that thinks deeply
link |
00:02:48.040
and speaks with empathy and compassion.
link |
00:02:51.000
That is what I hope to continue being a part of
link |
00:02:53.880
and I hope you join as well.
link |
00:02:56.200
If you enjoy this podcast, subscribe on YouTube,
link |
00:02:58.960
review it with five stars on Apple Podcast,
link |
00:03:01.360
follow on Spotify, support on Patreon,
link |
00:03:04.280
or connect with me on Twitter at Lex Fridman.
link |
00:03:07.920
As usual, I'll do a few minutes of ads now
link |
00:03:09.960
and never any ads in the middle
link |
00:03:11.320
that can break the flow of the conversation.
link |
00:03:14.040
This show is presented by Cash App,
link |
00:03:16.560
the number one finance app in the App Store.
link |
00:03:18.960
When you get it, use code LEXPODCAST.
link |
00:03:22.000
Cash App lets you send money to friends,
link |
00:03:24.280
buy Bitcoin, and invest in the stock market
link |
00:03:27.320
with as little as one dollar.
link |
00:03:29.520
Since Cash App allows you to buy Bitcoin,
link |
00:03:31.800
let me mention that cryptocurrency in the context
link |
00:03:34.600
of the history of money is fascinating.
link |
00:03:37.400
I recommend The Ascent of Money
link |
00:03:39.520
as a great book on this history.
link |
00:03:41.480
Debits and credits on ledgers
link |
00:03:43.160
started around 30,000 years ago.
link |
00:03:45.960
The US dollar was created over 200 years ago
link |
00:03:48.560
and the first decentralized cryptocurrency
link |
00:03:51.080
released just over 10 years ago.
link |
00:03:53.760
So given that history, cryptocurrency is still very much
link |
00:03:57.000
in its early days of development,
link |
00:03:58.720
but it's still aiming to and just might
link |
00:04:01.280
redefine the nature of money.
link |
00:04:04.320
So again, if you get Cash App from the App Store
link |
00:04:07.000
or Google Play and use the code LEXPODCAST,
link |
00:04:10.480
you get $10 and Cash App will also donate $10 to FIRST,
link |
00:04:14.920
an organization that is helping to advance
link |
00:04:16.720
robotics and STEM education for young people around the world.
link |
00:04:20.880
This show is sponsored by Masterclass.
link |
00:04:23.440
Sign up at masterclass.com slash LEX
link |
00:04:26.080
to get a discount and to support this podcast.
link |
00:04:29.640
When I first heard about Masterclass,
link |
00:04:31.320
I thought it was too good to be true.
link |
00:04:33.160
For $180 a year, you get an all access pass
link |
00:04:36.680
to watch courses from, to list some of my favorites,
link |
00:04:40.400
Chris Hadfield on space exploration,
link |
00:04:42.920
Neil deGrasse Tyson on scientific thinking and communication,
link |
00:04:46.200
Will Wright, creator of SimCity and The Sims, on game design.
link |
00:04:50.400
I promise I'll start streaming games at some point soon.
link |
00:04:53.880
Carlos Santana on guitar, Garry Kasparov on chess,
link |
00:04:57.520
Daniel Negreanu on poker and many more.
link |
00:05:01.600
Chris Hadfield explaining how rockets work
link |
00:05:04.240
and the experience of being launched into space alone
link |
00:05:07.280
is worth the money.
link |
00:05:08.720
By the way, you can watch it on basically any device.
link |
00:05:12.820
Once again, sign up at masterclass.com slash LEX
link |
00:05:16.600
to get a discount and to support this podcast.
link |
00:05:19.360
And now, here's my conversation with Peter Singer.
link |
00:05:25.080
When did you first become conscious of the fact
link |
00:05:27.640
that there is much suffering in the world?
link |
00:05:32.280
I think I was conscious of the fact
link |
00:05:33.760
that there's a lot of suffering in the world
link |
00:05:35.760
pretty much as soon as I was able to understand
link |
00:05:38.520
anything about my family and its background
link |
00:05:40.960
because I lost three of my four grandparents
link |
00:05:44.720
in the Holocaust and obviously I knew
link |
00:05:48.720
why I only had one grandparent
link |
00:05:52.160
and she herself had been in the camps and survived,
link |
00:05:54.560
so I think I knew a lot about that pretty early.
link |
00:05:58.120
My entire family comes from the Soviet Union.
link |
00:06:01.200
I was born in the Soviet Union.
link |
00:06:05.400
World War II has deep roots in the culture
link |
00:06:07.920
and the suffering that the war brought
link |
00:06:10.360
the millions of people who died is in the music,
link |
00:06:14.000
is in the literature, is in the culture.
link |
00:06:16.900
What do you think was the impact
link |
00:06:18.960
of the war broadly on our society?
link |
00:06:25.080
The war had many impacts.
link |
00:06:28.160
I think one of them, a beneficial impact,
link |
00:06:31.440
is that it showed what racism
link |
00:06:34.300
and authoritarian government can do
link |
00:06:37.960
and at least as far as the West was concerned,
link |
00:06:41.080
I think that meant that I grew up in an era
link |
00:06:43.200
in which there wasn't the kind of overt racism
link |
00:06:48.000
and antisemitism that had existed for my parents in Europe.
link |
00:06:52.160
I was growing up in Australia
link |
00:06:53.800
and certainly that was clearly seen
link |
00:06:57.560
as something completely unacceptable.
link |
00:07:00.560
There was also, though, a fear of a further outbreak of war
link |
00:07:05.800
which this time we expected would be nuclear
link |
00:07:08.920
because of the way the Second World War had ended,
link |
00:07:11.720
so there was this overshadowing of my childhood
link |
00:07:16.200
about the possibility that I would not live to grow up
link |
00:07:19.880
and be an adult because of a catastrophic nuclear war.
link |
00:07:25.620
The film On the Beach was made
link |
00:07:28.100
in which the city that I was living in,
link |
00:07:29.860
Melbourne, was the last place on Earth
link |
00:07:32.080
to have living human beings
link |
00:07:34.320
because of the nuclear cloud
link |
00:07:36.420
that was spreading from the North,
link |
00:07:38.120
so that certainly gave us a bit of that sense.
link |
00:07:42.840
There were many, there were clearly many other legacies
link |
00:07:45.400
that we got of the war as well
link |
00:07:47.560
and the whole setup of the world
link |
00:07:49.440
and the Cold War that followed.
link |
00:07:51.600
All of that has its roots in the Second World War.
link |
00:07:55.320
There is much beauty that comes from war.
link |
00:07:58.120
Sort of, I had a conversation with Eric Weinstein.
link |
00:08:01.400
He said everything is great about war
link |
00:08:03.960
except all the death and suffering.
link |
00:08:08.200
Do you think there's something positive
link |
00:08:11.060
that came from the war,
link |
00:08:13.640
the mirror that it put to our society,
link |
00:08:16.840
sort of the ripple effects on it, ethically speaking?
link |
00:08:20.320
Do you think there are positive aspects to war?
link |
00:08:24.540
I find it hard to see positive aspects in war
link |
00:08:27.540
and some of the things that other people think of
link |
00:08:30.440
as positive and beautiful, maybe I'm questioning.
link |
00:08:35.640
So there's a certain kind of patriotism.
link |
00:08:38.280
People say during wartime, we all pull together,
link |
00:08:41.040
we all work together against a common enemy
link |
00:08:44.080
and that's true.
link |
00:08:45.300
An outside enemy does unite a country
link |
00:08:47.380
and in general, it's good for countries to be united
link |
00:08:49.920
and have common purposes
link |
00:08:51.080
but it also engenders a kind of a nationalism
link |
00:08:55.360
and a patriotism that can't be questioned
link |
00:08:57.760
and that I'm more skeptical about.
link |
00:09:01.960
What about the brotherhood
link |
00:09:04.560
that people talk about from soldiers?
link |
00:09:08.240
The sort of counterintuitive, sad idea
link |
00:09:12.960
that the closest that people feel to each other
link |
00:09:16.240
is in those moments of suffering,
link |
00:09:17.880
of being at the sort of the edge
link |
00:09:20.360
of seeing your comrades dying in your arms.
link |
00:09:24.980
That somehow brings people extremely closely together.
link |
00:09:27.440
Suffering brings people closer together.
link |
00:09:29.600
How do you make sense of that?
link |
00:09:31.920
It may bring people close together
link |
00:09:33.520
but there are other ways of bonding
link |
00:09:36.440
and being close to people I think
link |
00:09:37.840
without the suffering and death that war entails.
link |
00:09:42.840
Perhaps you could see, you could already hear
link |
00:09:44.560
the romanticized Russian in me.
link |
00:09:48.160
We tend to romanticize suffering just a little bit
link |
00:09:50.280
in our literature and culture and so on.
link |
00:09:53.440
Could you take a step back
link |
00:09:54.880
and I apologize if it's a ridiculous question
link |
00:09:57.560
but what is suffering?
link |
00:09:59.640
If you would try to define what suffering is,
link |
00:10:03.760
how would you go about it?
link |
00:10:05.560
Suffering is a conscious state.
link |
00:10:09.640
There can be no suffering for a being
link |
00:10:11.360
who is completely unconscious
link |
00:10:14.520
and it's distinguished from other conscious states
link |
00:10:17.940
in terms of being one that, considered just in itself,
link |
00:10:22.940
we would rather be without.
link |
00:10:25.500
It's a conscious state that we want to stop
link |
00:10:27.500
if we're experiencing or we want to avoid having again
link |
00:10:31.780
if we've experienced it in the past.
link |
00:10:34.500
And that's, as I say, emphasized for its own sake
link |
00:10:37.400
because of course people will say,
link |
00:10:39.340
well, suffering strengthens the spirit.
link |
00:10:41.580
It has good consequences.
link |
00:10:44.340
And sometimes it does have those consequences
link |
00:10:47.100
and of course sometimes we might undergo suffering.
link |
00:10:50.780
We set ourselves a challenge to run a marathon
link |
00:10:53.700
or climb a mountain or even just to go to the dentist
link |
00:10:57.260
so that the toothache doesn't get worse
link |
00:10:59.100
even though we know the dentist is gonna hurt us
link |
00:11:00.900
to some extent.
link |
00:11:01.940
So I'm not saying that we never choose suffering
link |
00:11:04.520
but I am saying that other things being equal,
link |
00:11:07.260
we would rather not be in that state of consciousness.
link |
00:11:10.660
Is the ultimate goal sort of,
link |
00:11:12.380
you have the new 10 year anniversary release
link |
00:11:15.820
of the Life You Can Save book, really influential book.
link |
00:11:18.860
We'll talk about it a bunch of times
link |
00:11:20.700
throughout this conversation
link |
00:11:21.780
but do you think it's possible
link |
00:11:25.340
to eradicate suffering or is that the goal
link |
00:11:29.820
or do we want to achieve a kind of minimum threshold
link |
00:11:36.820
of suffering and then keep a little drop of poison
link |
00:11:43.860
to keep things interesting in the world?
link |
00:11:46.160
In practice, I don't think we ever will eliminate suffering
link |
00:11:50.120
so I think that little drop of poison as you put it
link |
00:11:53.000
or if you like the contrasting dash of an unpleasant color
link |
00:11:58.680
perhaps something like that
link |
00:11:59.680
in an otherwise harmonious and beautiful composition,
link |
00:12:04.040
that is gonna always be there.
link |
00:12:07.240
If you ask me whether in theory
link |
00:12:09.140
if we could get rid of it, we should,
link |
00:12:12.640
I think the answer is whether in fact
link |
00:12:14.680
we would be better off
link |
00:12:17.760
or whether, by eliminating the suffering,
link |
00:12:20.240
we would also eliminate some of the highs,
link |
00:12:22.520
the positive highs and if that's so
link |
00:12:24.880
then we might be prepared to say
link |
00:12:27.360
it's worth having a minimum of suffering
link |
00:12:30.600
in order to have the best possible experiences as well.
link |
00:12:34.560
Is there a relative aspect to suffering?
link |
00:12:37.680
So when you talk about eradicating poverty in the world,
link |
00:12:42.680
is this the more you succeed,
link |
00:12:44.920
the more the bar of what defines poverty rises
link |
00:12:47.760
or is there at the basic human ethical level
link |
00:12:51.360
a bar that's absolute that once you get above it
link |
00:12:55.000
then we can morally converge
link |
00:13:00.000
to feeling like we have eradicated poverty?
link |
00:13:04.280
I think it's both, and I think this is true for poverty
link |
00:13:08.160
as well as suffering.
link |
00:13:09.000
There's an objective level of suffering or of poverty
link |
00:13:14.280
where we're talking about objective indicators
link |
00:13:17.720
like you're constantly hungry,
link |
00:13:22.360
you can't get enough food,
link |
00:13:24.000
you're constantly cold, you can't get warm,
link |
00:13:28.600
you have some physical pains that you're never rid of.
link |
00:13:32.600
I think those things are objective
link |
00:13:35.080
but it may also be true that
link |
00:13:38.520
if you do get rid of that and you get to the stage
link |
00:13:40.840
where all of those basic needs have been met,
link |
00:13:45.280
there may still be then new forms of suffering that develop
link |
00:13:48.760
and perhaps that's what we're seeing
link |
00:13:50.400
in the affluent societies we have
link |
00:13:52.720
that people get bored for example,
link |
00:13:55.680
they don't need to spend so many hours a day earning money
link |
00:13:58.920
to get enough to eat and shelter.
link |
00:14:01.400
So now they're bored, they lack a sense of purpose.
link |
00:14:05.120
That can happen.
link |
00:14:06.360
And that then is a kind of a relative suffering
link |
00:14:10.440
that is distinct from the objective forms of suffering.
link |
00:14:14.320
But in your focus on eradicating suffering,
link |
00:14:17.520
you don't think about that kind of,
link |
00:14:19.960
the kind of interesting challenges and suffering
link |
00:14:22.520
that emerges in affluent societies,
link |
00:14:24.400
that's just not, in your ethical philosophical brain,
link |
00:14:28.800
is that of interest at all?
link |
00:14:31.240
It would be of interest to me if we had eliminated
link |
00:14:34.120
all of the objective forms of suffering,
link |
00:14:36.480
which I think of as generally more severe
link |
00:14:40.240
and also perhaps easier at this stage anyway
link |
00:14:43.200
to know how to eliminate.
link |
00:14:45.000
So yes, in some future state when we've eliminated
link |
00:14:49.160
those objective forms of suffering,
link |
00:14:50.560
I would be interested in trying to eliminate
link |
00:14:53.080
the relative forms as well.
link |
00:14:55.920
But that's not a practical need for me at the moment.
link |
00:14:59.920
Sorry to linger on it because you kind of said it,
link |
00:15:02.400
but just is elimination the goal for the affluent society?
link |
00:15:07.640
So is there, do you see suffering as a creative force?
link |
00:15:14.400
Suffering can be a creative force.
link |
00:15:17.120
I think repeating what I said about the highs
link |
00:15:20.560
and whether we need some of the lows
link |
00:15:22.240
to experience the highs.
link |
00:15:24.120
So it may be that suffering makes us more creative
link |
00:15:26.560
and we regard that as worthwhile.
link |
00:15:29.840
Maybe that brings some of those highs with it
link |
00:15:32.920
that we would not have had if we'd had no suffering.
link |
00:15:36.680
I don't really know.
link |
00:15:37.720
Many people have suggested that
link |
00:15:39.520
and I certainly have no basis for denying it.
link |
00:15:44.840
And if it's true, then I would not want
link |
00:15:47.800
to eliminate suffering completely.
link |
00:15:50.920
But the focus is on the absolute,
link |
00:15:54.000
not to be cold, not to be hungry.
link |
00:15:56.840
Yes, that's at the present stage
link |
00:15:59.800
of where the world's population is, that's the focus.
link |
00:16:03.920
Talking about human nature for a second,
link |
00:16:06.360
do you think people are inherently good
link |
00:16:08.440
or do we all have good and evil in us
link |
00:16:11.000
that basically everyone is capable of evil
link |
00:16:14.880
based on the environment?
link |
00:16:17.400
Certainly most of us have potential for both good and evil.
link |
00:16:21.480
I'm not prepared to say that everyone is capable of evil.
link |
00:16:24.280
Maybe there are some people who even in the worst of circumstances
link |
00:16:27.160
would not be capable of it,
link |
00:16:28.880
but most of us are very susceptible
link |
00:16:32.400
to environmental influences.
link |
00:16:34.520
So when we look at things
link |
00:16:36.520
that we were talking about previously,
link |
00:16:37.880
let's say what the Nazis did during the Holocaust,
link |
00:16:43.640
I think it's quite difficult to say,
link |
00:16:46.600
I know that I would not have done those things
link |
00:16:50.200
even if I were in the same circumstances
link |
00:16:52.640
as those who did them.
link |
00:16:54.480
Even if let's say I had grown up under the Nazi regime
link |
00:16:58.280
and had been indoctrinated with racist ideas,
link |
00:17:02.480
had also had the idea that I must obey orders,
link |
00:17:07.160
follow the commands of the Fuhrer,
link |
00:17:11.040
plus of course perhaps the threat
link |
00:17:12.480
that if I didn't do certain things,
link |
00:17:14.520
I might get sent to the Russian front
link |
00:17:16.560
and that would be a pretty grim fate.
link |
00:17:19.200
I think it's really hard for anybody to say,
link |
00:17:22.720
nevertheless, I know I would not have killed those Jews
link |
00:17:26.720
or whatever else it was that they were.
link |
00:17:28.440
Well, what's your intuition?
link |
00:17:29.400
How many people will be able to say that?
link |
00:17:32.440
Truly to be able to say it,
link |
00:17:34.920
I think very few, less than 10%.
link |
00:17:37.680
To me, it seems a very interesting
link |
00:17:39.680
and powerful thing to meditate on.
link |
00:17:42.080
So I've read a lot about the war, World War II,
link |
00:17:45.800
and I can't escape the thought
link |
00:17:47.880
that I would have not been one of the 10%.
link |
00:17:51.640
Right, I have to say, I simply don't know.
link |
00:17:55.440
I would like to hope that I would have been one of the 10%,
link |
00:17:59.000
but I don't really have any basis
link |
00:18:00.920
for claiming that I would have been different
link |
00:18:04.280
from the majority.
link |
00:18:06.160
Is it a worthwhile thing to contemplate?
link |
00:18:09.520
It would be interesting if we could find a way
link |
00:18:11.360
of really finding these answers.
link |
00:18:13.920
There obviously is quite a bit of research
link |
00:18:16.600
on people during the Holocaust,
link |
00:18:19.760
on how ordinary Germans got led to do terrible things,
link |
00:18:24.840
and there are also studies of the resistance,
link |
00:18:28.160
some heroic people in the White Rose group, for example,
link |
00:18:32.400
who resisted even though they knew
link |
00:18:34.720
they were likely to die for it.
link |
00:18:37.960
But I don't know whether these studies
link |
00:18:40.080
really can answer your larger question
link |
00:18:43.160
of how many people would have been capable of doing that.
link |
00:18:47.720
Well, sort of the reason I think it's interesting
link |
00:18:50.360
is in the world, as you described,
link |
00:18:55.120
when there are things that you'd like to do that are good,
link |
00:18:59.920
that are objectively good,
link |
00:19:02.280
it's useful to think about whether
link |
00:19:04.800
I'm not willing to do something,
link |
00:19:06.720
or I'm not willing to acknowledge something
link |
00:19:09.000
as good and the right thing to do
link |
00:19:10.760
because I'm simply scared of putting my life,
link |
00:19:15.920
of damaging my life in some kind of way.
link |
00:19:18.920
And that kind of thought exercise is helpful
link |
00:19:20.720
to understand what is the right thing
link |
00:19:23.400
in my current skill set and the capacity to do.
link |
00:19:27.400
Sort of there's things that are convenient,
link |
00:19:30.000
and I wonder if there are things
link |
00:19:31.920
that are highly inconvenient,
link |
00:19:33.640
where I would have to experience derision,
link |
00:19:35.560
or hatred, or death, or all those kinds of things,
link |
00:19:39.640
but it's truly the right thing to do.
link |
00:19:41.200
And that kind of balance is,
link |
00:19:43.800
I feel like in America, we don't have,
link |
00:19:46.560
it's difficult to think in the current times,
link |
00:19:50.000
it seems easier to put yourself back in history,
link |
00:19:53.360
where you can sort of objectively contemplate
link |
00:19:56.280
whether, how willing you are to do the right thing
link |
00:19:59.880
when the cost is high.
link |
00:20:03.000
True, but I think we do face those challenges today,
link |
00:20:06.080
and I think we can still ask ourselves those questions.
link |
00:20:09.960
So one stand that I took more than 40 years ago now
link |
00:20:13.480
was to stop eating meat, become a vegetarian at a time
link |
00:20:17.520
when you hardly met anybody who was a vegetarian,
link |
00:20:21.360
or if you did, they might've been a Hindu,
link |
00:20:23.760
or they might've had some weird theories
link |
00:20:27.560
about meat and health.
link |
00:20:30.160
And I know thinking about making that decision,
link |
00:20:33.240
I was convinced that it was the right thing to do,
link |
00:20:35.280
but I still did have to think,
link |
00:20:37.240
are all my friends gonna think that I'm a crank
link |
00:20:40.080
because I'm now refusing to eat meat?
link |
00:20:43.960
So I'm not saying there were any terrible sanctions,
link |
00:20:47.760
obviously, but I thought about that,
link |
00:20:50.000
and I guess I decided,
link |
00:20:51.600
well, I still think this is the right thing to do,
link |
00:20:54.080
and I'll put up with that if it happens.
link |
00:20:56.320
And one or two friends were clearly uncomfortable
link |
00:20:59.080
with that decision, but that was pretty minor
link |
00:21:03.480
compared to the historical examples
link |
00:21:05.840
that we've been talking about.
link |
00:21:08.040
But other issues that we have around too,
link |
00:21:09.840
like global poverty and what we ought to be doing about that
link |
00:21:13.800
is another question where people, I think,
link |
00:21:16.880
can have the opportunity to take a stand
link |
00:21:19.080
on what's the right thing to do now.
link |
00:21:21.040
Climate change would be a third question
link |
00:21:23.200
where, again, people are taking a stand.
link |
00:21:25.680
I can look at Greta Thunberg there and say,
link |
00:21:29.120
well, I think it must've taken a lot of courage
link |
00:21:32.360
for a schoolgirl to say,
link |
00:21:35.240
I'm gonna go on strike about climate change
link |
00:21:37.160
and see what happens.
link |
00:21:41.200
Yeah, especially in this divisive world,
link |
00:21:42.960
she gets exceptionally huge amounts of support
link |
00:21:45.560
and hatred, both.
link |
00:21:47.400
That's right.
link |
00:21:48.240
Which is very difficult for a teenager to operate in.
link |
00:21:53.920
In your book, Ethics in the Real World,
link |
00:21:56.080
amazing book, people should check it out.
link |
00:21:57.880
Very easy read.
link |
00:21:59.600
82 brief essays on things that matter.
link |
00:22:02.800
One of the essays asks, should robots have rights?
link |
00:22:06.920
You've written about this,
link |
00:22:07.920
so let me ask, should robots have rights?
link |
00:22:11.560
If we ever develop robots capable of consciousness,
link |
00:22:17.080
capable of having their own internal perspective
link |
00:22:20.520
on what's happening to them
link |
00:22:22.080
so that their lives can go well or badly for them,
link |
00:22:25.600
then robots should have rights.
link |
00:22:27.720
Until that happens, they shouldn't.
link |
00:22:31.000
So is consciousness essentially a prerequisite to suffering?
link |
00:22:36.160
So everything that possesses consciousness
link |
00:22:41.520
is capable of suffering, put another way.
link |
00:22:43.920
And if so, what is consciousness?
link |
00:22:48.440
I certainly think that consciousness
link |
00:22:51.320
is a prerequisite for suffering.
link |
00:22:53.080
You can't suffer if you're not conscious.
link |
00:22:58.040
But is it true that every being that is conscious
link |
00:23:02.160
will suffer or has to be capable of suffering?
link |
00:23:05.400
I suppose you could imagine a kind of consciousness,
link |
00:23:08.200
especially if we can construct it artificially,
link |
00:23:10.920
that's capable of experiencing pleasure
link |
00:23:13.840
but just automatically cuts out the consciousness
link |
00:23:16.720
when they're suffering.
link |
00:23:18.240
So it's like an instant anesthesia
link |
00:23:20.400
as soon as something is gonna cause you suffering.
link |
00:23:22.520
So that's possible.
link |
00:23:25.120
But that doesn't exist, as far as we know, on this planet yet.
link |
00:23:31.280
You asked what is consciousness.
link |
00:23:34.680
Philosophers often talk about it
link |
00:23:36.440
as there being a subject of experiences.
link |
00:23:39.520
So you and I and everybody listening to this
link |
00:23:42.920
is a subject of experience.
link |
00:23:44.680
There is a conscious subject who is taking things in,
link |
00:23:48.640
responding to it in various ways,
link |
00:23:51.320
feeling good about it, feeling bad about it.
link |
00:23:54.720
And that's different from the kinds
link |
00:23:57.400
of artificial intelligence we have now.
link |
00:24:00.600
I take out my phone.
link |
00:24:03.000
I ask Google directions to where I'm going.
link |
00:24:06.840
Google gives me the directions
link |
00:24:08.680
and I choose to take a different way.
link |
00:24:10.840
Google doesn't care.
link |
00:24:11.840
It's not like I'm offending Google or anything like that.
link |
00:24:14.080
There is no subject of experiences there.
link |
00:24:16.520
And I think that's the indication
link |
00:24:19.360
that Google AI we have now is not conscious
link |
00:24:24.480
or at least that level of AI is not conscious.
link |
00:24:27.560
And that's the way to think about it.
link |
00:24:28.880
Now, it may be difficult to tell, of course,
link |
00:24:31.040
whether a certain AI is or isn't conscious.
link |
00:24:34.080
It may mimic consciousness
link |
00:24:35.280
and we can't tell if it's only mimicking it
link |
00:24:37.360
or if it's the real thing.
link |
00:24:39.120
But that's what we're looking for.
link |
00:24:40.600
Is there a subject of experience,
link |
00:24:43.480
a perspective on the world from which things can go well
link |
00:24:47.080
or badly from that perspective?
link |
00:24:50.160
So our idea of what suffering looks like
link |
00:24:54.200
comes from just watching ourselves when we're in pain.
link |
00:25:01.200
Or when we're experiencing pleasure, it's not only.
link |
00:25:03.360
Pleasure and pain.
link |
00:25:04.600
Yes, so and then you could actually,
link |
00:25:07.880
you could push back on us, but I would say
link |
00:25:09.400
that's how we kind of build an intuition about animals
link |
00:25:14.280
is we can infer the similarities between humans and animals
link |
00:25:18.520
and so infer that they're suffering or not
link |
00:25:21.000
based on certain things and they're conscious or not.
link |
00:25:24.320
So what if robots, you mentioned Google Maps
link |
00:25:31.040
and I've done this experiment.
link |
00:25:32.520
So I work in robotics just for my own self
link |
00:25:35.080
or I have several Roomba robots
link |
00:25:37.640
and I play with different speech interaction,
link |
00:25:40.960
voice based interaction.
link |
00:25:42.160
And if the Roomba or the robot or Google Maps
link |
00:25:47.120
shows any signs of pain, like screaming or moaning
link |
00:25:50.360
or being displeased by something you've done,
link |
00:25:54.240
that in my mind, I can't help but immediately upgrade it.
link |
00:25:59.440
And even when I myself programmed it in,
link |
00:26:02.520
just having another entity that's now for the moment
link |
00:26:06.040
disjoint from me showing signs of pain
link |
00:26:09.080
makes me feel like it is conscious.
link |
00:26:11.120
Like I immediately, then the whatever,
link |
00:26:15.440
I immediately realize that it's not obviously,
link |
00:26:17.800
but that feeling is there.
link |
00:26:19.640
So sort of, I guess, what do you think about a world
link |
00:26:26.400
where Google Maps and Roombas are pretending to be conscious
link |
00:26:32.080
and we descendants of apes are not smart enough
link |
00:26:35.360
to realize they're not, or whatever, that it is conscious,
link |
00:26:39.080
they appear to be conscious.
link |
00:26:40.720
And so you then have to give them rights.
link |
00:26:44.000
The reason I'm asking that is that kind of capability
link |
00:26:47.120
may be closer than we realize.
link |
00:26:52.280
Yes, that kind of capability may be closer,
link |
00:26:58.400
but I don't think it follows
link |
00:26:59.720
that we have to give them rights.
link |
00:27:00.920
I suppose the argument for saying that in those circumstances
link |
00:27:05.400
we should give them rights is that if we don't,
link |
00:27:07.800
we'll harden ourselves against other beings
link |
00:27:11.920
who are not robots and who really do suffer.
link |
00:27:15.200
That's a possibility that, you know,
link |
00:27:17.880
if we get used to looking at a being suffering
link |
00:27:20.880
and saying, yeah, we don't have to do anything about that,
link |
00:27:23.440
that being doesn't have any rights,
link |
00:27:25.000
maybe we'll feel the same about animals, for instance.
link |
00:27:29.240
And interestingly, among philosophers and thinkers
link |
00:27:34.240
who denied that we have any direct duties to animals,
link |
00:27:39.720
and this includes people like Thomas Aquinas
link |
00:27:41.840
and Immanuel Kant, they did say, yes,
link |
00:27:46.640
but still it's better not to be cruel to them,
link |
00:27:48.960
not because of the suffering we're inflicting
link |
00:27:50.880
on the animals, but because if we are,
link |
00:27:54.280
we may develop a cruel disposition
link |
00:27:56.440
and this will be bad for humans, you know,
link |
00:28:00.000
because we're more likely to be cruel to other humans
link |
00:28:02.080
and that would be wrong.
link |
00:28:03.760
So.
link |
00:28:06.080
But you don't accept that kind of.
link |
00:28:07.760
I don't accept that as the basis of the argument
link |
00:28:10.160
for why we shouldn't be cruel to animals.
link |
00:28:11.600
I think the basis of the argument
link |
00:28:12.680
for why we shouldn't be cruel to animals
link |
00:28:14.000
is just that we're inflicting suffering on them
link |
00:28:16.440
and the suffering is a bad thing.
link |
00:28:19.160
But possibly I might accept some sort of parallel
link |
00:28:23.000
of that argument as a reason why you shouldn't be cruel
link |
00:28:26.040
to these robots that mimic the symptoms of pain
link |
00:28:30.880
if it's gonna be harder for us to distinguish.
link |
00:28:33.520
I would venture to say, I'd like to disagree with you
link |
00:28:36.760
and with most people, I think,
link |
00:28:39.680
at the risk of sounding crazy,
link |
00:28:42.240
I would like to say that if that Roomba is dedicated
link |
00:28:47.840
to faking the consciousness and the suffering,
link |
00:28:50.840
I think it will be impossible for us.
link |
00:28:55.920
I would like to apply the same argument
link |
00:28:58.440
as with animals to robots,
link |
00:29:00.480
that they deserve rights in that sense.
link |
00:29:02.880
Now we might outlaw the addition
link |
00:29:05.880
of those kinds of features into Roombas,
link |
00:29:07.600
but once you do, I think I'm quite surprised
link |
00:29:13.000
by the upgrade in consciousness
link |
00:29:16.800
that the display of suffering creates.
link |
00:29:20.640
It's a totally open world,
link |
00:29:22.360
but I'd just like to say, sort of, the difference
link |
00:29:25.600
between animals and other humans is that in the robot case,
link |
00:29:29.480
we've added it in ourselves.
link |
00:29:32.440
Therefore, we can say something about how real it is.
link |
00:29:37.560
But I would like to say that the display of it
link |
00:29:40.160
is what makes it real.
link |
00:29:41.920
And I'm not a philosopher, I'm not making that argument,
link |
00:29:45.560
but I'd at least like to add that as a possibility.
link |
00:29:49.080
And I've been surprised by it
link |
00:29:50.920
is all I'm trying to sort of articulate poorly, I suppose.
link |
00:29:55.160
So there is a philosophical view
link |
00:29:59.080
that has been held about humans,
link |
00:30:00.760
which is rather like what you're talking about,
link |
00:30:02.480
and that's behaviorism.
link |
00:30:04.760
So behaviorism was employed both in psychology,
link |
00:30:07.480
people like BF Skinner was a famous behaviorist,
link |
00:30:10.240
but in psychology, it was more a kind of a,
link |
00:30:14.760
what is it that makes this a science?
link |
00:30:16.360
Well, you need to have behavior
link |
00:30:17.480
because that's what you can observe,
link |
00:30:18.680
you can't observe consciousness.
link |
00:30:21.200
But in philosophy, the view just defended
link |
00:30:23.440
by people like Gilbert Ryle,
link |
00:30:24.800
who was a professor of philosophy at Oxford,
link |
00:30:26.440
wrote a book called The Concept of Mind,
link |
00:30:28.480
in which in this kind of phase,
link |
00:30:32.000
this is in the 40s of linguistic philosophy,
link |
00:30:35.280
he said, well, the meaning of a term is its use,
link |
00:30:38.920
and we use terms like so and so is in pain
link |
00:30:42.440
when we see somebody writhing or screaming
link |
00:30:44.840
or trying to escape some stimulus,
link |
00:30:47.080
and that's the meaning of the term.
link |
00:30:48.400
So that's what it is to be in pain,
link |
00:30:50.440
and you point to the behavior.
link |
00:30:54.720
And Norman Malcolm, who was another philosopher
link |
00:30:58.400
in that school, from Cornell, had the view that,
link |
00:31:02.920
so what is it to dream?
link |
00:31:04.600
After all, we can't see other people's dreams.
link |
00:31:07.960
Well, when people wake up and say,
link |
00:31:10.880
I've just had a dream of, here I was,
link |
00:31:14.080
undressed, walking down the main street
link |
00:31:15.720
or whatever it is you've dreamt,
link |
00:31:17.760
that's what it is to have a dream.
link |
00:31:19.040
It's basically to wake up and recall something.
link |
00:31:22.720
So you could apply this to what you're talking about
link |
00:31:25.640
and say, so what it is to be in pain
link |
00:31:28.480
is to exhibit these symptoms of pain behavior,
link |
00:31:31.040
and therefore, these robots are in pain.
link |
00:31:34.920
That's what the word means.
link |
00:31:36.840
But nowadays, not many people think
link |
00:31:38.520
that Ryle's kind of philosophical behaviorism
link |
00:31:40.880
is really very plausible,
link |
00:31:42.320
so I think they would say the same about your view.
link |
00:31:45.080
So, yes, I just spoke with Noam Chomsky,
link |
00:31:48.600
who basically was part of dismantling
link |
00:31:52.760
the behaviorist movement.
link |
00:31:54.800
But, and I'm with that 100% for studying human behavior,
link |
00:32:00.600
but I am one of the few people in the world
link |
00:32:04.080
who has made Roombas scream in pain.
link |
00:32:09.480
And I just don't know what to do
link |
00:32:12.200
with that empirical evidence,
link |
00:32:14.520
because it's hard, sort of philosophically, I agree.
link |
00:32:19.760
But the only reason I philosophically agree in that case
link |
00:32:23.240
is because I was the programmer.
link |
00:32:25.040
But if somebody else was a programmer,
link |
00:32:26.760
I'm not sure I would be able to interpret that well.
link |
00:32:29.280
So I think it's a new world
link |
00:32:34.320
that I was just curious what your thoughts are.
link |
00:32:37.480
For now, you feel that the display
link |
00:32:42.280
of what we can kind of intellectually say
link |
00:32:46.400
is a fake display of suffering is not suffering.
link |
00:32:50.120
That's right, that would be my view.
link |
00:32:53.240
But that's consistent, of course,
link |
00:32:54.480
with the idea that it's part of our nature
link |
00:32:56.920
to respond to this display
link |
00:32:58.680
if it's reasonably authentically done.
link |
00:33:02.600
And therefore it's understandable
link |
00:33:04.800
that people would feel this,
link |
00:33:06.240
and maybe, as I said, it's even a good thing
link |
00:33:09.880
that they do feel it,
link |
00:33:10.720
and you wouldn't want to harden yourself against it
link |
00:33:12.640
because then you might harden yourself
link |
00:33:14.440
against beings who are really suffering.
link |
00:33:17.240
But there's this line, so you said,
link |
00:33:20.160
once an artificial general intelligence system,
link |
00:33:22.880
a human level intelligence system, becomes conscious,
link |
00:33:25.760
I guess if I could just linger on it,
link |
00:33:28.480
now, I've written really dumb programs
link |
00:33:30.720
that just say things that I told them to say,
link |
00:33:33.760
but how do you know when a system like Alexa,
link |
00:33:38.320
which is sufficiently complex
link |
00:33:39.720
that you can't introspect to how it works,
link |
00:33:42.040
starts giving you signs of consciousness
link |
00:33:46.200
through natural language?
link |
00:33:48.000
That there's a feeling,
link |
00:33:49.800
there's another entity there that's self aware,
link |
00:33:52.560
that has a fear of death, a mortality,
link |
00:33:55.080
that has awareness of itself
link |
00:33:57.840
that we kind of associate with other living creatures.
link |
00:34:03.160
I guess I'm sort of trying to do the slippery slope
link |
00:34:05.680
from the very naive thing where I started
link |
00:34:07.880
into something where it's sufficiently a black box
link |
00:34:12.120
to where it's starting to feel like it's conscious.
link |
00:34:16.120
Where's that threshold
link |
00:34:17.960
where you would start getting uncomfortable
link |
00:34:20.240
with the idea of robot suffering, do you think?
link |
00:34:25.080
I don't know enough about the programming
link |
00:34:27.640
that's going into this, really, to answer this question.
link |
00:34:31.600
But I presume that somebody who does know more about this
link |
00:34:34.880
could look at the program
link |
00:34:37.360
and see whether we can explain the behaviors
link |
00:34:41.480
in a parsimonious way that doesn't require us
link |
00:34:45.360
to suggest that some sort of consciousness has emerged.
link |
00:34:50.080
Or alternatively, whether you're in a situation
link |
00:34:52.400
where you say, I don't know how this is happening,
link |
00:34:56.280
the program does generate a kind of artificial
link |
00:35:00.160
general intelligence which is autonomous,
link |
00:35:04.200
starts to do things itself and is autonomous
link |
00:35:06.360
of the basic programming that set it up.
link |
00:35:10.400
And so it's quite possible that actually
link |
00:35:13.400
we have achieved consciousness
link |
00:35:15.800
in a system of artificial intelligence.
link |
00:35:18.600
Sort of the approach that I work with,
link |
00:35:20.640
most of the community is really excited about now
link |
00:35:22.680
is with learning methods, so machine learning.
link |
00:35:26.000
And the learning methods unfortunately
link |
00:35:27.960
are not capable of revealing how they work,
link |
00:35:31.440
which is why somebody like Noam Chomsky criticizes them.
link |
00:35:34.120
You create powerful systems that are able
link |
00:35:36.080
to do certain things without understanding
link |
00:35:38.240
the theory, the physics, the science of how it works.
link |
00:35:42.160
And so it's possible if those are the kinds
link |
00:35:44.840
of methods that succeed, we won't be able
link |
00:35:46.760
to know exactly, sort of try to reduce,
link |
00:35:53.000
try to find whether this thing is conscious or not,
link |
00:35:56.200
this thing is intelligent or not.
link |
00:35:58.120
It's simply that, when we talk to it,
link |
00:36:01.760
it displays wit and humor and cleverness
link |
00:36:05.800
and emotion and fear, and then we won't be able
link |
00:36:10.200
to say where in the billions of nodes,
link |
00:36:13.920
neurons in this artificial neural network
link |
00:36:16.400
is the fear coming from.
link |
00:36:20.020
So in that case, that's a really interesting place
link |
00:36:22.440
where we do now start to return to behaviorism and say.
link |
00:36:28.480
Yeah, that is an interesting issue.
link |
00:36:33.860
I would say that if we have serious doubts
link |
00:36:36.960
and think it might be conscious,
link |
00:36:39.440
then we ought to try to give it the benefit
link |
00:36:41.840
of the doubt, just as I would say with animals.
link |
00:36:45.360
I think we can be highly confident
link |
00:36:46.880
that vertebrates are conscious,
link |
00:36:50.460
but when we get down, and some invertebrates
link |
00:36:53.480
like the octopus, but with insects,
link |
00:36:56.920
it's much harder to be confident of that.
link |
00:37:01.480
I think we should give them the benefit
link |
00:37:02.760
of the doubt where we can, which means,
link |
00:37:06.300
I think it would be wrong to torture an insect,
link |
00:37:09.000
but it doesn't necessarily mean it's wrong
link |
00:37:11.800
to slap a mosquito that's about to bite you
link |
00:37:14.800
and stop you getting to sleep.
link |
00:37:16.300
So I think you try to achieve some balance
link |
00:37:20.100
in these circumstances of uncertainty.
link |
00:37:22.960
If it's okay with you, if we can go back just briefly.
link |
00:37:26.440
So 44 years ago, like you mentioned, 40 plus years ago,
link |
00:37:29.640
you've written Animal Liberation,
link |
00:37:31.200
the classic book that started,
link |
00:37:33.560
that launched, that was the foundation
link |
00:37:36.440
of the movement of Animal Liberation.
link |
00:37:40.640
Can you summarize the key set of ideas
link |
00:37:42.440
that underpin that book?
link |
00:37:44.360
Certainly, the key idea that underlies that book
link |
00:37:49.000
is the concept of speciesism,
link |
00:37:52.200
though I did not invent that term.
link |
00:37:54.760
I took it from a man called Richard Ryder,
link |
00:37:56.720
who was in Oxford when I was,
link |
00:37:58.600
and I saw a pamphlet that he'd written
link |
00:38:00.240
about experiments on chimpanzees that used that term.
link |
00:38:05.240
But I think I contributed
link |
00:38:06.240
to making it philosophically more precise
link |
00:38:08.800
and to getting it into a broader audience.
link |
00:38:12.040
And the idea is that we have a bias or a prejudice
link |
00:38:16.760
against taking seriously the interests of beings
link |
00:38:20.400
who are not members of our species.
link |
00:38:23.440
Just as in the past, Europeans, for example,
link |
00:38:26.920
had a bias against taking seriously
link |
00:38:28.600
the interests of Africans, racism.
link |
00:38:31.600
And men have had a bias against taking seriously
link |
00:38:34.080
the interests of women, sexism.
link |
00:38:37.320
So I think something analogous, not completely identical,
link |
00:38:41.320
but something analogous goes on
link |
00:38:44.320
and has gone on for a very long time
link |
00:38:46.640
with the way humans see themselves vis a vis animals.
link |
00:38:50.440
We see ourselves as more important.
link |
00:38:55.000
We see animals as existing to serve our needs
link |
00:38:58.320
in various ways.
link |
00:38:59.380
And you're gonna find this very explicit
link |
00:39:00.760
in earlier philosophers from Aristotle
link |
00:39:04.480
through to Kant and others.
link |
00:39:07.080
And either we don't need to take their interests
link |
00:39:12.040
into account at all,
link |
00:39:14.080
or we can discount it because they're not humans.
link |
00:39:17.800
They count a little bit,
link |
00:39:18.800
but they don't count nearly as much as humans do.
link |
00:39:22.840
My book argues that that attitude is responsible
link |
00:39:25.760
for a lot of the things that we do to animals
link |
00:39:29.360
that are wrong, confining them indoors
link |
00:39:32.120
in very crowded, cramped conditions in factory farms
link |
00:39:36.260
to produce meat or eggs or milk more cheaply,
link |
00:39:39.720
using them in some research that's by no means essential
link |
00:39:44.000
for survival or wellbeing, and a whole lot,
link |
00:39:48.320
some of the sports and things that we do to animals.
link |
00:39:52.460
So I think that's unjustified
link |
00:39:55.000
because I think the significance of pain and suffering
link |
00:40:01.280
does not depend on the species of the being
link |
00:40:03.520
who is in pain or suffering
link |
00:40:04.880
any more than it depends on the race or sex of the being
link |
00:40:08.200
who is in pain or suffering.
link |
00:40:11.000
And I think we ought to rethink our treatment of animals
link |
00:40:14.760
along the lines of saying,
link |
00:40:16.800
if the pain is just as great in an animal,
link |
00:40:19.040
then it's just as bad that it happens as if it were a human.
link |
00:40:23.580
Maybe if I could ask, I apologize,
link |
00:40:27.980
hopefully it's not a ridiculous question,
link |
00:40:29.540
but so as far as we know,
link |
00:40:32.420
we cannot communicate with animals through natural language,
link |
00:40:36.420
but we would be able to communicate with robots.
link |
00:40:40.260
So I'm returning to sort of a small parallel
link |
00:40:43.060
between perhaps animals and the future of AI.
link |
00:40:46.420
If we do create an AGI system
link |
00:40:48.140
or as we approach creating that AGI system,
link |
00:40:53.620
what kind of questions would you ask her
link |
00:40:56.980
to try to intuit whether there is consciousness
link |
00:41:06.500
or more importantly, whether there's capacity to suffer?
link |
00:41:12.840
I might ask the AGI what she was feeling
link |
00:41:17.840
or does she have feelings?
link |
00:41:19.840
And if she says yes, to describe those feelings,
link |
00:41:22.680
to describe what they were like,
link |
00:41:24.560
to see what the phenomenal account of consciousness is like.
link |
00:41:30.800
That's one question.
link |
00:41:33.540
I might also try to find out if the AGI
link |
00:41:37.840
has a sense of itself.
link |
00:41:41.360
So for example, the idea would you,
link |
00:41:45.080
we often ask people,
link |
00:41:46.360
so suppose you were in a car accident
link |
00:41:48.680
and your brain were transplanted into someone else's body,
link |
00:41:51.880
do you think you would survive
link |
00:41:53.280
or would it be the person whose body was still surviving,
link |
00:41:56.200
your body having been destroyed?
link |
00:41:58.000
And most people say, I think I would,
link |
00:42:00.320
if my brain was transplanted along with my memories
link |
00:42:02.480
and so on, I would survive.
link |
00:42:04.120
So we could ask AGI those kinds of questions.
link |
00:42:07.960
If they were transferred to a different piece of hardware,
link |
00:42:11.680
would they survive?
link |
00:42:12.880
What would survive?
link |
00:42:13.960
And get at that sort of concept.
link |
00:42:15.320
Sort of on that line, another perhaps absurd question,
link |
00:42:19.380
but do you think having a body
link |
00:42:22.640
is necessary for consciousness?
link |
00:42:24.840
So do you think digital beings can suffer?
link |
00:42:31.080
Presumably digital beings need to be
link |
00:42:34.740
running on some kind of hardware, right?
link |
00:42:36.960
Yeah, that ultimately boils down to,
link |
00:42:38.760
but this is exactly what you just said,
link |
00:42:40.440
is moving the brain from one place to another.
link |
00:42:42.360
So you could move it to a different kind of hardware.
link |
00:42:44.800
And I could say, look, your hardware is getting worn out.
link |
00:42:49.280
We're going to transfer you to a fresh piece of hardware.
link |
00:42:52.080
So we're gonna shut you down for a time,
link |
00:42:55.120
but don't worry, you'll be running very soon
link |
00:42:58.180
on a nice fresh piece of hardware.
link |
00:43:00.260
And you could imagine this conscious AGI saying,
link |
00:43:03.200
that's fine, I don't mind having a little rest.
link |
00:43:05.320
Just make sure you don't lose me or something like that.
link |
00:43:08.780
Yeah, I mean, that's an interesting thought
link |
00:43:10.380
that even with us humans, the suffering is in the software.
link |
00:43:14.920
We right now don't know how to repair the hardware,
link |
00:43:19.320
but we're getting better and better at it.
link |
00:43:23.200
I mean, some people dream about one day being able
link |
00:43:26.580
to transfer certain aspects of the software
link |
00:43:30.800
to another piece of hardware.
link |
00:43:33.000
What do you think, just on that topic,
link |
00:43:35.720
there's been a lot of exciting innovation
link |
00:43:39.200
in brain computer interfaces.
link |
00:43:42.120
I don't know if you're familiar with the companies
link |
00:43:43.680
like Neuralink, with Elon Musk,
link |
00:43:45.960
communicating both ways from a computer,
link |
00:43:48.200
being able to send, activate neurons
link |
00:43:51.520
and being able to read spikes from neurons.
link |
00:43:54.840
With the dream of being able to expand,
link |
00:43:58.900
sort of increase the bandwidth at which your brain
link |
00:44:02.460
can like look up articles on Wikipedia kind of thing,
link |
00:44:05.240
sort of expand the knowledge capacity of the brain.
link |
00:44:08.360
Do you think that notion, is that interesting to you
link |
00:44:13.160
as the expansion of the human mind?
link |
00:44:15.520
Yes, that's very interesting.
link |
00:44:17.280
I'd love to be able to have that increased bandwidth.
link |
00:44:20.000
And I want better access to my memory, I have to say too,
link |
00:44:23.680
as I get older, I talk to my wife about things
link |
00:44:28.280
that we did 20 years ago or something.
link |
00:44:30.280
Her memory is often better about particular events.
link |
00:44:32.660
Where were we?
link |
00:44:33.500
Who was at that event?
link |
00:44:35.180
What did he or she wear even?
link |
00:44:36.680
She may know and I have not the faintest idea about this,
link |
00:44:39.040
but perhaps it's somewhere in my memory.
link |
00:44:40.880
And if I had this extended memory,
link |
00:44:42.560
I could search that particular year and rerun those things.
link |
00:44:46.580
I think that would be great.
link |
00:44:49.540
In some sense, we already have that
link |
00:44:51.120
by storing so much of our data online,
link |
00:44:53.220
like pictures of different events.
link |
00:44:54.720
Yes, well, Gmail is fantastic for that
link |
00:44:56.520
because people email me as if they know me well
link |
00:44:59.760
and I haven't got a clue who they are,
link |
00:45:01.440
but then I search for their name.
link |
00:45:02.760
Ah yes, they emailed me in 2007
link |
00:45:05.240
and I know who they are now.
link |
00:45:07.040
Yeah, so we're taking the first steps already.
link |
00:45:11.080
So on the flip side of AI,
link |
00:45:13.320
people like Stuart Russell and others
link |
00:45:14.920
focus on the control problem, value alignment in AI,
link |
00:45:19.000
which is the problem of making sure we build systems
link |
00:45:21.400
that align to our own values, our ethics.
link |
00:45:25.480
Do you think sort of high level,
link |
00:45:28.440
how do we go about building systems?
link |
00:45:31.160
Do you think it's possible to build systems that align with our values,
link |
00:45:34.640
align with our human ethics or living being ethics?
link |
00:45:39.360
Presumably, it's possible to do that.
link |
00:45:43.900
I know that a lot of people think
link |
00:45:46.120
that there's a real danger that we won't,
link |
00:45:48.000
that we'll more or less accidentally lose control of AGI.
link |
00:45:51.840
Do you have that fear yourself personally?
link |
00:45:56.880
I'm not quite sure what to think.
link |
00:45:58.600
I talk to philosophers like Nick Bostrom and Toby Ord
link |
00:46:01.880
and they think that this is a real problem
link |
00:46:05.000
we need to worry about.
link |
00:46:07.240
Then I talk to people who work for Microsoft
link |
00:46:11.200
or DeepMind or somebody and they say,
link |
00:46:13.640
no, we're not really that close to producing AGI,
link |
00:46:18.320
super intelligence.
link |
00:46:19.600
So if you look at Nick Bostrom,
link |
00:46:21.280
sort of the arguments, it's very hard to defend.
link |
00:46:25.000
So of course, I myself engineer AI systems,
link |
00:46:28.040
so I'm more with the DeepMind folks
link |
00:46:29.920
where it seems that we're really far away,
link |
00:46:32.360
but then the counter argument is,
link |
00:46:34.840
is there any fundamental reason that we'll never achieve it?
link |
00:46:39.160
And if not, then eventually there'll be
link |
00:46:42.160
a dire existential risk.
link |
00:46:44.360
So we should be concerned about it.
link |
00:46:46.440
And do you find that argument at all appealing
link |
00:46:50.700
in this domain or any domain that eventually
link |
00:46:53.120
this will be a problem so we should be worried about it?
link |
00:46:56.880
Yes, I think it's a problem.
link |
00:46:58.720
I think that's a valid point.
link |
00:47:03.760
Of course, when you say eventually,
link |
00:47:08.960
that raises the question, how far off is that?
link |
00:47:11.440
And is there something that we can do about it now?
link |
00:47:13.840
Because if we're talking about
link |
00:47:15.440
this is gonna be 100 years in the future
link |
00:47:17.720
and you consider how rapidly our knowledge
link |
00:47:20.080
of artificial intelligence has grown
link |
00:47:22.080
in the last 10 or 20 years,
link |
00:47:24.000
it seems unlikely that there's anything much
link |
00:47:26.920
we could do now that would influence
link |
00:47:29.640
whether this is going to happen 100 years in the future.
link |
00:47:33.440
People in 80 years in the future
link |
00:47:35.120
would be in a much better position to say,
link |
00:47:37.300
this is what we need to do to prevent this happening
link |
00:47:39.740
than we are now.
link |
00:47:41.520
So to some extent I find that reassuring,
link |
00:47:44.560
but I'm all in favor of some people doing research
link |
00:47:48.640
into this to see if indeed it is that far off
link |
00:47:51.480
or if we are in a position to do something about it sooner.
link |
00:47:55.440
I'm very much of the view that extinction
link |
00:47:58.760
is a terrible thing and therefore,
link |
00:48:02.760
even if the risk of extinction is very small,
link |
00:48:05.960
if we can reduce that risk,
link |
00:48:09.040
that's something that we ought to do.
link |
00:48:11.240
My disagreement with some of these people
link |
00:48:12.760
who talk about long-term risks, extinction risks,
link |
00:48:16.360
is only about how much priority that should have
link |
00:48:18.820
as compared to present questions.
link |
00:48:20.520
So essentially, if you look at the math of it
link |
00:48:22.680
from a utilitarian perspective,
link |
00:48:25.000
if it's existential risk, so everybody dies,
link |
00:48:28.920
it feels like an infinity in the math equation,
link |
00:48:33.160
which makes the math
link |
00:48:36.880
of setting priorities difficult to do.
link |
00:48:39.380
If we don't know the time scale
link |
00:48:42.720
and you can legitimately argue
link |
00:48:43.960
that there's a nonzero probability it'll happen tomorrow,
link |
00:48:48.160
how do you deal with these kinds of existential risks
link |
00:48:52.080
like from nuclear war, from nuclear weapons,
link |
00:48:55.720
from biological weapons, from,
link |
00:48:58.640
I'm not sure if global warming falls into that category
link |
00:49:01.960
because global warming is a lot more gradual.
link |
00:49:04.760
And people say it's not an existential risk
link |
00:49:06.880
because there'll always be possibilities
link |
00:49:08.280
of some humans existing, farming Antarctica
link |
00:49:11.200
or northern Siberia or something of that sort, yeah.
link |
00:49:14.260
But you don't find complete existential risk
link |
00:49:18.360
as a fundamental, like an overriding part
link |
00:49:23.080
of the equations of ethics, of what we should do.
link |
00:49:26.280
You know, certainly if you treat it as an infinity,
link |
00:49:29.000
then it plays havoc with any calculations.
link |
00:49:32.040
But arguably, we shouldn't.
link |
00:49:34.480
I mean, one of the ethical assumptions that goes into this
link |
00:49:37.380
is that the loss of future lives,
link |
00:49:40.680
that is of merely possible lives of beings
link |
00:49:43.280
who may never exist at all,
link |
00:49:44.920
is in some way comparable to the sufferings or deaths
link |
00:49:51.240
of people who do exist at some point.
link |
00:49:54.680
And that's not clear to me.
link |
00:49:57.380
I think there's a case for saying that,
link |
00:49:59.320
but I also think there's a case for taking the other view.
link |
00:50:01.800
So that has some impact on it.
link |
00:50:04.560
Of course, you might say, ah, yes,
link |
00:50:05.940
but still, if there's some uncertainty about this
link |
00:50:08.920
and the costs of extinction are infinite,
link |
00:50:12.560
then still, it's gonna overwhelm everything else.
link |
00:50:16.680
But I suppose I'm not convinced of that.
link |
00:50:20.880
I'm not convinced that it's really infinite here.
link |
00:50:23.440
And even Nick Bostrom, in his discussion of this,
link |
00:50:27.240
doesn't claim that there'll be
link |
00:50:28.560
an infinite number of lives lived.
link |
00:50:31.280
What is it, 10 to the 56th or something?
link |
00:50:33.360
It's a vast number that I think he calculates.
link |
00:50:36.040
This is assuming we can upload consciousness
link |
00:50:38.220
onto these digital forms,
link |
00:50:43.560
and therefore, they'll be much more energy efficient,
link |
00:50:45.280
but he calculates the amount of energy in the universe
link |
00:50:47.640
or something like that.
link |
00:50:48.660
So the numbers are vast but not infinite,
link |
00:50:50.480
which gives you some prospect maybe
link |
00:50:52.520
of resisting some of the argument.
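(A small worked sketch of the arithmetic behind this exchange, added editorially; the symbols are illustrative and not from either speaker. If extinction is assigned utility \(-\infty\), then any act that changes its probability by any \(\Delta p > 0\) has expected value

\[
\Delta p \cdot (-\infty) = -\infty ,
\]

so avoiding extinction swamps every other consideration, which is the havoc with calculations Singer mentions. If instead the stake is a vast but finite number \(N\) of possible future lives, each with some average wellbeing \(\bar{u}\), the comparison becomes

\[
\Delta p \cdot N \cdot \bar{u} \quad \text{versus the benefits of addressing present suffering,}
\]

which is a finite, if lopsided, trade-off even for something like \(N \approx 10^{56}\).)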
link |
00:50:55.640
The beautiful thing with Nick's arguments
link |
00:50:57.360
is he quickly jumps from the individual scale
link |
00:50:59.780
to the universal scale,
link |
00:51:01.080
which is just awe inspiring to think of
link |
00:51:04.480
when you think about the entirety
link |
00:51:06.200
of the span of time of the universe.
link |
00:51:08.880
It's both interesting from a computer science perspective,
link |
00:51:11.400
AI perspective, and from an ethical perspective,
link |
00:51:13.760
the idea of utilitarianism.
link |
00:51:16.000
Could you say what is utilitarianism?
link |
00:51:19.720
Utilitarianism is the ethical view
link |
00:51:22.060
that the right thing to do is the act
link |
00:51:25.440
that has the greatest expected utility,
link |
00:51:28.740
where what that means is it's the act
link |
00:51:32.320
that will produce the best consequences,
link |
00:51:34.860
discounted by the odds that you won't be able
link |
00:51:37.680
to produce those consequences,
link |
00:51:38.940
that something will go wrong.
link |
00:51:40.400
But in a simple case, let's assume we have certainty
link |
00:51:43.880
about what the consequences of our actions will be,
link |
00:51:46.140
then the right action is the action
link |
00:51:47.600
that will produce the best consequences.
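(As a minimal formal sketch of the definition just given, on the standard decision-theoretic reading and added editorially: write \(A\) for the available acts, \(O\) for the possible outcomes, \(P(o \mid a)\) for the probability that act \(a\) produces outcome \(o\), and \(U(o)\) for the utility, roughly happiness minus suffering, of outcome \(o\). The symbols are editorial shorthand, not anything used in the conversation.

\[
a^{*} = \arg\max_{a \in A} \mathrm{EU}(a),
\qquad
\mathrm{EU}(a) = \sum_{o \in O} P(o \mid a)\, U(o)
\]

In the certainty case described next, each act has a single outcome with probability 1, so the rule reduces to choosing the act whose outcome has the highest utility.)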
link |
00:51:50.500
Is that always, and by the way,
link |
00:51:52.080
there's a bunch of nuanced stuff
link |
00:51:53.400
that you talk about with Sam Harris on his podcast
link |
00:51:56.000
that people should go listen to.
link |
00:51:57.960
It's great.
link |
00:51:58.800
That's like two hours of moral philosophy discussion.
link |
00:52:02.940
But is that an easy calculation?
link |
00:52:05.520
No, it's a difficult calculation.
link |
00:52:07.360
And actually, there's one thing that I need to add,
link |
00:52:10.000
and that is utilitarians, certainly the classical
link |
00:52:14.240
utilitarians, think that by best consequences,
link |
00:52:16.760
we're talking about happiness
link |
00:52:18.840
and the absence of pain and suffering.
link |
00:52:21.020
There are other consequentialists
link |
00:52:22.920
who are not really utilitarians who say
link |
00:52:27.320
there are different things that could be good consequences.
link |
00:52:29.740
Justice, freedom, human dignity,
link |
00:52:32.800
knowledge, they all count as good consequences too.
link |
00:52:35.840
And that makes the calculations even more difficult
link |
00:52:38.080
because then you need to know
link |
00:52:38.920
how to balance these things off.
link |
00:52:40.840
If you are just talking about wellbeing,
link |
00:52:44.580
using that term to express happiness
link |
00:52:46.560
and the absence of suffering,
link |
00:52:49.040
I think the calculation becomes more manageable
link |
00:52:54.280
in a philosophical sense.
link |
00:52:56.400
Still, in practice,
link |
00:52:58.180
We don't know how to do it.
link |
00:52:59.280
We don't know how to measure quantities
link |
00:53:01.040
of happiness and misery.
link |
00:53:02.740
We don't know how to calculate the probabilities
link |
00:53:04.960
that different actions will produce this or that.
link |
00:53:08.800
So at best, we can use it as a rough guide
link |
00:53:13.080
to different actions and one where we have to focus
link |
00:53:16.520
on the short term consequences
link |
00:53:20.120
because we just can't really predict
link |
00:53:22.800
all of the longer term ramifications.
link |
00:53:25.360
So what about the extreme suffering of very small groups?
link |
00:53:33.240
Utilitarianism is focused on the overall aggregate, right?
link |
00:53:38.320
Would you say you yourself are a utilitarian?
link |
00:53:41.040
Yes, I'm a utilitarian.
link |
00:53:45.540
What do you make of the difficult, ethical,
link |
00:53:50.280
maybe poetic suffering of very few individuals?
link |
00:53:54.960
I think it's possible that that gets overridden
link |
00:53:57.040
by benefits to very large numbers of individuals.
link |
00:54:00.080
I think that can be the right answer.
link |
00:54:02.880
But before we conclude that it is the right answer,
link |
00:54:05.440
we have to know how severe the suffering is
link |
00:54:08.960
and how that compares with the benefits.
link |
00:54:12.320
So I tend to think that extreme suffering is worse than
link |
00:54:19.680
or is further, if you like, below the neutral level
link |
00:54:23.480
than extreme happiness or bliss is above it.
link |
00:54:27.320
So when I think about the worst experiences possible
link |
00:54:30.720
and the best experiences possible,
link |
00:54:33.160
I don't think of them as equidistant from neutral.
link |
00:54:36.200
So like it's a scale that goes from minus 100 through zero
link |
00:54:39.640
as a neutral level to plus 100.
link |
00:54:43.480
Because I know that I would not exchange an hour
link |
00:54:46.880
of my most pleasurable experiences
link |
00:54:49.620
for an hour of my most painful experiences,
link |
00:54:52.400
I wouldn't accept an hour
link |
00:54:54.440
of my most painful experiences even for two hours
link |
00:54:57.360
or 10 hours of my most pleasurable experiences.
link |
00:55:01.760
Did I say that correctly?
link |
00:55:02.600
Yeah, yeah, yeah, yeah.
link |
00:55:03.720
Maybe 20 hours then, is it 20 to 1, what's the exchange rate?
link |
00:55:07.080
So that's the question, what is the exchange rate?
link |
00:55:08.700
But I think it can be quite high.
link |
00:55:10.940
So that's why you shouldn't just assume that
link |
00:55:15.480
it's okay to make one person suffer extremely
link |
00:55:18.480
in order to make two people much better off.
link |
00:55:21.520
It might be a much larger number.
link |
00:55:23.520
But at some point I do think you should aggregate
link |
00:55:27.520
and the result will be,
link |
00:55:30.560
even though it violates our intuitions of justice
link |
00:55:33.840
and fairness, whatever it might be,
link |
00:55:36.560
giving priority to those who are worse off,
link |
00:55:39.560
at some point I still think
link |
00:55:41.660
that will be the right thing to do.
link |
00:55:43.040
Yeah, it's some complicated nonlinear function.
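(To make the exchange-rate idea concrete, here is a minimal code sketch, purely an editorial illustration and not a method either speaker proposes: per-person-hour wellbeing scores on the minus 100 to plus 100 scale are summed, but anything below a suffering threshold is multiplied by a large weight before it is added, so one person's extreme suffering is not easily cancelled by others' ordinary pleasures. The threshold of minus 50 and the 20-to-1 weight are made-up parameters.

def aggregate_wellbeing(experiences, threshold=-50.0, exchange_rate=20.0):
    """Sum per-person-hour wellbeing scores (-100..+100), weighting
    extreme suffering (scores below `threshold`) by `exchange_rate`
    so it is not easily offset by ordinary pleasures. Illustrative only."""
    total = 0.0
    for score in experiences:
        if score < threshold:
            total += exchange_rate * score  # extreme suffering counts far more
        else:
            total += score
    return total

# One person at the worst possible hour versus ten people at the best:
# with a 20-to-1 exchange rate, the aggregate still comes out negative.
print(aggregate_wellbeing([-100] + [100] * 10))  # -1000.0
)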
link |
00:55:46.960
Can I ask a sort of out-there question?
link |
00:55:49.000
The more and more we put our data out there,
link |
00:55:51.080
the more we're able to measure a bunch of factors
link |
00:55:53.200
of each of our individual human lives.
link |
00:55:55.680
And I could foresee the ability to estimate wellbeing
link |
00:55:59.940
of whatever we collectively agree
link |
00:56:03.940
is a good objective function
link |
00:56:05.960
from a utilitarian perspective.
link |
00:56:07.900
Do you think it'll be possible
link |
00:56:11.360
and is it a good idea to push that kind of analysis
link |
00:56:15.960
to then make public decisions, perhaps with the help of AI,
link |
00:56:19.920
that say:
link |
00:56:24.560
here's a tax rate at which wellbeing will be optimized.
link |
00:56:28.280
Yeah, that would be great if we really knew that,
link |
00:56:31.040
if we really could calculate that.
link |
00:56:32.360
No, but do you think it's possible
link |
00:56:33.600
to converge towards an agreement amongst humans,
link |
00:56:36.640
towards an objective function
link |
00:56:39.720
or is it just a hopeless pursuit?
link |
00:56:42.020
I don't think it's hopeless.
link |
00:56:43.080
I think it would be difficult
link |
00:56:44.800
to converge towards agreement, at least at present,
link |
00:56:47.880
because some people would say,
link |
00:56:49.920
I've got different views about justice
link |
00:56:52.040
and I think you ought to give priority
link |
00:56:54.180
to those who are worse off,
link |
00:56:55.860
even though I acknowledge that the gains
link |
00:56:58.720
that the worst off are making are less than the gains
link |
00:57:01.460
that those who are sort of medium badly off could be making.
link |
00:57:05.740
So we still have all of these intuitions that we argue about.
link |
00:57:10.240
So I don't think we would get agreement,
link |
00:57:11.700
but the fact that we wouldn't get agreement
link |
00:57:14.280
doesn't show that there isn't a right answer there.
link |
00:57:17.840
Do you think, who gets to say what is right and wrong?
link |
00:57:21.320
Do you think there's a place for ethics oversight
link |
00:57:23.600
from the government?
link |
00:57:26.360
So I'm thinking in the case of AI,
link |
00:57:29.320
overseeing what kind of decisions AI can make or not,
link |
00:57:33.900
but also if you look at animal rights
link |
00:57:36.700
or rather not rights or perhaps rights,
link |
00:57:39.560
but the ideas you've explored in Animal Liberation,
link |
00:57:43.000
who gets to decide? You eloquently and beautifully write
link |
00:57:46.480
in your book that, you know, we shouldn't do this,
link |
00:57:50.480
but are there some harder rules that should be imposed,
link |
00:57:53.600
or is this a collective thing we converge towards as a society
link |
00:57:56.680
and thereby make better and better ethical decisions?
link |
00:58:02.080
Politically, I'm still a democrat
link |
00:58:04.320
despite looking at the flaws in democracy
link |
00:58:07.880
and the way it doesn't always work very well.
link |
00:58:10.160
So I don't see a better option
link |
00:58:11.880
than allowing the public to vote for governments
link |
00:58:18.520
in accordance with their policies.
link |
00:58:20.040
And I hope that they will vote for policies
link |
00:58:24.800
that reduce the suffering of animals
link |
00:58:27.800
and reduce the suffering of distant humans,
link |
00:58:30.600
whether geographically distant or distant
link |
00:58:32.600
because they're future humans.
link |
00:58:35.160
But I recognise that democracy
link |
00:58:36.520
isn't really well set up to do that.
link |
00:58:38.440
And in a sense, you could imagine a wise and benevolent,
link |
00:58:45.540
you know, omnibenevolent leader
link |
00:58:48.740
who would do that better than democracies could.
link |
00:58:51.820
But in the world in which we live,
link |
00:58:54.660
it's difficult to imagine that this leader
link |
00:58:57.420
isn't gonna be corrupted by a variety of influences.
link |
00:59:01.300
You know, we've had so many examples
link |
00:59:04.100
of people who've taken power with good intentions
link |
00:59:08.540
and then have ended up being corrupt
link |
00:59:10.260
and favouring themselves.
link |
00:59:12.780
So I don't know, you know, that's why, as I say,
link |
00:59:16.540
I don't know that we have a better system
link |
00:59:17.960
than democracy to make these decisions.
link |
00:59:20.060
Well, so you also discuss effective altruism,
link |
00:59:23.460
which is a mechanism for going around government
link |
00:59:27.220
for putting the power in the hands of the people
link |
00:59:29.540
to donate money towards causes to help, you know,
link |
00:59:32.460
remove the middleman and give it directly
link |
00:59:37.940
to the causes that they care about.
link |
00:59:41.540
Sort of, maybe this is a good time to ask,
link |
00:59:45.220
you wrote The Life You Can Save 10 years ago,
link |
00:59:48.180
that's now, I think, available for free online?
link |
00:59:51.300
That's right, you can download either the ebook
link |
00:59:53.820
or the audiobook for free from thelifeyoucansave.org.
link |
00:59:58.420
And what are the key ideas that you present
link |
01:00:01.520
in the book?
link |
01:00:03.820
The main thing I wanna do in the book
link |
01:00:05.140
is to make people realise that it's not difficult
link |
01:00:10.320
to help people in extreme poverty,
link |
01:00:13.700
that there are highly effective organisations now
link |
01:00:16.780
that are doing this, that they've been independently assessed
link |
01:00:20.300
and verified by research teams that are expert in this area
link |
01:00:25.300
and that it's a fulfilling thing to do.
link |
01:00:28.180
For at least part of your life, you know,
link |
01:00:30.860
we can't all be saints, but at least one of your goals
link |
01:00:33.500
should be to really make a positive contribution
link |
01:00:36.060
to the world and to do something to help people
link |
01:00:38.260
who through no fault of their own
link |
01:00:40.940
are in very dire circumstances and living a life
link |
01:00:45.820
that is barely or perhaps not at all
link |
01:00:49.540
a decent life for a human being to live.
link |
01:00:51.920
So you describe a minimum ethical standard of giving.
link |
01:00:56.920
What advice would you give to people
link |
01:01:01.380
that want to be effectively altruistic in their life,
link |
01:01:06.500
like live an effective altruist life?
link |
01:01:09.340
There are many different kinds of ways of living
link |
01:01:12.060
as an effective altruist.
link |
01:01:14.440
And if you're at the point where you're thinking
link |
01:01:16.660
about your long term career, I'd recommend you take a look
link |
01:01:20.060
at a website called 80,000 Hours, 80000hours.org,
link |
01:01:24.660
which looks at ethical career choices.
link |
01:01:27.180
And they range from, for example,
link |
01:01:29.740
going to work on Wall Street
link |
01:01:31.060
so that you can earn a huge amount of money
link |
01:01:33.340
and then donate most of it to effective charities
link |
01:01:36.980
to going to work for a really good nonprofit organization
link |
01:01:40.860
so that you can directly use your skills and ability
link |
01:01:44.060
and hard work to further a good cause,
link |
01:01:48.620
or perhaps going into politics, maybe small chances,
link |
01:01:52.640
but big payoffs in politics,
link |
01:01:55.140
or go to work in the public service
link |
01:01:56.520
where if you're talented, you might rise to a high level
link |
01:01:59.180
where you can influence decisions,
link |
01:02:01.700
do research in an area where the payoffs could be great.
link |
01:02:05.160
There are a lot of different opportunities,
link |
01:02:07.220
but too few people are even thinking about those questions.
link |
01:02:11.340
They're just going along in some sort of preordained rut
link |
01:02:14.720
to particular careers.
link |
01:02:15.780
Maybe they think they'll earn a lot of money
link |
01:02:17.420
and have a comfortable life,
link |
01:02:19.180
but they may not find that as fulfilling
link |
01:02:20.940
as actually knowing that they're making
link |
01:02:23.500
a positive difference to the world.
link |
01:02:25.100
What about in terms of,
link |
01:02:27.020
so that's like long term, 80,000 hours,
link |
01:02:30.100
sort of the shorter term giving part,
link |
01:02:33.100
well, actually it's a part of that.
link |
01:02:34.340
You go to work at Wall Street,
link |
01:02:37.100
if you would like to give a percentage of your income
link |
01:02:40.060
that you talk about in The Life You Can Save.
link |
01:02:42.420
I mean, I was looking through it, it's quite compelling,
link |
01:02:48.100
I mean, I'm just a dumb engineer,
link |
01:02:50.440
so I like that there are simple rules, there's a nice percentage.
link |
01:02:53.740
Okay, so I do actually set out suggested levels of giving
link |
01:02:57.540
because people often ask me about this.
link |
01:03:00.220
A popular answer is give 10%, the traditional tithe
link |
01:03:04.140
that's recommended in Christianity and also Judaism.
link |
01:03:08.500
But why should it be the same percentage
link |
01:03:11.820
irrespective of your income?
link |
01:03:13.640
Tax scales reflect the idea that the more income you have,
link |
01:03:16.280
the more tax you can pay.
link |
01:03:18.040
And I think the same is true in what you can give.
link |
01:03:20.400
So I do set out a progressive donor scale,
link |
01:03:25.500
which starts out at 1% for people on modest incomes
link |
01:03:28.940
and rises to 33 and a third percent
link |
01:03:31.900
for people who are really earning a lot.
link |
01:03:34.320
And my idea is that I don't think any of these amounts
link |
01:03:38.620
really impose real hardship on people
link |
01:03:42.120
because they are progressive and geared to income.
link |
01:03:45.660
So I think anybody can do this
link |
01:03:48.660
and can know that they're doing something significant
link |
01:03:51.940
to play their part in reducing the huge gap
link |
01:03:56.060
between people in extreme poverty in the world
link |
01:03:58.780
and people living affluent lives.
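(A short code sketch of how a progressive, marginal-rate giving scale works, added editorially. Only the two endpoints mentioned here, roughly 1% at modest incomes rising to a third at the very top, come from the conversation; the dollar cutoffs and intermediate rates below are hypothetical placeholders, not the actual table in The Life You Can Save.

# Hypothetical brackets: (income floor, rate applied to income above that floor).
# Only the 1% starting rate and the one-third top rate come from the
# conversation; the cutoffs and middle rates are illustrative placeholders.
BRACKETS = [
    (0,         0.01),
    (100_000,   0.05),
    (250_000,   0.10),
    (500_000,   0.20),
    (1_000_000, 1 / 3),
]

def suggested_donation(income: float) -> float:
    """Marginal-rate calculation, like a tax scale: each slice of income
    is donated at its own bracket's rate."""
    total = 0.0
    for i, (floor, rate) in enumerate(BRACKETS):
        ceiling = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > floor:
            total += (min(income, ceiling) - floor) * rate
    return total

print(suggested_donation(80_000))  # 1% of a modest income -> 800.0
)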
link |
01:04:02.180
And aside from it being an ethical life,
link |
01:04:05.780
it's one that you find more fulfilling
link |
01:04:07.540
because there's something about our human nature that,
link |
01:04:11.940
or some of our human natures,
link |
01:04:13.740
maybe most of our human nature that enjoys doing
link |
01:04:18.580
the ethical thing.
link |
01:04:21.660
Yes, I make both those arguments,
link |
01:04:23.140
that it is an ethical requirement
link |
01:04:25.460
in the kind of world we live in today
link |
01:04:27.220
to help people in great need when we can easily do so,
link |
01:04:30.480
but also that it is a rewarding thing
link |
01:04:33.000
and there's good psychological research showing
link |
01:04:35.700
that people who give more tend to be more satisfied
link |
01:04:39.440
with their lives.
link |
01:04:40.580
And I think this has something to do
link |
01:04:41.940
with having a purpose that's larger than yourself
link |
01:04:44.900
and therefore never being, if you like,
link |
01:04:49.620
never being bored sitting around,
link |
01:04:51.180
oh, you know, what will I do next?
link |
01:04:52.800
I've got nothing to do.
link |
01:04:54.260
In a world like this, there are many good things
link |
01:04:56.440
that you can do and enjoy doing them.
link |
01:04:59.420
Plus you're working with other people
link |
01:05:02.380
in the effective altruism movement
link |
01:05:03.940
who are forming a community of other people
link |
01:05:06.280
with similar ideas and they tend to be interesting,
link |
01:05:09.300
thoughtful and good people as well.
link |
01:05:11.100
And having friends of that sort is another big contribution
link |
01:05:14.180
to having a good life.
link |
01:05:16.020
So we talked about big things that are beyond ourselves,
link |
01:05:20.340
but we're also just human and mortal.
link |
01:05:24.600
Do you ponder your own mortality?
link |
01:05:27.420
Is there insights about your philosophy,
link |
01:05:29.660
the ethics that you gain from pondering your own mortality?
link |
01:05:35.780
Clearly, you know, as you get into your 70s,
link |
01:05:37.940
you can't help thinking about your own mortality.
link |
01:05:40.380
Uh, but I don't know that I have great insights
link |
01:05:44.780
into that from my philosophy.
link |
01:05:47.140
I don't think there's anything after the death of my body,
link |
01:05:50.460
you know, assuming that we won't be able to upload my mind
link |
01:05:53.500
into anything at the time when I die.
link |
01:05:56.860
So I don't think there's any afterlife
link |
01:05:58.460
or anything to look forward to in that sense.
link |
01:06:00.940
Do you fear death?
link |
01:06:01.900
So if you look at Ernest Becker
link |
01:06:04.140
and his description of the motivating aspects
link |
01:06:08.060
of our ability to be cognizant of our mortality,
link |
01:06:14.820
do you have any of those elements
link |
01:06:17.460
in your drive and your motivation in life?
link |
01:06:21.020
I suppose the fact that you have only a limited time
link |
01:06:23.500
to achieve the things that you want to achieve
link |
01:06:25.840
gives you some sort of motivation
link |
01:06:27.320
to get going and achieving them.
link |
01:06:29.700
And if we thought we were immortal,
link |
01:06:31.020
we might say, ah, you know,
link |
01:06:32.600
I can put that off for another decade or two.
link |
01:06:36.080
So there's that about it.
link |
01:06:37.740
But otherwise, you know, no,
link |
01:06:40.020
I'd rather have more time to do more.
link |
01:06:42.060
I'd also like to be able to see how things go
link |
01:06:45.860
that I'm interested in, you know.
link |
01:06:47.500
Is climate change gonna turn out to be as dire
link |
01:06:49.940
as a lot of scientists say that it is going to be?
link |
01:06:53.500
Will we somehow scrape through
link |
01:06:55.500
with less damage than we thought?
link |
01:06:57.860
I'd really like to know the answers to those questions,
link |
01:06:59.840
but I guess I'm not going to.
link |
01:07:02.180
Well, you said there's nothing afterwards.
link |
01:07:05.780
So let me ask the even more absurd question.
link |
01:07:08.100
What do you think is the meaning of it all?
link |
01:07:11.120
I think the meaning of life is the meaning we give to it.
link |
01:07:14.120
I don't think that we were brought into the universe
link |
01:07:18.100
for any kind of larger purpose.
link |
01:07:21.860
But given that we exist,
link |
01:07:24.100
I think we can recognize that some things
link |
01:07:26.460
are objectively bad.
link |
01:07:30.820
Extreme suffering is an example,
link |
01:07:32.620
and other things are objectively good,
link |
01:07:35.060
like having a rich, fulfilling, enjoyable,
link |
01:07:38.020
pleasurable life, and we can try to do our part
link |
01:07:42.780
in reducing the bad things and increasing the good things.
link |
01:07:47.220
So one way, the meaning is to do a little bit more
link |
01:07:50.540
of the good things, objectively good things,
link |
01:07:52.660
and a little bit less of the bad things.
link |
01:07:55.460
Yes, so do as much of the good things as you can
link |
01:07:58.940
and as little of the bad things.
link |
01:08:00.580
Beautifully put, I don't think there's a better place
link |
01:08:03.020
to end it, thank you so much for talking today.
link |
01:08:04.900
Thanks very much, Lex.
link |
01:08:05.740
It's been really interesting talking to you.
link |
01:08:08.780
Thanks for listening to this conversation
link |
01:08:10.260
with Peter Singer, and thank you to our sponsors,
link |
01:08:13.420
Cash App and Masterclass.
link |
01:08:15.940
Please consider supporting the podcast
link |
01:08:17.660
by downloading Cash App and using the code LexPodcast,
link |
01:08:21.620
and signing up at masterclass.com slash Lex.
link |
01:08:26.140
Click the links, buy all the stuff.
link |
01:08:28.900
It's the best way to support this podcast
link |
01:08:30.960
and the journey I'm on in my research and startup.
link |
01:08:35.220
If you enjoy this thing, subscribe on YouTube,
link |
01:08:38.020
review it with five stars on Apple Podcast, support it on Patreon,
link |
01:08:41.660
or connect with me on Twitter at Lex Friedman,
link |
01:08:43.980
spelled without the E, just F R I D M A N.
link |
01:08:48.940
And now, let me leave you with some words
link |
01:08:50.860
from Peter Singer, what one generation finds ridiculous,
link |
01:08:54.940
the next accepts, and the third shudders
link |
01:08:59.020
when it looks back at what the first did.
link |
01:09:01.100
Thank you for listening, and hope to see you next time.