Peter Singer: Suffering in Humans, Animals, and AI | Lex Fridman Podcast #107
The following is a conversation with Peter Singer, professor of bioethics at Princeton University, best known for his 1975 book, Animal Liberation, which makes an ethical case against eating meat. He has written brilliantly from an ethical perspective on extreme poverty, euthanasia, human genetic selection, sports doping, the sale of kidneys, and happiness generally, including in his books Ethics in the Real World and The Life You Can Save. He was a key popularizer of the effective altruism movement and is generally considered one of the most influential philosophers in the world.
Quick summary of the ads. Two sponsors, Cash App and Masterclass. Please consider supporting the podcast by downloading Cash App and using code LEXPODCAST, and by signing up at masterclass.com slash lex. Click the links, buy the stuff. It really is the best way to support the podcast and the journey I'm on.
As you may know, I primarily eat a ketogenic or carnivore diet, which means that most of my diet is made up of meat. I do not hunt the food I eat, though one day I hope to. I love fishing, for example. Fishing and eating the fish I catch has always felt much more honest than participating in the supply chain of factory farming. From an ethics perspective, this part of my life has always had a cloud over it. It makes me think. I've tried a few times in my life to reduce the amount of meat I eat, but for some reason, whatever the makeup of my body, whatever the way I practice my diet, I get a lot of mental and physical energy and performance from eating meat. So both intellectually and physically, it's a continued journey for me. I return to Peter's work often to reevaluate the ethics of how I live this aspect of my life.
Let me also say that you may be a vegan or you may be a meat eater, and you may be upset by the words I say or Peter says, but I ask for this podcast and other episodes of this podcast that you keep an open mind. I may, and probably will, talk with people you disagree with. Please try to really listen, especially to people you disagree with, and give me and the world the gift of being a participant in a patient, intelligent, and nuanced discourse. If your instinct and desire is to be a voice of mockery towards those you disagree with, please unsubscribe. My source of joy and inspiration here has been to be a part of a community that thinks deeply and speaks with empathy and compassion. That is what I hope to continue being a part of, and I hope you join as well.
If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcasts, follow on Spotify, support on Patreon, or connect with me on Twitter at Lex Fridman. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation.
This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar. Since Cash App allows you to buy Bitcoin, let me mention that cryptocurrency, in the context of the history of money, is fascinating. I recommend Ascent of Money as a great book on this history. Debits and credits on ledgers started around 30,000 years ago, the US dollar was created over 200 years ago, and the first decentralized cryptocurrency was released just over 10 years ago. So given that history, cryptocurrency is still very much in its early days of development, but it's still aiming to, and just might, redefine the nature of money. So again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world.
This show is sponsored by Masterclass. Sign up at masterclass.com slash lex to get a discount and to support this podcast. When I first heard about Masterclass, I thought it was too good to be true. For $180 a year, you get an all-access pass to watch courses from, to list some of my favorites, Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, creator of SimCity and The Sims, on game design. I promise I'll start streaming games at some point soon. Carlos Santana on guitar, Garry Kasparov on chess, Daniel Negreanu on poker, and many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money. By the way, you can watch it on basically any device. Once again, sign up at masterclass.com slash lex to get a discount and to support this podcast.
And now, here's my conversation with Peter Singer.
When did you first become conscious of the fact that there is much suffering in the world?

I think I was conscious of the fact that there's a lot of suffering in the world pretty much as soon as I was able to understand anything about my family and its background, because I lost three of my four grandparents in the Holocaust, and obviously I knew why I only had one grandparent. She herself had been in the camps and survived, so I think I knew a lot about that pretty early.
My entire family comes from the Soviet Union. I was born in the Soviet Union. World War II has deep roots in the culture, and the suffering that the war brought, the millions of people who died, is in the music, is in the literature, is in the culture. What do you think was the impact of the war broadly on our society?
The war had many impacts. I think one of them, a beneficial impact, is that it showed what racism and authoritarian government can do, and at least as far as the West was concerned, I think that meant that I grew up in an era in which there wasn't the kind of overt racism and antisemitism that had existed for my parents in Europe. I was growing up in Australia, and certainly that was clearly seen as something completely unacceptable.

There was also, though, a fear of a further outbreak of war, which this time we expected would be nuclear because of the way the Second World War had ended. So there was this overshadowing of my childhood, the possibility that I would not live to grow up and be an adult because of a catastrophic nuclear war. The film On the Beach was made, in which the city that I was living in, Melbourne, was the last place on Earth to have living human beings, because of the nuclear cloud that was spreading from the north, so that certainly gave us a bit of that sense.

There were clearly many other legacies that we got of the war as well, in the whole setup of the world and the Cold War that followed. All of that has its roots in the Second World War.
There is much beauty that comes from war. I had a conversation with Eric Weinstein. He said everything is great about war except all the death and suffering. Do you think there's something positive that came from the war, the mirror that it put to our society, sort of the ripple effects on it, ethically speaking? Do you think there are positive aspects to war?
I find it hard to see positive aspects in war, and some of the things that other people think of as positive and beautiful may be worth questioning. So there's a certain kind of patriotism. People say during wartime, we all pull together, we all work together against a common enemy. An outside enemy does unite a country, and in general, it's good for countries to be united and have common purposes, but it also engenders a kind of nationalism and a patriotism that can't be questioned, and that I'm more skeptical about.
What about the brotherhood that people talk about, from soldiers? The sort of counterintuitive, sad idea that the closest that people feel to each other is in those moments of suffering, of being at the edge, of seeing your comrades dying in your arms. That somehow brings people extremely close together. Suffering brings people closer together. How do you make sense of that?
It may bring people close together, but there are other ways of bonding and being close to people, I think, without the suffering and death that war entails.
Perhaps you could see, you could already hear, the romanticized Russian in me. We tend to romanticize suffering just a little bit in our literature and culture and so on. Could you take a step back, and I apologize if it's a ridiculous question, but what is suffering? If you were to try to define what suffering is, how would you go about it?
Suffering is a conscious state. There can be no suffering for a being who is completely unconscious, and it's distinguished from other conscious states in terms of being one that, considered just in itself, we would rather be without. It's a conscious state that we want to stop if we're experiencing it, or that we want to avoid having again if we've experienced it in the past. And that's, as I say, emphasized for its own sake, because of course people will say, well, suffering strengthens the spirit, it has good consequences. And sometimes it does have those consequences, and of course sometimes we might undergo suffering. We set ourselves a challenge to run a marathon or climb a mountain, or even just go to the dentist so that the toothache doesn't get worse, even though we know the dentist is going to hurt us. So I'm not saying that we never choose suffering, but I am saying that, other things being equal, we would rather not be in that state of consciousness.
Is the ultimate goal, sort of, you have the new 10 year anniversary release of The Life You Can Save, a really influential book, we'll talk about it a bunch of times throughout this conversation, but do you think it's possible to eradicate suffering? Is that the goal? Or do we want to achieve a kind of minimum threshold of suffering and then keep a little drop of poison to keep things interesting in the world?
In practice, I don't think we ever will eliminate suffering, so I think that little drop of poison, as you put it, or, if you like, the contrasting dash of an unpleasant color in an otherwise harmonious and beautiful composition, is always going to be there. If you ask me whether, in theory, if we could get rid of it, we should, I think the answer depends on whether we would in fact be better off, or whether, by eliminating the suffering, we would also eliminate some of the highs, the positive highs. And if that's so, then we might be prepared to say it's worth having a minimum of suffering in order to have the best possible experiences as well.
Is there a relative aspect to suffering? So when you talk about eradicating poverty in the world, is it that the more you succeed, the more the bar of what defines poverty rises? Or is there, at the basic human ethical level, a bar that's absolute, such that once you get above it, then we can morally converge to feeling like we have eradicated poverty?
I think it's both, and I think this is true for poverty as well as suffering. There's an objective level of suffering or of poverty, where we're talking about objective indicators: you're constantly hungry, you can't get enough food; you're constantly cold, you can't get warm; you have some physical pains that you're never rid of. I think those things are objective. But it may also be true that if you do get rid of that, and you get to the stage where all of those basic needs have been met, there may still be new forms of suffering that develop, and perhaps that's what we're seeing in the affluent societies we have: people get bored, for example. They don't need to spend so many hours a day earning money to get enough to eat and shelter, so now they're bored, they lack a sense of purpose. And that then is a kind of relative suffering that is distinct from the objective forms of suffering.
But in your focus on eradicating suffering, you don't think about that kind of thing, the kind of interesting challenges and suffering that emerge in affluent societies? In your ethical, philosophical brain, is that of interest at all?
It would be of interest to me if we had eliminated all of the objective forms of suffering, which I think of as generally more severe, and also perhaps easier, at this stage anyway, to know how to eliminate. So yes, in some future state, when we've eliminated those objective forms of suffering, I would be interested in trying to eliminate the relative forms as well, but that's not a practical need for me at the moment.
Sorry to linger on it, because you kind of said it, but is elimination the goal for the affluent society? Do you see suffering as a creative force?
Suffering can be a creative force. I think, repeating what I said about the highs and whether we need some of the lows to experience the highs, it may be that suffering makes us more creative, and we regard that as worthwhile. Maybe that brings some of those highs with it that we would not have had if we'd had no suffering. I don't really know. Many people have suggested that, and I certainly have no basis for denying it. And if it's true, then I would not want to eliminate suffering completely.
But the focus is on the absolute: not to be cold, not to be hungry.
Yes, at the present stage of where the world's population is, that's the focus.
Talking about human nature for a second, do you think people are inherently good, or do we all have good and evil in us, such that basically everyone is capable of evil based on the environment?
Certainly most of us have potential for both good and evil. I'm not prepared to say that everyone is capable of evil. Maybe there are some people who, even in the worst of circumstances, would not be capable of it, but most of us are very susceptible to environmental influences. So when we look at things that we were talking about previously, let's say what the Nazis did during the Holocaust, I think it's quite difficult to say, I know that I would not have done those things even if I were in the same circumstances as those who did them. Even if, let's say, I had grown up under the Nazi regime and had been indoctrinated with racist ideas, had also had the idea that I must obey orders, follow the commands of the Führer, plus, of course, perhaps the threat that if I didn't do certain things, I might get sent to the Russian front, and that would be a pretty grim fate. I think it's really hard for anybody to say, nevertheless, I know I would not have killed those Jews, or done whatever else it was that they did.
Well, what's your intuition? How many people would be able to say that?

Truly to be able to say it, I think very few. Less than 10%.
To me, it seems a very interesting and powerful thing to meditate on. I've read a lot about World War II, and I can't escape the thought that I would not have been one of the 10%.
Right. I have to say, I simply don't know. I would like to hope that I would have been one of the 10%, but I don't really have any basis for claiming that I would have been different from the majority.
Is it a worthwhile thing to contemplate?

It would be interesting if we could find a way of really finding these answers. There obviously is quite a bit of research on people during the Holocaust, on how ordinary Germans got led to do terrible things, and there are also studies of the resistance, some heroic people in the White Rose group, for example, who resisted even though they knew they were likely to die for it. But I don't know whether these studies can really answer your larger question of how many people would have been capable of doing that.
Well, the reason I think it's interesting is that in the world, as you described, when there are things that you'd like to do that are good, that are objectively good, it's useful to think about whether I'm not willing to do something, or not willing to acknowledge something as good and the right thing to do, simply because I'm scared of putting my life at risk, of damaging my life in some kind of way. And that kind of thought exercise is helpful for understanding what the right thing to do is, given my current skill set and capacity. There are things that are convenient, and I wonder if there are things that are highly inconvenient, where I would have to experience derision, or hatred, or death, or all those kinds of things, but that are truly the right thing to do. And that kind of balance, I feel, is difficult to think about in America in the current times. It seems easier to put yourself back in history, where you can sort of objectively contemplate how willing you are to do the right thing when the cost is high.
True, but I think we do face those challenges today, and I think we can still ask ourselves those questions. So one stand that I took, more than 40 years ago now, was to stop eating meat, become a vegetarian, at a time when you hardly met anybody who was a vegetarian, or if you did, they might have been a Hindu, or they might have had some weird theories about meat and health. And I know, thinking about making that decision, I was convinced that it was the right thing to do, but I still did have to think, are all my friends going to think that I'm a crank because I'm now refusing to eat meat? So I'm not saying there were any terrible sanctions, obviously, but I thought about that, and I guess I decided, well, I still think this is the right thing to do, and I'll put up with that if it happens. And one or two friends were clearly uncomfortable with that decision, but that was pretty minor compared to the historical examples that we've been talking about.
But there are other issues around today, like global poverty and what we ought to be doing about it, where people, I think, have the opportunity to take a stand on what's the right thing to do now. Climate change would be a third question where, again, people are taking a stand. I can look at Greta Thunberg there and say, well, I think it must have taken a lot of courage for a schoolgirl to say, I'm going to go on strike about climate change, and see what happens.
Yeah, especially in this divisive world. She gets exceptionally huge amounts of support, which is a very difficult environment for a teenager to operate in.
In your book Ethics in the Real World, an amazing book, people should check it out, 82 brief essays on things that matter, one of the essays asks: should robots have rights? You've written about this, so let me ask, should robots have rights?
If we ever develop robots capable of consciousness, capable of having their own internal perspective on what's happening to them, so that their lives can go well or badly for them, then robots should have rights. Until that happens, they shouldn't.
So is consciousness essentially a prerequisite to suffering? Put another way, is everything that possesses consciousness capable of suffering? And if so, what is consciousness?
I certainly think that consciousness is a prerequisite for suffering. You can't suffer if you're not conscious. But is it true that every being that is conscious will suffer, or has to be capable of suffering? I suppose you could imagine a kind of consciousness, especially if we can construct it artificially, that's capable of experiencing pleasure but just automatically cuts out consciousness when there would be suffering. So it's like an instant anesthesia as soon as something is going to cause you suffering. That's possible, but it doesn't exist, as far as we know, on this planet yet.
You asked what consciousness is. Philosophers often talk about it as there being a subject of experiences. So you and I, and everybody listening to this, is a subject of experience. There is a conscious subject who is taking things in, responding to them in various ways, feeling good about them, feeling bad about them. And that's different from the kinds of artificial intelligence we have now. I take out my phone, I ask Google for directions to where I'm going, Google gives me the directions, and I choose to take a different way. Google doesn't care. It's not like I'm offending Google or anything like that. There is no subject of experiences there, and I think that's the indication that the Google AI we have now is not conscious, or at least that that level of AI is not conscious. And that's the way to think about it. Now, it may be difficult to tell, of course, whether a certain AI is or isn't conscious. It may mimic consciousness, and we can't tell if it's only mimicking it or if it's the real thing. But that's what we're looking for: is there a subject of experience, a perspective on the world from which things can go well or badly?
So our idea of what suffering looks like comes from just watching ourselves when we're in pain, or when we're experiencing pleasure. It's not only.

Pleasure and pain.

Yes. And then, you could push back on this, but I would say that's how we kind of build an intuition about animals: we can infer the similarities between humans and animals, and so infer that they're suffering or not, and that they're conscious or not, based on certain things.
So what if robots, you mentioned Google Maps, and I've done this experiment. I work in robotics, just on my own: I have several Roomba robots, and I play with different speech interaction, voice-based interaction. And if the Roomba or the robot or Google Maps shows any signs of pain, like screaming or moaning or being displeased by something you've done, then in my mind, I can't help but immediately upgrade it. And even when I myself programmed it in, just having another entity that's now, for the moment, disjoint from me, showing signs of pain, makes me feel like it is conscious. I immediately realize that it's not, obviously, but that feeling is there. So I guess, what do you think about a world where Google Maps and Roombas are pretending to be conscious, and we descendants of apes are not smart enough to realize they're not, or whatever, they appear to be conscious, and so you then have to give them rights? The reason I'm asking is that that kind of capability may be closer than we realize.
Yes, that kind of capability may be closer, but I don't think it follows that we have to give them rights. I suppose the argument for saying that, in those circumstances, we should give them rights is that if we don't, we'll harden ourselves against other beings who are not robots and who really do suffer. That's a possibility: if we get used to looking at a being suffering and saying, yeah, we don't have to do anything about that, that being doesn't have any rights, maybe we'll feel the same about animals, for instance. And interestingly, among philosophers and thinkers who denied that we have any direct duties to animals, and this includes people like Thomas Aquinas and Immanuel Kant, they did say, yes, but still it's better not to be cruel to them, not because of the suffering we're inflicting on the animals, but because if we are, we may develop a cruel disposition, and this will be bad for humans, because we're more likely to be cruel to other humans, and that would be wrong.
But you don't accept that kind of argument.
I don't accept that as the basis of the argument for why we shouldn't be cruel to animals. I think the basis of the argument for why we shouldn't be cruel to animals is just that we're inflicting suffering on them, and the suffering is a bad thing. But possibly I might accept some sort of parallel of that argument as a reason why you shouldn't be cruel to these robots that mimic the symptoms of pain, if it's going to be harder for us to distinguish.
I would venture to say, I'd like to disagree with you, and with most people, I think. At the risk of sounding crazy, I would like to say that if that Roomba is dedicated to faking the consciousness and the suffering, I think it will be impossible for us to tell the difference. I would like to apply the same argument as with animals to robots: that they deserve rights in that sense. Now, we might outlaw the addition of those kinds of features into Roombas, but once you add them, I'm quite surprised by the upgrade in consciousness that the display of suffering creates. It's a totally open world, but I'd like to just note the difference between animals and other humans: in the robot case, we've added it in ourselves, and therefore we can say something about how real it is. But I would like to say that the display of it is what makes it real. I'm not a philosopher, I'm not making that argument, but I'd at least like to add that as a possibility. And I've been surprised by it, is all I'm trying to articulate, poorly, I suppose.
So there is a philosophical view that has been held about humans which is rather like what you're talking about, and that's behaviorism. Behaviorism was employed both in psychology, people like B.F. Skinner was a famous behaviorist, where it was more a matter of, what is it that makes this a science? Well, you need to have behavior, because that's what you can observe; you can't observe consciousness. But in philosophy, the view was defended by people like Gilbert Ryle, who was a professor of philosophy at Oxford and wrote a book called The Concept of Mind, in which, in this phase of linguistic philosophy in the 1940s, he said, well, the meaning of a term is its use, and we use terms like so-and-so is in pain when we see somebody writhing or screaming or trying to escape some stimulus, and that's the meaning of the term. So that's what it is to be in pain, and you point to the behavior. And Norman Malcolm, who was another philosopher in that school, from Cornell, had the view that, what is it to dream? After all, we can't see other people's dreams. Well, when people wake up and say, I've just had a dream of, here I was, undressed, walking down the main street, or whatever it is you've dreamt, that's what it is to have a dream. It's basically to wake up and recall something. So you could apply this to what you're talking about and say, what it is to be in pain is to exhibit these symptoms of pain behavior, and therefore these robots are in pain. That's what the word means. But nowadays, not many people think that Ryle's kind of philosophical behaviorism is really very plausible, so I think they would say the same about your view.
So, yes, I just spoke with Noam Chomsky, who was part of dismantling the behaviorist movement, and I'm with that 100% for studying human behavior. But I am one of the few people in the world who has made Roombas scream in pain, and I just don't know what to do with that empirical evidence, because it's hard. Philosophically, I agree. But the only reason I philosophically agree in that case is because I was the programmer. If somebody else was the programmer, I'm not sure I would be able to interpret that well. So I think it's a new world, and I was just curious what your thoughts are. For now, you feel that the display of what we can intellectually say is a fake display of suffering is not suffering.
That's right, that would be my view.
link |
But that's consistent, of course,
link |
with the idea that it's part of our nature
link |
to respond to this display
link |
if it's reasonably authentically done.
link |
And therefore it's understandable
link |
that people would feel this,
link |
and maybe, as I said, it's even a good thing
link |
that they do feel it,
link |
and you wouldn't want to harden yourself against it
link |
because then you might harden yourself
link |
against beings that really are suffering.
link |
But there's this line, so you said,
link |
once an artificial general intelligence system,
link |
a human level intelligence system, becomes conscious,
link |
I guess if I could just linger on it,
link |
now, I've written really dumb programs
link |
that just say things that I told them to say,
link |
but how do you know when a system like Alexa,
link |
which is sufficiently complex
link |
that you can't introspect to how it works,
link |
starts giving you signs of consciousness
link |
through natural language?
link |
That there's a feeling,
link |
there's another entity there that's self-aware,
link |
that has a fear of death, a mortality,
link |
that has awareness of itself
link |
that we kind of associate with other living creatures.
link |
I guess I'm sort of trying to do the slippery slope
link |
from the very naive thing where I started
link |
into something where it's sufficiently a black box
link |
to where it's starting to feel like it's conscious.
link |
Where's that threshold
link |
where you would start getting uncomfortable
link |
with the idea of robot suffering, do you think?
link |
I don't know enough about the programming
link |
that would be going into this, really, to answer this question.
link |
But I presume that somebody who does know more about this
link |
could look at the program
link |
and see whether we can explain the behaviors
link |
in a parsimonious way that doesn't require us
link |
to suggest that some sort of consciousness has emerged.
link |
Or alternatively, whether you're in a situation
link |
where you say, I don't know how this is happening,
link |
the program does generate a kind of artificial
link |
general intelligence which is autonomous,
link |
starts to do things itself and is autonomous
link |
of the basic programming that set it up.
link |
And so it's quite possible that actually
link |
we have achieved consciousness
link |
in a system of artificial intelligence.
link |
Sort of the approach that I work with,
link |
most of the community is really excited about now
link |
is with learning methods, so machine learning.
link |
And the learning methods unfortunately
link |
are not capable of revealing how they work,
link |
which is why somebody like Noam Chomsky criticizes them.
link |
You create powerful systems that are able
link |
to do certain things without understanding
link |
the theory, the physics, the science of how it works.
link |
And so it's possible if those are the kinds
link |
of methods that succeed, we won't be able
link |
to know exactly, sort of try to reduce,
link |
try to find whether this thing is conscious or not,
link |
this thing is intelligent or not.
link |
It's simply that, when we talk to it,
link |
it displays wit and humor and cleverness
link |
and emotion and fear, and then we won't be able
link |
to say where in the billions of nodes,
link |
neurons in this artificial neural network
link |
is the fear coming from.
link |
So in that case, that's a really interesting place
link |
where we do now start to return to behaviorism and say.
link |
Yeah, that is an interesting issue.
link |
I would say that if we have serious doubts
link |
and think it might be conscious,
link |
then we ought to try to give it the benefit
link |
of the doubt, just as I would say with animals.
link |
I think we can be highly confident
link |
that vertebrates are conscious,
link |
but when we get down, and some invertebrates
link |
like the octopus, but with insects,
link |
it's much harder to be confident of that.
link |
I think we should give them the benefit
link |
of the doubt where we can, which means,
link |
I think it would be wrong to torture an insect,
link |
but it doesn't necessarily mean it's wrong
link |
to slap a mosquito that's about to bite you
link |
and stop you getting to sleep.
link |
So I think you try to achieve some balance
link |
in these circumstances of uncertainty.
link |
If it's okay with you, if we can go back just briefly.
link |
So 44 years ago, like you mentioned, 40 plus years ago,
link |
you've written Animal Liberation,
link |
the classic book that started,
link |
that launched, that was the foundation
link |
of the movement of Animal Liberation.
link |
Can you summarize the key set of ideas
link |
that underpin that book?
link |
Certainly, the key idea that underlies that book
link |
is the concept of speciesism,
link |
though I did not invent that term.
link |
I took it from a man called Richard Ryder,
link |
who was in Oxford when I was,
link |
and I saw a pamphlet that he'd written
link |
about experiments on chimpanzees that used that term.
link |
But I think I contributed
link |
to making it philosophically more precise
link |
and to getting it into a broader audience.
link |
And the idea is that we have a bias or a prejudice
link |
against taking seriously the interests of beings
link |
who are not members of our species.
link |
Just as in the past, Europeans, for example,
link |
had a bias against taking seriously
link |
the interests of Africans, racism.
link |
And men have had a bias against taking seriously
link |
the interests of women, sexism.
link |
So I think something analogous, not completely identical,
link |
but something analogous goes on
link |
and has gone on for a very long time
link |
with the way humans see themselves vis-à-vis animals.
link |
We see ourselves as more important.
link |
We see animals as existing to serve our needs.
link |
And you're gonna find this very explicit
link |
in earlier philosophers from Aristotle
link |
through to Kant and others.
link |
And either we don't need to take their interests
link |
into account at all,
link |
or we can discount them because they're not humans.
link |
They can count a little bit,
link |
but they don't count nearly as much as humans do.
link |
My book argues that that attitude is responsible
link |
for a lot of the things that we do to animals
link |
that are wrong, confining them indoors
link |
in very crowded, cramped conditions in factory farms
link |
to produce meat or eggs or milk more cheaply,
link |
using them in some research that's by no means essential
link |
for survival or wellbeing, and a whole lot of other things,
link |
some of the sports and things that we do to animals.
link |
So I think that's unjustified
link |
because I think the significance of pain and suffering
link |
does not depend on the species of the being
link |
who is in pain or suffering
link |
any more than it depends on the race or sex of the being
link |
who is in pain or suffering.
link |
And I think we ought to rethink our treatment of animals
link |
along the lines of saying,
link |
if the pain is just as great in an animal,
link |
then it's just as bad that it happens as if it were a human.
link |
Maybe if I could ask, I apologize,
link |
hopefully it's not a ridiculous question,
link |
but so as far as we know,
link |
we cannot communicate with animals through natural language,
link |
but we would be able to communicate with robots.
link |
So I'm returning to sort of a small parallel
link |
between perhaps animals and the future of AI.
link |
If we do create an AGI system
link |
or as we approach creating that AGI system,
link |
what kind of questions would you ask her
link |
to try to intuit whether there is consciousness
link |
or more importantly, whether there's capacity to suffer?
link |
I might ask the AGI what she was feeling
link |
or does she have feelings?
link |
And if she says yes, to describe those feelings,
link |
to describe what they were like,
link |
to see what the phenomenal account of consciousness is like.
link |
That's one question.
link |
I might also try to find out if the AGI
link |
has a sense of itself.
link |
So for example,
link |
we often ask people,
link |
so suppose you were in a car accident
link |
and your brain were transplanted into someone else's body,
link |
do you think you would survive
link |
or would it be the person whose body was still surviving,
link |
your body having been destroyed?
link |
And most people say, I think I would,
link |
if my brain was transplanted along with my memories
link |
and so on, I would survive.
link |
So we could ask AGI those kinds of questions.
link |
If they were transferred to a different piece of hardware,
link |
would they survive?
link |
What would survive?
link |
And get at that sort of concept.
link |
Sort of on that line, another perhaps absurd question,
link |
but do you think having a body
link |
is necessary for consciousness?
link |
So do you think digital beings can suffer?
link |
Presumably digital beings need to be
link |
running on some kind of hardware, right?
link |
Yeah, that ultimately boils down to,
link |
but this is exactly what you just said,
link |
is moving the brain from one place to another.
link |
So you could move it to a different kind of hardware.
link |
And I could say, look, your hardware is getting worn out.
link |
We're going to transfer you to a fresh piece of hardware.
link |
So we're gonna shut you down for a time,
link |
but don't worry, you'll be running very soon
link |
on a nice fresh piece of hardware.
link |
And you could imagine this conscious AGI saying,
link |
that's fine, I don't mind having a little rest.
link |
Just make sure you don't lose me or something like that.
link |
Yeah, I mean, that's an interesting thought
link |
that even with us humans, the suffering is in the software.
link |
We right now don't know how to repair the hardware,
link |
but we're getting better and better at it.
link |
I mean, some people dream about one day being able
link |
to transfer certain aspects of the software
link |
to another piece of hardware.
link |
What do you think, just on that topic,
link |
there's been a lot of exciting innovation
link |
in brain computer interfaces.
link |
I don't know if you're familiar with the companies
link |
like Neuralink, with Elon Musk,
link |
communicating both ways from a computer,
link |
being able to send, activate neurons
link |
and being able to read spikes from neurons.
link |
With the dream of being able to expand,
link |
sort of increase the bandwidth at which your brain
link |
can like look up articles on Wikipedia kind of thing,
link |
sort of expand the knowledge capacity of the brain.
link |
Do you think that notion, is that interesting to you
link |
as the expansion of the human mind?
link |
Yes, that's very interesting.
link |
I'd love to be able to have that increased bandwidth.
link |
And I want better access to my memory, I have to say too,
link |
as I get older, I talk to my wife about things
link |
that we did 20 years ago or something.
link |
Her memory is often better about particular events.
link |
Who was at that event?
link |
What did he or she wear even?
link |
She may know and I have not the faintest idea about this,
link |
but perhaps it's somewhere in my memory.
link |
And if I had this extended memory,
link |
I could search that particular year and rerun those things.
link |
I think that would be great.
link |
In some sense, we already have that
link |
by storing so much of our data online,
link |
like pictures of different events.
link |
Yes, well, Gmail is fantastic for that
link |
because people email me as if they know me well
link |
and I haven't got a clue who they are,
link |
but then I search for their name.
link |
Ah yes, they emailed me in 2007
link |
and I know who they are now.
link |
Yeah, so we're taking the first steps already.
link |
So on the flip side of AI,
link |
people like Stuart Russell and others
link |
focus on the control problem, value alignment in AI,
link |
which is the problem of making sure we build systems
link |
that align to our own values, our ethics.
link |
Do you think sort of high level,
link |
how do we go about building systems?
link |
Do you think it's possible to build systems that align with our values,
link |
align with our human ethics or living being ethics?
link |
Presumably, it's possible to do that.
link |
I know that a lot of people think
link |
that there's a real danger that we won't,
link |
that we'll more or less accidentally lose control of AGI.
link |
Do you have that fear yourself personally?
link |
I'm not quite sure what to think.
link |
I talk to philosophers like Nick Bostrom and Toby Ord
link |
and they think that this is a real problem
link |
we need to worry about.
link |
Then I talk to people who work for Microsoft
link |
or DeepMind or somebody and they say,
link |
no, we're not really that close to producing AGI,
link |
super intelligence.
link |
So if you look at Nick Bostrom,
link |
sort of the arguments, it's very hard to defend.
link |
Of course, I myself engineer AI systems,
link |
so I'm more with the DeepMind folks
link |
where it seems that we're really far away,
link |
but then the counter argument is,
link |
is there any fundamental reason that we'll never achieve it?
link |
And if not, then eventually there'll be
link |
a dire existential risk.
link |
So we should be concerned about it.
link |
And do you find that argument at all appealing
link |
in this domain or any domain that eventually
link |
this will be a problem so we should be worried about it?
link |
Yes, I think it's a problem.
link |
I think that's a valid point.
link |
Of course, when you say eventually,
link |
that raises the question, how far off is that?
link |
And is there something that we can do about it now?
link |
Because if we're talking about
link |
this is gonna be 100 years in the future
link |
and you consider how rapidly our knowledge
link |
of artificial intelligence has grown
link |
in the last 10 or 20 years,
link |
it seems unlikely that there's anything much
link |
we could do now that would influence
link |
whether this is going to happen 100 years in the future.
link |
People 80 years in the future
link |
would be in a much better position to say,
link |
this is what we need to do to prevent this from happening.
link |
So to some extent I find that reassuring,
link |
but I'm all in favor of some people doing research
link |
into this to see if indeed it is that far off
link |
or if we are in a position to do something about it sooner.
link |
I'm very much of the view that extinction
link |
is a terrible thing and therefore,
link |
even if the risk of extinction is very small,
link |
if we can reduce that risk,
link |
that's something that we ought to do.
link |
My disagreement with some of these people
link |
who talk about longterm risks, extinction risks,
link |
is only about how much priority that should have
link |
as compared to present questions.
link |
So essentially, if you look at the math of it
link |
from a utilitarian perspective,
link |
if it's existential risk, so everybody dies,
link |
it feels like an infinity in the math equation,
link |
and that makes the math
link |
of the priorities difficult to do.
link |
If we don't know the time scale,
link |
and you can legitimately argue
link |
that there's a nonzero probability that it'll happen tomorrow,
link |
then how do you deal with these kinds of existential risks
link |
like from nuclear war, from nuclear weapons,
link |
from biological weapons, from,
link |
I'm not sure if global warming falls into that category
link |
because global warming is a lot more gradual.
link |
And people say it's not an existential risk
link |
because there'll always be possibilities
link |
of some humans existing, farming Antarctica
link |
or northern Siberia or something of that sort, yeah.
link |
But you don't find the complete existential risks
link |
as a fundamental, like an overriding part
link |
of the equations of ethics, of what we should do.
link |
You know, certainly if you treat it as an infinity,
link |
then it plays havoc with any calculations.
link |
But arguably, we shouldn't.
link |
I mean, one of the ethical assumptions that goes into this
link |
is that the loss of future lives,
link |
that is of merely possible lives of beings
link |
who may never exist at all,
link |
is in some way comparable to the sufferings or deaths
link |
of people who do exist at some point.
link |
And that's not clear to me.
link |
I think there's a case for saying that,
link |
but I also think there's a case for taking the other view.
link |
So that has some impact on it.
link |
Of course, you might say, ah, yes,
link |
but still, if there's some uncertainty about this
link |
and the costs of extinction are infinite,
link |
then still, it's gonna overwhelm everything else.
link |
But I suppose I'm not convinced of that.
link |
I'm not convinced that it's really infinite here.
link |
And even Nick Bostrom, in his discussion of this,
link |
doesn't claim that there'll be
link |
an infinite number of lives lived.
link |
What is it, 10 to the 56th or something?
link |
It's a vast number that I think he calculates.
link |
This is assuming we can upload consciousness
link |
onto these digital forms,
link |
and therefore, they'll be much more energy efficient,
link |
but he calculates the amount of energy in the universe
link |
or something like that.
link |
So the numbers are vast but not infinite,
link |
which gives you some prospect maybe
link |
of resisting some of the argument.
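The point about vast-but-finite stakes can be put in one line of arithmetic: with a finite number of potential future lives, the expected value of reducing extinction risk is enormous but finite, so it can in principle be weighed against present concerns rather than automatically overwhelming them. Both numbers below are illustrative, using the 10 to the 56th figure floated in the conversation; neither is anyone's considered estimate.

```python
# Why "vast but not infinite" matters: with a finite count N of potential
# future lives, the expected value of a reduction in extinction risk is
# delta * N -- enormous, but finite, so the calculation stays tractable.
# Both numbers here are invented for illustration.

N = 10 ** 56      # vast-but-finite count of potential future lives
delta = 1e-10     # hypothetical tiny reduction in extinction probability

expected_lives_saved = delta * N
print(expected_lives_saved)                   # on the order of 1e46
print(expected_lives_saved == float("inf"))   # False: huge, but finite
```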
link |
The beautiful thing with Nick's arguments
link |
is he quickly jumps from the individual scale
link |
to the universal scale,
link |
which is just awe inspiring to think of
link |
when you think about the entirety
link |
of the span of time of the universe.
link |
It's both interesting from a computer science perspective,
link |
AI perspective, and from an ethical perspective,
link |
the idea of utilitarianism.
link |
Could you say what is utilitarianism?
link |
Utilitarianism is the ethical view
link |
that the right thing to do is the act
link |
that has the greatest expected utility,
link |
where what that means is it's the act
link |
that will produce the best consequences,
link |
discounted by the odds that you won't be able
link |
to produce those consequences,
link |
that something will go wrong.
link |
But in a simple case, let's assume we have certainty
link |
about what the consequences of our actions will be,
link |
then the right action is the action
link |
that will produce the best consequences.
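As a rough sketch, the decision rule stated here can be written down directly: score each available act by its probability-weighted consequences and pick the maximum. The acts and numbers below are invented purely for illustration.

```python
# A minimal sketch of the expected-utility rule described above.
# Each act is a list of (probability, utility) outcome pairs; the right act
# is the one whose probability-weighted sum is greatest.

def expected_utility(outcomes):
    """Sum of probability-weighted utilities for one act."""
    return sum(p * u for p, u in outcomes)

def best_act(acts):
    """Return the name of the act with the greatest expected utility."""
    return max(acts, key=lambda name: expected_utility(acts[name]))

acts = {
    # a safe act: a certain, modest benefit
    "safe_option": [(1.0, 10.0)],
    # a risky act: a larger benefit, discounted by the odds it goes wrong
    "risky_option": [(0.6, 25.0), (0.4, -5.0)],
}

print(best_act(acts))  # risky_option: 0.6*25 - 0.4*5 = 13 > 10
```

In the certainty case mentioned above, every act has a single outcome with probability 1, and the rule collapses to "pick the act with the best consequences."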
link |
Is that always, and by the way,
link |
there's a bunch of nuanced stuff
link |
that you talked about with Sam Harris on his podcast,
link |
which people should go listen to.
link |
That's like two hours of moral philosophy discussion.
link |
But is that an easy calculation?
link |
No, it's a difficult calculation.
link |
And actually, there's one thing that I need to add,
link |
and that is utilitarians, certainly the classical
link |
utilitarians, think that by best consequences,
link |
we're talking about happiness
link |
and the absence of pain and suffering.
link |
There are other consequentialists
link |
who are not really utilitarians who say
link |
there are different things that could be good consequences.
link |
Justice, freedom, human dignity,
link |
knowledge, they all count as good consequences too.
link |
And that makes the calculations even more difficult
link |
because then you need to know
link |
how to balance these things off.
link |
If you are just talking about wellbeing,
link |
using that term to express happiness
link |
and the absence of suffering,
link |
I think the calculation becomes more manageable
link |
in a philosophical sense.
link |
But still, in practice,
link |
we don't know how to do it.
link |
We don't know how to measure quantities
link |
of happiness and misery.
link |
We don't know how to calculate the probabilities
link |
that different actions will produce, this or that.
link |
So at best, we can use it as a rough guide
link |
to different actions and one where we have to focus
link |
on the short term consequences
link |
because we just can't really predict
link |
all of the longer term ramifications.
link |
So what about the extreme suffering of very small groups?
link |
Utilitarianism is focused on the overall aggregate, right?
link |
Would you say you yourself are a utilitarian?
link |
Yes, I'm a utilitarian.
link |
What do you make of the difficult, ethical,
link |
maybe poetic suffering of very few individuals?
link |
I think it's possible that that gets overridden
link |
by benefits to very large numbers of individuals.
link |
I think that can be the right answer.
link |
But before we conclude that it is the right answer,
link |
we have to know how severe the suffering is
link |
and how that compares with the benefits.
link |
So I tend to think that extreme suffering is worse than
link |
or is further, if you like, below the neutral level
link |
than extreme happiness or bliss is above it.
link |
So when I think about the worst experiences possible
link |
and the best experiences possible,
link |
I don't think of them as equidistant from neutral.
link |
So like it's a scale that goes from minus 100 through zero
link |
as a neutral level to plus 100.
link |
Because I know that I would not exchange an hour
link |
of my most pleasurable experiences
link |
for an hour of my most painful experiences,
link |
in fact, I wouldn't have an hour
link |
of my most painful experiences even for two hours
link |
or 10 hours of my most pleasurable experiences.
link |
Did I say that correctly?
link |
Yeah, yeah, yeah, yeah.
link |
Maybe 20 hours then, it's 21, what's the exchange rate?
link |
So that's the question, what is the exchange rate?
link |
But I think it can be quite high.
link |
So that's why you shouldn't just assume that
link |
it's okay to make one person suffer extremely
link |
in order to make two people much better off.
link |
It might be a much larger number.
link |
But at some point I do think you should aggregate
link |
and the result will be,
link |
even though it violates our intuitions of justice
link |
and fairness, whatever it might be,
link |
giving priority to those who are worse off,
link |
at some point I still think
link |
that will be the right thing to do.
link |
Yeah, it's some complicated nonlinear function.
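One way to make "some complicated nonlinear function" concrete is to weight negative experiences more heavily than positive ones when aggregating. The minus-100-to-plus-100 scale comes from the conversation, but the exchange rate of 10 below is an arbitrary illustrative choice, not a number anyone endorses here.

```python
# A toy model of the asymmetry discussed above: experiences are scored from
# -100 (worst suffering) to +100 (greatest bliss), but suffering is weighted
# more heavily than an equal amount of pleasure. The weight of 10 is a
# hypothetical "exchange rate" chosen purely for illustration.

SUFFERING_WEIGHT = 10.0

def weighted_value(score):
    """Weighted moral value of one experience score."""
    return score * SUFFERING_WEIGHT if score < 0 else float(score)

def aggregate(scores):
    """Aggregate wellbeing with suffering weighted extra heavily."""
    return sum(weighted_value(s) for s in scores)

# One person in extreme pain is not outweighed by two people in bliss...
print(aggregate([-100, 100, 100]))     # -800.0
# ...but with enough people benefiting, the aggregate does turn positive.
print(aggregate([-100] + [100] * 11))  # 100.0
```

This captures both halves of the view: you shouldn't assume one person's extreme suffering is justified by making two people better off, yet at some point aggregation does tip the balance.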
link |
Can I ask a sort of out there question?
link |
The more and more we put our data out there,
link |
the more we're able to measure a bunch of factors
link |
of each of our individual human lives.
link |
And I could foresee the ability to estimate wellbeing
link |
of whatever we together collectively agree
link |
is a good objective function
link |
from a utilitarian perspective.
link |
Do you think it'll be possible
link |
and is a good idea to push that kind of analysis
link |
to make then public decisions perhaps with the help of AI
link |
that here's a tax rate,
link |
here's a tax rate at which wellbeing will be optimized.
link |
Yeah, that would be great if we really knew that,
link |
if we really could calculate that.
link |
No, but do you think it's possible
link |
to converge towards an agreement amongst humans,
link |
towards an objective function
link |
or is it just a hopeless pursuit?
link |
I don't think it's hopeless.
link |
I think it would be difficult
link |
to converge towards agreement, at least at present,
link |
because some people would say,
link |
I've got different views about justice
link |
and I think you ought to give priority
link |
to those who are worse off,
link |
even though I acknowledge that the gains
link |
that the worst off are making are less than the gains
link |
that those who are sort of medium badly off could be making.
link |
So we still have all of these intuitions that we argue about.
link |
So I don't think we would get agreement,
link |
but the fact that we wouldn't get agreement
link |
doesn't show that there isn't a right answer there.
link |
Do you think, who gets to say what is right and wrong?
link |
Do you think there's place for ethics oversight
link |
from the government?
link |
So I'm thinking in the case of AI,
link |
overseeing what kind of decisions AI can make or not,
link |
but also if you look at animal rights
link |
or rather not rights or perhaps rights,
link |
but the ideas you've explored in animal liberation,
link |
who gets to, so you eloquently and beautifully write
link |
in your book that, you know, we shouldn't do this,
link |
but are there some harder rules that should be imposed,
link |
or is this a collective thing we converge towards as a society
link |
and thereby make better and better ethical decisions?
link |
Politically, I'm still a democrat
link |
despite looking at the flaws in democracy
link |
and the way it doesn't work always very well.
link |
So I don't see a better option
link |
than allowing the public to vote for governments
link |
in accordance with their policies.
link |
And I hope that they will vote for policies
link |
that reduce the suffering of animals
link |
and reduce the suffering of distant humans,
link |
whether geographically distant or distant
link |
because they're future humans.
link |
But I recognise that democracy
link |
isn't really well set up to do that.
link |
And in a sense, you could imagine a wise and benevolent,
link |
you know, omnibenevolent leader
link |
who would do that better than democracies could.
link |
But in the world in which we live,
link |
it's difficult to imagine that this leader
link |
isn't gonna be corrupted by a variety of influences.
link |
You know, we've had so many examples
link |
of people who've taken power with good intentions
link |
and then have ended up being corrupt
link |
and favouring themselves.
link |
So I don't know, you know, that's why, as I say,
link |
I don't know that we have a better system
link |
than democracy to make these decisions.
link |
Well, so you also discuss effective altruism,
link |
which is a mechanism for going around government
link |
for putting the power in the hands of the people
link |
to donate money towards causes to help, you know,
link |
remove the middleman and give it directly
link |
to the causes that they care about.
link |
Sort of, maybe this is a good time to ask,
link |
you, 10 years ago, wrote The Life You Can Save,
link |
that's now, I think, available for free online?
link |
That's right, you can download either the ebook
link |
or the audiobook free from thelifeyoucansave.org.
link |
And what are the key ideas that you present in the book?
link |
The main thing I wanna do in the book
link |
is to make people realise that it's not difficult
link |
to help people in extreme poverty,
link |
that there are highly effective organisations now
link |
that are doing this, that they've been independently assessed
link |
and verified by research teams that are expert in this area
link |
and that it's a fulfilling thing to do
link |
to, for at least part of your life, you know,
link |
we can't all be saints, but at least one of your goals
link |
should be to really make a positive contribution
link |
to the world and to do something to help people
link |
who through no fault of their own
link |
are in very dire circumstances and living a life
link |
that is barely or perhaps not at all
link |
a decent life for a human being to live.
link |
So you describe a minimum ethical standard of giving.
link |
What advice would you give to people
link |
that want to be effectively altruistic in their life,
link |
like live an effective altruism life?
link |
There are many different kinds of ways of living
link |
as an effective altruist.
link |
And if you're at the point where you're thinking
link |
about your long term career, I'd recommend you take a look
link |
at a website called 80,000 Hours, 80000hours.org,
link |
which looks at ethical career choices.
link |
And they range from, for example,
link |
going to work on Wall Street
link |
so that you can earn a huge amount of money
link |
and then donate most of it to effective charities
link |
to going to work for a really good nonprofit organization
link |
so that you can directly use your skills and ability
link |
and hard work to further a good cause,
link |
or perhaps going into politics, maybe small chances,
link |
but big payoffs in politics,
link |
go to work in the public service
link |
where if you're talented, you might rise to a high level
link |
where you can influence decisions,
link |
do research in an area where the payoffs could be great.
link |
There are a lot of different opportunities,
link |
but too few people are even thinking about those questions.
link |
They're just going along in some sort of preordained rut
link |
to particular careers.
link |
Maybe they think they'll earn a lot of money
link |
and have a comfortable life,
link |
but they may not find that as fulfilling
link |
as actually knowing that they're making
link |
a positive difference to the world.
link |
What about in terms of,
link |
so that's like long term, 80,000 hours,
link |
sort of shorter term giving part of,
link |
well, actually it's a part of that.
link |
You go to work at Wall Street,
link |
and you would like to give a percentage of your income,
link |
which you talk about in The Life You Can Save.
link |
I mean, I was looking through it, it's quite compelling.
link |
I mean, I'm just a dumb engineer,
link |
so I like simple rules, and there's a nice percentage.
link |
Okay, so I do actually set out suggested levels of giving
link |
because people often ask me about this.
link |
A popular answer is give 10%, the traditional tithe
link |
that's recommended in Christianity and also Judaism.
link |
But why should it be the same percentage
link |
irrespective of your income?
link |
Tax scales reflect the idea that the more income you have,
link |
the more tax you can pay.
link |
And I think the same is true in what you can give.
link |
So I do set out a progressive donor scale,
link |
which starts out at 1% for people on modest incomes
link |
and rises to 33 and a third percent
link |
for people who are really earning a lot.
link |
And my idea is that I don't think any of these amounts
link |
really impose real hardship on people
link |
because they are progressive and geared to income.
link |
So I think anybody can do this
link |
and can know that they're doing something significant
link |
to play their part in reducing the huge gap
link |
between people in extreme poverty in the world
link |
and people living affluent lives.
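The progressive scale described here can be sketched as a marginal-bracket function, like the tax scales it is modeled on. Only the 1% floor and the 33 1/3% top rate come from the conversation; the income thresholds and intermediate rates below are invented placeholders (the actual scale is set out in The Life You Can Save).

```python
# A sketch of a progressive giving scale in the spirit described above.
# Rates apply marginally, like tax brackets, so the suggestion rises
# smoothly with income. The 1% floor and 33 1/3% top rate come from the
# conversation; all thresholds and middle rates are hypothetical.

BRACKETS = [
    (0,         0.01),   # modest incomes: 1%
    (100_000,   0.05),   # hypothetical middle bracket
    (500_000,   0.10),   # hypothetical upper bracket
    (1_000_000, 1 / 3),  # really high earners: 33 1/3%
]

def suggested_gift(income):
    """Suggested annual donation, applying each rate only to income in its bracket."""
    gift = 0.0
    for i, (floor, rate) in enumerate(BRACKETS):
        ceiling = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > floor:
            gift += (min(income, ceiling) - floor) * rate
    return gift

print(suggested_gift(50_000))   # 500.0: 1% of a modest income
print(suggested_gift(200_000))  # 6000.0: 1% of the first 100k, 5% of the rest
```

Because the rates are marginal, no one faces a cliff where earning slightly more suddenly demands a much larger gift, which matches the claim that none of these amounts imposes real hardship.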
link |
And aside from it being an ethical life,
link |
it's one that you find more fulfilling
link |
because there's something about our human nature that,
link |
or some of our human natures,
link |
maybe most of our human nature that enjoys doing
link |
the ethical thing.
link |
Yes, I make both those arguments,
link |
that it is an ethical requirement
link |
in the kind of world we live in today
link |
to help people in great need when we can easily do so,
link |
but also that it is a rewarding thing
link |
and there's good psychological research showing
link |
that people who give more tend to be more satisfied
link |
And I think this has something to do
link |
with having a purpose that's larger than yourself
link |
and therefore never being, if you like,
link |
never being bored sitting around,
link |
oh, you know, what will I do next?
link |
I've got nothing to do.
link |
In a world like this, there are many good things
link |
that you can do and enjoy doing them.
link |
Plus you're working with other people
link |
in the effective altruism movement
link |
who are forming a community of other people
link |
with similar ideas and they tend to be interesting,
link |
thoughtful and good people as well.
link |
And having friends of that sort is another big contribution
link |
to having a good life.
link |
So we talked about big things that are beyond ourselves,
link |
but we're also just human and mortal.
link |
Do you ponder your own mortality?
link |
Is there insights about your philosophy,
link |
the ethics that you gain from pondering your own mortality?
link |
Clearly, you know, as you get into your 70s,
link |
you can't help thinking about your own mortality.
link |
Uh, but I don't know that I have great insights
link |
into that from my philosophy.
link |
I don't think there's anything after the death of my body,
link |
you know, assuming that we won't be able to upload my mind
link |
into anything at the time when I die.
link |
So I don't think there's any afterlife
link |
or anything to look forward to in that sense.
link |
Do you fear death?
link |
So if you look at Ernest Becker
link |
and describing the motivating aspects
link |
of our ability to be cognizant of our mortality,
link |
do you have any of those elements
link |
in your drive and your motivation in life?
link |
I suppose the fact that you have only a limited time
link |
to achieve the things that you want to achieve
link |
gives you some sort of motivation
link |
to get going and achieving them.
link |
And if we thought we were immortal,
link |
we might say, ah, you know,
link |
I can put that off for another decade or two.
link |
So there's that about it.
link |
But otherwise, you know, no,
link |
I'd rather have more time to do more.
link |
I'd also like to be able to see how things go
link |
that I'm interested in, you know.
link |
Is climate change gonna turn out to be as dire
link |
as a lot of scientists say that it is going to be?
link |
Will we somehow scrape through
link |
with less damage than we thought?
link |
I'd really like to know the answers to those questions,
link |
but I guess I'm not going to.
link |
Well, you said there's nothing afterwards.
link |
So let me ask the even more absurd question.
link |
What do you think is the meaning of it all?
link |
I think the meaning of life is the meaning we give to it.
link |
I don't think that we were brought into the universe
link |
for any kind of larger purpose.
link |
But given that we exist,
link |
I think we can recognize that some things
link |
are objectively bad.
link |
Extreme suffering is an example,
link |
and other things are objectively good,
link |
like having a rich, fulfilling, enjoyable,
link |
pleasurable life, and we can try to do our part
link |
in reducing the bad things and increasing the good things.
link |
So one way, the meaning is to do a little bit more
link |
of the good things, objectively good things,
link |
and a little bit less of the bad things.
link |
Yes, so do as much of the good things as you can
link |
and as little of the bad things.
link |
Beautifully put. I don't think there's a better place
link |
to end it, thank you so much for talking today.
link |
Thanks very much, Lex.
link |
It's been really interesting talking to you.
link |
Thanks for listening to this conversation
link |
with Peter Singer, and thank you to our sponsors,
link |
Cash App and Masterclass.
link |
Please consider supporting the podcast
link |
by downloading Cash App and using the code LexPodcast,
link |
and signing up at masterclass.com slash Lex.
link |
Click the links, buy all the stuff.
link |
It's the best way to support this podcast
link |
and the journey I'm on in my research and startup.
link |
If you enjoy this thing, subscribe on YouTube,
link |
review it with five stars on Apple Podcast, support on Patreon,
link |
or connect with me on Twitter at Lex Friedman,
link |
spelled without the E, just F R I D M A N.
link |
And now, let me leave you with some words
link |
from Peter Singer, what one generation finds ridiculous,
link |
the next accepts, and the third shudders
link |
when it looks back at what the first did.
link |
Thank you for listening, and hope to see you next time.