William MacAskill: Effective Altruism | Lex Fridman Podcast #84

The following is a conversation with William MacAskill. He's a philosopher, ethicist, and one of the originators of the effective altruism movement. His research focuses on the fundamentals of effective altruism, or the use of evidence and reason to help others as much as possible with our time and money, with a particular concentration on how to act given moral uncertainty. He's the author of Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference. He is a cofounder and the president of the Centre for Effective Altruism, CEA, which encourages people to commit to donate at least 10% of their income to the most effective charities. He cofounded 80,000 Hours, which is a nonprofit that provides research and advice on how you can best make a difference through your career.

This conversation was recorded before the outbreak of the coronavirus pandemic. For everyone feeling the medical, psychological, and financial burden of this crisis, I'm sending love your way. Stay strong. We're in this together. We'll beat this thing.

This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do one or two minutes of ads now, and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience.

This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Since Cash App allows you to send and receive money digitally, peer to peer, and security in all digital transactions is very important, let me mention the PCI Data Security Standard that Cash App is compliant with. I'm a big fan of standards for safety and security. PCI DSS is a good example of that, where a bunch of competitors got together and agreed that there needs to be a global standard around the security of transactions. Now, we just need to do the same for autonomous vehicles and AI systems in general. So again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world.

And now, here's my conversation with William MacAskill.
What does utopia for humans and all life on Earth look like for you?

That's a great question. What I want to say is that we don't know, and the route to the utopia we want to get to is an indirect one that I call the long reflection. So, a period of post-scarcity, where we no longer have the kind of urgent problems we have today, but instead can spend, perhaps it's tens of thousands of years, debating, engaging in ethical reflection, before we take any kind of drastic, lock-in actions like spreading to the stars, and then we can figure out what is of moral value.

The long reflection, that's a really beautiful term.
So, if we look at Twitter for just a second, do you think human beings are able to reflect in a productive way? I don't mean to make it sound bad, because there is a lot of fighting and politics and division in our discourse. Maybe if you zoom out, it actually is civilized discourse. It might not feel like it, but when you zoom out. So, I don't want to say that Twitter is not civilized discourse. I actually believe it's more civilized than people give it credit for. But do you think the long reflection can actually be stable, where we as human beings, with our descendant-of-ape brains, would be able to sort of rationally discuss things together and arrive at ideas?

I think, overall, we're pretty good at discussing things rationally, and, at least in the earlier stages of our lives, being open to many different ideas, and being able to be convinced and change our views.
I think that Twitter is designed almost to bring out all our worst tendencies. So, if the long reflection were conducted on Twitter, maybe it would be better just not even to bother. But I think the challenge really is getting to a stage where we have a society that is as conducive as possible to rational reflection, to deliberation. I think we're actually very lucky to be in a liberal society where people are able to discuss a lot of ideas and so on. I think when we look to the future, it's not at all guaranteed that society would be like that, rather than a society where there's a fixed canon of values that are imposed on all of society, and where you aren't able to question them. That would be very bad from my perspective, because it means we wouldn't be able to figure out what the truth is.
I can already sense we're going to go down a million tangents, but what do you think is the... If Twitter is not optimal, what kind of mechanism in this modern age of technology can we design where the exchange of ideas could be both civilized and productive, and yet not be too constrained, where there are rules of what you can say and can't say, which, as you say, is not desirable, but yet still have some limits as to what can be said and so on? Do you have any ideas, thoughts on the possible future? Of course, nobody knows how to do it, but do you have thoughts of what a better Twitter might look like?
I think that text-based media are intrinsically going to find it very hard to be conducive to rational discussion, because if you think about it from an informational perspective, if I just send you a text of less than, what is it now, 280 characters, I think, that's a tiny amount of information compared to, say, you and I talking now, where you have access to the words I say, which is the same as in text, but also my tone, also my body language. We're very poorly designed to be able to assess... I have to read all of this context into anything you say. So maybe your partner sends you a text and it has a full stop at the end. Are they mad at you? You have to infer everything about this person's mental state from whether they put a full stop at the end of a text or not.
Well, the flip side of that is, is it truly text that's the problem here? Because there's a viral aspect to the text, where you could just post text nonstop. It's very immediate. In the times before Twitter, before the internet, the way you would exchange text is you would write books. And while that doesn't get body language, doesn't get tone and so on, it does actually boil down ideas, after some time of thinking, some editing. So is the immediacy and the viral nature, which produces the outrage mobs and so on, the potential problem?
I think that is a big issue. I think there's going to be this strong selection effect where something that provokes outrage, well, that's high arousal, you're more likely to retweet that, whereas sober analysis is not as sexy, not as viral. I do agree that long-form content is much better for productive discussion. In terms of the media that are very popular at the moment, I think that podcasting is great, where your podcasts are two hours long, so they're much more in depth than Twitter, and you are able to convey so much more nuance, so much more caveat, because it's an actual conversation. It's more like the sort of communication that we've evolved to do, rather than these very small little snippets of ideas that, when also combined with bad incentives, just clearly aren't designed for helping us get to the truth.
It's kind of interesting that it's not just the length of the podcast medium, but the fact that it was started by people that don't give a damn about quote-unquote demand. There's a relaxed style, the sort of style that Joe Rogan does; there's a freedom to express ideas in an unconstrained way that's very real. It's kind of funny that it feels so refreshingly real to us today, and I wonder what the future looks like. It's a little bit sad now that quite a lot of more popular people are getting into podcasting, and they try to sort of create, they try to control it, they try to constrain it in different kinds of ways. People I love, like Conan O'Brien and so on, different comedians. And I'd love to see the real aspects of this podcasting medium persist, maybe in TV, maybe in YouTube, maybe Netflix is pushing those kinds of ideas. It's a really exciting world, that kind of sharing of knowledge.
Yeah, I mean, I think it's a double-edged sword as it becomes more popular and more profitable, where on the one hand you'll get a lot more creativity, people doing more interesting things with the medium, but also perhaps you get this race to the bottom, where suddenly maybe it'll be hard to find good content on podcasts because it'll be so overwhelmed by the latest bit of viral outrage.
So speaking of that, jumping on effective altruism for a second: so much of that internet content is funded by advertisements. Just in the context of effective altruism, we're talking about the richest companies in the world, and they're funded by advertisements, essentially. Google, that's their primary source of income. Do you have any criticism of that source of income? Do you see that source of money as a potentially powerful source of money that could be used, well, certainly could be used for good, but is there something bad about that source of money?
I think there are significant worries with it, where it means that the incentives of the company might be quite misaligned with making people's lives better, where again, perhaps the incentives are towards increasing drama and debate on your social media feed in order that more people are going to be engaged, perhaps compulsively involved with the platform. Whereas there are other business models, like having an opt-in subscription service, where perhaps they have other issues, but there's much more of an incentive to provide a product that its users really want, because now I'm paying for this product. I'm paying for this thing that I want to buy, rather than I'm trying to use this thing and it's got a profit mechanism that is somewhat orthogonal to me actually just wanting to use the product. And so, I mean, in some cases it'll work better than others. I can, in theory, imagine Facebook having a subscription service, but I think it's unlikely to happen anytime soon.
Well, it's interesting, and it's weird now that you bring it up that it's unlikely. For example, I pay, I think, 10 bucks a month for YouTube Red, and I don't think I get much for that except no ads, but in general it's just a slightly better experience. And I would gladly, now I'm not wealthy, in fact I'm operating very close to zero dollars, but I would pay 10 bucks a month to Facebook and 10 bucks a month to Twitter for some kind of more control in terms of advertisements and so on.
But the other aspect of that is data, personal data. People are really sensitive about this, and I, as one who hopes to one day create a company that may use people's data to do good for the world, wonder about this. One, the psychology of why people are so paranoid. Well, I understand why, but they seem to be more paranoid than is justified at times. And the other is, how do you do it right? It seems that Facebook is doing it wrong. That's certainly the popular narrative; it's unclear to me actually how wrong. I tend to give them more benefit of the doubt, because it's a really hard thing to do right, and people don't necessarily realize it. But how do we, in your view, respect people's privacy?
Yeah, I mean, in the case of how worried people are about the use of their data, there's a lot of public debate and criticism about it. But when we look at people's revealed preferences, people's continuing massive use of these sorts of services, it's not clear to me how much people really do care. Perhaps they care a bit, but they're happy to, in effect, sell their data in order to be able to use a certain service.
That's a great term, revealed preferences. So these aren't preferences you self-report in a survey; this is your actions speaking. Oh yeah, I hate the idea of Facebook having my data, but then, when it comes to it, you actually are willing to give that data in exchange for being able to use the service.
And if that's the case, then I think unless we have some explanation about why there's some negative externality from that, or why there's some coordination failure, or if there's something that consumers are just really misled about, where they don't realize why giving away data like this is a really bad thing to do, then ultimately I kind of want to, you know, respect people's preferences. They can give away their data if they want. I think there's a big difference between companies' use of data and governments having data, where, you know, looking at the track record of history, governments knowing a lot about their people can be very bad if the government chooses to do bad things with it. And that's more worrying, I think.
So let's jump into it a little bit. Most people know, but actually I, two years ago, had no idea what effective altruism was until I saw there was a cool-looking event from an MIT group here. I think it's called the Effective Altruism Club, or group. I was like, what the heck is that? And one of my friends said that they're just a bunch of eccentric characters. So I was like, hell yes, I'm in. So I went to one of their events and looked up what it's about. It's quite a fascinating philosophy, and just a movement of ideas. So can you tell me, what is effective altruism?
Great. So the core of effective altruism is about trying to answer this question: how can I do as much good as possible with my scarce resources, my time and my money? And then, once we have our best-guess answers to that, trying to take those ideas and put them into practice, and do those things that we believe will do the most good. And we're now a community of people, many thousands of us around the world, who really are trying to answer that question as best we can and then use our time and money to make the world better.
So what's the difference between the sort of classical, general idea of altruism and effective altruism?

So normally when people try to do good, they often just aren't so reflective about those attempts. So someone might approach you on the street asking you to give to charity, and if you're feeling altruistic, you'll give to the person on the street. Or if you think, oh, I wanna do some good in my life, you might volunteer at a local place. Or perhaps you'll decide to pursue a career where you're working in a field that's more obviously beneficial, like being a doctor or a nurse or a healthcare professional. But it's very rare that people apply the same level of rigor and analytical thinking that they apply to lots of other areas. So take the case of someone approaching you on the street. Imagine if that person instead was saying, hey, I've got this amazing company. Do you want to invest in it? It would be insane. No one would ever think, oh, of course, I'll just invest in your company; you'd think it was a scam. But somehow we don't have that same level of rigor when it comes to doing good, even though the stakes are more important when it comes to trying to help others than trying to make money for ourselves.
Well, first of all, there is a psychology at the individual level where doing good just feels good. And so in some sense, on that pure psychological part, it doesn't matter. In fact, you don't wanna know if it does good or not, because most of the time it won't. So in a certain sense, it's understandable why altruism without the effective part is so appealing to a certain population. By the way, let's zoom out for a second. Two questions: do you think most people are good? And question number two, do you think most people want to do good?
So, are most people good? I think it's just super dependent on the circumstances that someone is in. I think that the actions people take, and their moral worth, are just much more dependent on circumstance than on someone's intrinsic character.
So is there evil within all of us? It seems, as in The Better Angels of Our Nature, there's a tendency of us as a society to tend towards good, less war, with all these metrics. Is that us becoming who we want to be, or is that some kind of societal force? What's the nature-versus-nurture thing here?
Yeah, so in that case, violence has massively declined over time. I think that's a slow process of cultural evolution and institutional evolution, such that now the incentives for you and I to be violent are very, very small indeed. In contrast, when we were hunter-gatherers, the incentives were quite large. If there was someone who was potentially disturbing the social order in a hunter-gatherer setting, there was a very strong incentive to kill that person, and people did; it's been estimated that around 10% of deaths among hunter-gatherers were murders.
After hunter-gatherers, when you have actual societies, is when violence can arguably go up, because there's more incentive to do mass violence, right? To take over, conquer other people's lands and murder everybody in place, and so on.

Yeah, I mean, I think the total death rate from human causes does go down, but you're right that if you're in a hunter-gatherer situation, the group that you're part of is very small, so you can't have massive wars; those massive communities just don't exist.
But anyway, the second question: do you think most people want to do good?

Yeah, I think that is true for most people. I think you see that in the fact that most people donate, and a large proportion of people volunteer. If you give people opportunities to easily help other people, they will take them. But at the same time, we're a product of our circumstances, and if it were more socially rewarded to be doing more good, if it were more socially rewarded to do good effectively rather than not effectively, then we would see that behavior a lot more.
So why should we do good?

My answer to this is that there's no deeper level of explanation. My answer to why you should do good is: well, there is someone whose life is on the line, for example, whose life you can save by donating just a few thousand dollars to an effective nonprofit like the Against Malaria Foundation. That is a sufficient reason to do good. And then if you ask, well, why ought I to do that? I just show you the same facts again. It's that fact that is the reason to do good. There's nothing more fundamental than that.
I'd like to make more concrete the thing we're trying to make better. So you just mentioned malaria; there's a huge amount of suffering in the world. Is that what we're trying to remove? Is the first step, not ultimately the goal, to remove the worst of the suffering, so there's some kind of threshold of suffering that we want to make sure does not exist in the world? Or do we really want to take a much further step and look at things like income inequality: not just getting everybody above a certain threshold, but making sure that, broadly speaking, there's less injustice in the world, unfairness, in some definition, of course, it's very difficult to define fairness.
Yeah, so the metric I use is: how many people do we affect, and by how much do we affect them? Often that means eliminating suffering, but it doesn't have to; it could be helping promote a flourishing life instead. And so if I were comparing reducing income inequality with getting people from the very pits of suffering to a higher level, the question I would ask is just a quantitative one: if I do the first thing or the second thing, how many people am I going to benefit, and by how much am I going to benefit them? Am I going to move one person from 0% well-being to 10% well-being? Perhaps that's just not as good as moving a hundred people from 10% well-being to 50% well-being.
And the idea of diminishing returns is that when you're in terrible poverty, the $1 that you give goes much further than if you were in the middle class in the United States.

And this fact is really striking. So if you take even just quite a conservative estimate of how we are able to turn money into well-being, the economists model it as a log curve, that or steeper. That means that any proportional increase in your income has the same impact on your well-being. And so someone moving from $1,000 a year to $2,000 a year gets the same benefit as someone moving from $100,000 a year to $200,000 a year. And then when you combine that with the fact that we, middle-class members of rich countries, are a hundred times richer, in financial terms, than the global poor, that means we can do a hundred times as much to benefit the poorest people in the world as we can to benefit people of our income level. And that's an astonishing fact.
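The log-curve claim above can be checked with a little arithmetic. This is a minimal sketch, not anything from the conversation itself: the `wellbeing` function and the specific incomes are assumptions chosen only to illustrate that equal proportional income increases give equal well-being gains, and that a marginal dollar goes roughly 100x further for someone 100x poorer.

```python
import math

def wellbeing(income):
    # Toy log-utility model: well-being grows with the logarithm of income,
    # matching the "log curve" description in the conversation.
    return math.log(income)

# Equal proportional increases give equal well-being gains:
gain_poor = wellbeing(2_000) - wellbeing(1_000)      # doubling from $1k/yr
gain_rich = wellbeing(200_000) - wellbeing(100_000)  # doubling from $100k/yr

# The marginal well-being of one extra dollar is 1/income (derivative of log),
# so a dollar given to someone 100x poorer does ~100x as much good:
ratio = (1 / 1_000) / (1 / 100_000)
```

Both `gain_poor` and `gain_rich` come out to ln(2), and `ratio` is 100, which is the "hundred times as much benefit" claim in the quoted argument.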
Yeah, it's quite incredible. A lot of these facts and ideas are just difficult to think about, because there's an overwhelming amount of suffering in the world, and even acknowledging it is difficult. I'm not exactly sure why that is.

I mean, it's difficult because you have to bring it to mind. It's an unpleasant experience, thinking about other people's suffering. It's unpleasant to be empathizing with it, firstly. And then secondly, thinking about it means that maybe we'd have to change our lifestyles, and if you're very attached to the income that you've got, perhaps you don't want to be confronted with ideas or arguments that might cause you to use some of that money to help others. So it's quite understandable in psychological terms, even if it's not the right thing that we ought to be doing.
So how can we do better? How can we be more effective? How does data help? In general, how can we do better?

It's definitely hard, and we have spent the last 10 years engaged in some deep research projects to try to answer two questions. One is: of all the many problems the world is facing, which problems ought we to be focused on? There we use this idea of focusing on problems that are the biggest in scale, that are the most tractable, where we can make the most progress on the problem, and that are the most neglected. And then, within the problems we judge to be the most pressing, what are the things that have the best evidence, or are our best guess, for doing the most good? And so we have a bunch of organizations. GiveWell, for example, is focused on global health and development and has a list of seven top recommended charities.
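The scale, tractability, and neglectedness comparison just described can be sketched as a toy scoring heuristic. The problem names and factor scores below are invented purely for illustration, and multiplying the three factors is one common way to combine them, not necessarily how any EA organization actually scores causes:

```python
# Hypothetical cause areas with invented 1-10 scores on each factor.
problems = {
    "problem_a": {"scale": 8, "tractability": 5, "neglectedness": 2},
    "problem_b": {"scale": 6, "tractability": 7, "neglectedness": 9},
}

def priority(scores):
    # Multiplying the factors means a problem that is weak on any one
    # dimension (e.g. already crowded, so low neglectedness) scores low overall.
    return scores["scale"] * scores["tractability"] * scores["neglectedness"]

ranked = sorted(problems, key=lambda name: priority(problems[name]), reverse=True)
```

Here `problem_a` is bigger in scale, but `problem_b` wins on the combined score because it is far more neglected, which is the intuition behind prioritizing neglected problems.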
So the idea in general, and sorry to interrupt, and we'll talk about poverty and animal welfare and existential risk, those are all fascinating topics, but in general the idea is there should be a group, sorry, there are a lot of groups, that seek to convert money into good. And then on top of that you want to have an accounting of how well they actually perform that conversion, how well they did in converting money to good: a ranking of these different groups, a ranking of these charities. So does that apply across basically all aspects of effective altruism? There should be a group of people, they should report on certain metrics of how well they've done, and you should only give your money to groups that do a good job.
That's the core idea. I'd make two comments. One is just that it's not just about money. We're also trying to encourage people to work in areas where they'll have the biggest impact. And some areas are, you know, really people-heavy but money-poor; other areas are money-rich and people-poor. And so whether it's better to focus time or money depends on the cause area. And the second is that you mentioned metrics, and while that's the ideal, and in some areas we are able to get somewhat quantitative information about how much impact an area is having, that's not always true. For some of the issues, like you mentioned, existential risks, we're not able to measure in any sort of precise way how much progress we're making, and so you have to instead fall back on rigorous argument and evaluation, even in the absence of data.
So let's first linger on your own story for a second. How do you yourself practice effective altruism in your own life? Because I think that's a really interesting place to start.

So I've tried to build effective altruism into at least many components of my life. On the donation side, my plan is to give away most of my income over the course of my life. I've set a bar I feel happy with, and I just donate above that bar. So at the moment, I donate about 20% of my income. Then on the career side, I've also shifted what I do, where I was initially planning to work on very esoteric topics in the philosophy of logic, philosophy of language, things that are intellectually extremely interesting, but where the path by which they really make a difference to the world is, let's just say, very unclear at best. And so I switched instead to researching ethics, to actually just working on this question of how we can do as much good as possible. And then I've also spent a very large chunk of my life over the last 10 years creating a number of nonprofits, which again in different ways are tackling this question of how we can do the most good, and helping them to grow over time too.
Yeah, we mentioned a few of them; with the career selection, 80,000 Hours is a really interesting group. So maybe also just a quick pause on the origins of effective altruism. Can you paint a picture of who the key figures are, including yourself, in the effective altruism movement today?
Yeah, there are two main strands that came together to form the effective altruism movement. One was two philosophers, myself and Toby Ord, at Oxford. We had been very influenced by the work of Peter Singer, an Australian moral philosopher who had argued for many decades that, because one can do so much good at such little cost to oneself, we have an obligation to give away most of our income to benefit those in extreme poverty, in just the same way that we have an obligation to run in and save a child drowning in a shallow pond, even if it would ruin your suit that cost a few thousand dollars. And we set up Giving What We Can in 2009, which encourages people to give at least 10% of their income to the most effective charities. And the second main strand was the formation of GiveWell, which was originally based in New York and started in about 2007. That was set up by Holden Karnofsky and Elie Hassenfeld, who were two hedge fund dudes who were making good money and thinking, well, where should I donate? In the same way as, if they wanted to buy a product for themselves, they would look at Amazon reviews, they asked, well, what are the best charities? They found there just weren't really good answers to that question, certainly not answers they were satisfied with, and so they formed GiveWell in order to try to work out what are the charities where they could have the biggest impact. And then from there, and some other influences, the community grew and spread.
Can we explore the philosophical and political space that effective altruism occupies a little bit? From the little that I've read of Ayn Rand's work in my own lifetime, it's interesting to put her philosophy of Objectivism in contrast with effective altruism. It espouses selfishness as the best thing you can do. But it's not actually against altruism; it's just that you have that choice, but you should be selfish in it, right? Or not, maybe you can disagree here. So it can be viewed as the complete opposite of effective altruism, or it can be viewed as similar, because the word effective is really interesting: if you want to do good, then you should be damn good at doing good, right? I think that would fit within the morality that's defined by Objectivism. So do you see a connection between these two philosophies, and others perhaps, in this complicated space of beliefs that effective altruism is positioned as opposing or aligned with?
I would definitely say that objectivism, Ayn Rand's philosophy,
link |
is a philosophy that's quite fundamentally opposed to effective altruism.
link |
Insofar as Ayn Rand's philosophy is about championing egoism,
link |
and saying, I'm never quite sure whether the philosophy is meant to say
link |
that you just ought to do whatever will best benefit yourself,
link |
that's ethical egoism, no matter what the consequences are,
link |
or, second, there's this alternative view, which is, well,
link |
you ought to try and benefit yourself because that's actually the best way
link |
of benefiting society.
link |
Certainly, in Atlas Shrugged, she is presenting her philosophy
link |
as a way that's actually going to bring about a flourishing society.
link |
And if it's the former, then well, effective altruism is all about promoting
link |
the idea of altruism and saying, in fact,
link |
we ought to really be trying to help others as much as possible.
link |
So it's opposed there.
link |
And then on the second side, I would just dispute the empirical premise.
link |
It would seem, given the major problems in the world today,
link |
it would seem like this remarkable coincidence,
link |
quite suspicious, one might say, if benefiting myself was actually
link |
the best way to bring about a better world.
link |
So on that point, and I think that connects also with career selection
link |
that we'll talk about, but let's consider not objectivism, but capitalism.
link |
And the idea that you focusing on the thing that you are damn good at,
link |
whatever that is, may be the best thing for the world.
link |
Part of it is also mindset, right?
link |
The thing I love is robots.
link |
So maybe I should focus on building robots
link |
and never even think about the idea of effective altruism,
link |
which is kind of the capitalist notion.
link |
Is there any value in that idea in just finding the thing you're good at
link |
and maximizing your productivity in this world
link |
and thereby sort of lifting all boats and benefiting society as a result?
link |
Yeah, I think there's two things I'd want to say on that.
link |
So one is what your comparative advantage is,
link |
what your strengths are when it comes to career.
link |
That's obviously super important because there's lots of career paths
link |
I would be terrible at. If I thought being an artist was the best thing one could do,
link |
well, I'd be doomed, just really quite astonishingly bad.
link |
And so I do think, at least within the realm of things that could plausibly be very high impact,
link |
choose the thing that you think you're going to be able to really be passionate at
link |
and excel at over the long term.
link |
Then on this question of should one just do that in an unrestricted way
link |
and not even think about what the most important problems are?
link |
I do think that in a kind of perfectly designed society, that might well be the case.
link |
That would be a society where we've corrected all market failures,
link |
we've internalized all externalities,
link |
and then we've managed to set up incentives such that people just pursuing their own strengths
link |
is the best way of doing good.
link |
But we're very far from that society.
link |
So if one did that, then it would be very unlikely that you would focus
link |
on improving the lives of nonhuman animals that aren't participating in markets
link |
or ensuring the long run future goes well,
link |
where future people certainly aren't participating in markets
link |
or benefiting the global poor who do participate,
link |
but have so much less kind of power from a starting perspective
link |
that their views aren't accurately kind of represented by market forces too.
link |
So yeah, pure, by-definition capitalism
link |
just may very well ignore the people that are suffering the most,
link |
a wide swath of them.
link |
So if you could allow me this line of thinking here.
link |
So I've listened to a lot of your conversations online.
link |
I find, if I can compliment you, they're very interesting conversations.
link |
Your conversation on Rogan, on Joe Rogan was really interesting,
link |
with Sam Harris and so on, whatever.
link |
There's a lot of stuff that's really good out there.
link |
And yet, when I look at the internet and I look at YouTube,
link |
which has certain mobs, certain swaths of right leaning folks,
link |
whom I dearly love.
link |
I love all people, especially people with ideas.
link |
They seem to not like you very much.
link |
So I don't understand why exactly.
link |
So my own sort of hypothesis is there is a right-left divide
link |
that is absurdly caricatured in politics,
link |
at least in the United States.
link |
And maybe you're somehow pigeonholed into one of those sides.
link |
And maybe that's what it is.
link |
Maybe your message is somehow politicized.
link |
How do you make sense of that?
link |
Because you're extremely interesting.
link |
Like you got the comments I see on Joe Rogan.
link |
There's a bunch of negative stuff.
link |
And yet, if you listen to it, the conversation is fascinating.
link |
I'm not speaking, I'm not some kind of lefty extremist,
link |
but just it's a fascinating conversation.
link |
So why are you getting some small amount of hate?
link |
So I'm actually pretty glad that Effective Altruism has managed
link |
to stay relatively unpoliticized because I think the core message
link |
to just use some of your time and money to do as much good as possible
link |
to fight some of the problems in the world can be appealing
link |
across the political spectrum.
link |
And we do have a diversity of political viewpoints among people
link |
who have engaged in Effective Altruism.
link |
We do, however, get some criticism from the left and the right.
link |
What's the criticism?
link |
Both would be interesting to hear.
link |
Yeah, so criticism from the left is that we're not focused enough
link |
on dismantling the capitalist system that they see as the root
link |
of most of the problems that we're talking about.
link |
And there I kind of disagree, partly on the premise, where I don't
link |
think the relevant alternative systems would serve the animals or the
link |
global poor or future generations much better.
link |
And then also on the tactics, where I think there are particular ways
link |
we can change society that would, you know,
link |
be massively beneficial on those things, that don't go via dismantling
link |
the entire system, which is perhaps a million times harder to do.
link |
Then criticism on the right, there's definitely like in response
link |
to the Joe Rogan podcast.
link |
There definitely were a number of Ayn Rand fans who weren't keen
link |
on the idea of promoting altruism.
link |
There was a remarkable set of ideas.
link |
Just the idea that Effective Altruism was unmanly, I think, was
link |
driving a lot of criticism.
link |
Okay, so I love fighting.
link |
I've been in street fights my whole life.
link |
I'm as alpha in everything I do as it gets.
link |
And the fact that Joe Rogan said that I thought Scent of a Woman
link |
is a better movie than John Wick put me into this beta category
link |
amongst people who are like basically saying this, yeah, unmanly
link |
or it's not tough.
link |
It's not some principled view of strength that's being represented.
link |
So actually, so how do you think about this?
link |
Because to me, altruism, especially Effective Altruism, I don't
link |
know what the female version of that is, but on the male side, manly
link |
as fuck, if I may say so.
link |
So how do you think about that kind of criticism?
link |
I think people who would make that criticism are just occupying
link |
a like state of mind that I think is just so different from my
link |
state of mind that I kind of struggle to maybe even understand it
link |
where if something's manly or unmanly or feminine or unfeminine,
link |
I'm like, I don't care.
link |
Like, is it the right thing to do or the wrong thing to do?
link |
So let me put it not in terms of man or woman.
link |
I don't think that's useful, but I think there's a notion of acting
link |
out of fear as opposed to out of principle and strength.
link |
Here's something that I do feel as an intuition and that I think
link |
drives some people who do find Ayn Rand attractive and so on
link |
as a philosophy, which is a kind of taking control of your own
link |
life and having power over how you're steering your life and not
link |
kind of kowtowing to others, you know, really thinking things through.
link |
I find like that set of ideas just very compelling and inspirational.
link |
I actually think effective altruism has really, you know, that
link |
side of my personality,
link |
it's like scratched that itch, where you are just not taking the kind
link |
of priorities that society is giving you for granted.
link |
Instead, you're choosing to act in accordance with the priorities
link |
that you think are most important in the world.
link |
And often that involves then doing quite unusual things from a
link |
societal perspective, like donating a large chunk of your earnings
link |
or working on these weird issues about AI and so on that other
link |
people might not understand.
link |
Yeah, I think that's a really gutsy thing to do.
link |
That is taking control.
link |
That's at least at this stage.
link |
I mean, that's you taking ownership, not of just yourself, but
link |
your presence in this world that's full of suffering and saying
link |
as opposed to being paralyzed by that notion is taking control
link |
and saying I could do something.
link |
Yeah, I mean, that's really powerful.
link |
But I mean, sort of the one thing I personally hate about the
link |
left currently, that I think those folks detect, is the social
link |
signaling. When you look at yourself, sort of late at night, would
link |
you do everything you're doing in terms of effective altruism if
link |
your name, because you're quite popular, but if your name was
link |
totally unattached to it, so if it was in secret?
link |
Yeah, I mean, I think I would.
link |
To be honest, I think the kind of popularity is like, you know,
link |
it's a mixed bag, but there are serious costs.
link |
And I don't particularly, I don't like love it.
link |
Like, it means you get all these people calling you a cuck online.
link |
It's like not the most fun thing.
link |
But you also get a lot of sort of brownie points for doing good.
link |
But I think in my ideal life, I would be, like, in some library solving
link |
logic puzzles all day and I'd really be learning maths,
link |
and have, like, a good body of friends and so on.
link |
So your instinct for effective altruism is something deep.
link |
It's not one that is communicating
link |
socially. It's more in your heart.
link |
You want to do good for the world.
link |
Yeah, I mean, so we can look back to early Giving What We Can.
link |
So, you know, we're setting this up, me and Toby.
link |
And I really thought that doing this would be a big hit to my
link |
academic career because I was now spending, you know, at that time
link |
more than half my time setting up this nonprofit at the crucial
link |
time when you should be like producing your best academic work
link |
And it was also the case at the time.
link |
It was kind of like the Toby Ord club.
link |
You know, he was he was the most popular.
link |
There's this personal interest story about him and his plans to
link |
donate. And sorry to interrupt, but Toby was donating a large
link |
amount. Can you tell just briefly what he was doing?
link |
Yeah, so he made this public commitment to give everything
link |
he earned above 20,000 pounds per year to the most effective
link |
causes. And even as a graduate student, he was still donating
link |
about 15, 20% of his income, which is quite significant
link |
given that graduate students are not known for being super wealthy.
link |
That's right. And when we launched Giving What We Can, the
link |
media just loved this as like a personal interest story.
link |
So the story about him and his pledge was the most, yeah, it
link |
was actually the most popular news story of the day.
link |
And we kind of ran the same story a year later and it was
link |
the most popular news story of the day a year later too.
link |
And so it really was kind of several years later that I
link |
was also giving more talks and starting to do more
link |
writing, and then especially when, you know, I wrote this book,
link |
Doing Good Better, there started to be kind of attention
link |
and so on. But deep inside, your own relationship with effective
link |
altruism was, I mean, it had nothing to do with the publicity.
link |
Did you see yourself?
link |
How did the publicity connect with it?
link |
Yeah, I mean, that's kind of what I'm saying is I think the
link |
publicity came like several years afterwards.
link |
I mean, at the early stage when we set up Giving What We Can,
link |
it was really just every person we get to pledge 10% is, you
link |
know, something like $100,000 over their lifetime.
link |
And so, we had started with 23 members, and every single
link |
person was just this, like, kind of huge accomplishment.
link |
And at the time, I just really thought, you know, maybe over
link |
time we'll have a hundred members and that'll be like amazing.
link |
Whereas now we have, you know, over four thousand members and one and
link |
a half billion dollars pledged.
link |
That's just unimaginable to me at the time when I was first kind
link |
of getting this, you know, getting the stuff off the ground.
link |
So can we talk about poverty and the biggest problems that you
link |
think effective altruism can attack in the near term, one by
link |
one. So poverty obviously is a huge one.
link |
Yeah. How can we help?
link |
So poverty, absolutely, is this huge problem.
link |
700 million people are in extreme poverty, living on less than two
link |
dollars per day, where what that means is what two dollars
link |
would buy in the US.
link |
So think about that.
link |
It's like some rice, maybe some beans.
link |
It's very, you know, really not much.
link |
And at the same time, we can do an enormous amount to improve
link |
the lives of people in extreme poverty.
link |
So the things that we tend to focus on are interventions in global
link |
health, and that's for a few reasons.
link |
One is, like, global health just has this amazing track record:
link |
life expectancy globally is up 50% relative to 60 or 70 years
link |
ago. We've eradicated smallpox, which killed 2 million
link |
people every year, and almost eradicated polio.
link |
Second is that we just have great data on what works when it
link |
comes to global health.
link |
So we just know that bed nets protect children and prevent
link |
them from dying from malaria.
link |
And then the third is just that it's extremely cost effective.
link |
So it costs $5 to buy one bed net, which protects two children for
link |
two years against malaria.
link |
If you spend about $3,000 on bed nets, then statistically
link |
speaking, you're going to save a child's life.
link |
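That back-of-the-envelope arithmetic can be written out explicitly. This is an illustrative sketch using the round numbers quoted in the conversation, not GiveWell's current cost-effectiveness estimates:

```python
# Back-of-the-envelope bed-net arithmetic, using the round numbers
# quoted in the conversation (illustrative only, not GiveWell's
# current estimates).
cost_per_net = 5              # dollars per insecticide-treated net
children_per_net = 2          # children protected by one net
years_of_protection = 2       # years each net lasts
cost_per_life_saved = 3000    # dollars per statistical life saved

nets_bought = cost_per_life_saved // cost_per_net
children_protected = nets_bought * children_per_net

print(f"{nets_bought} nets bought")            # 600 nets bought
print(f"{children_protected} children protected for ~{years_of_protection} years")
```

So $3,000 buys roughly 600 nets, covering about 1,200 children for a couple of years, over which, statistically, one child's life is saved.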
And there are other interventions too.
link |
And so given that people are in such suffering and we have this
link |
opportunity to, you know, do such huge good for such low cost.
link |
Well, yeah, why not?
link |
So the individual.
link |
So for me today, if I wanted to look at poverty, how would
link |
I help? And I want to say, I think donating 10% of your
link |
income is a very interesting idea, or some percentage,
link |
setting a bar and sort of sticking to it.
link |
How do we then take the step towards the effective part?
link |
So you've conveyed some notions, but who do you give the money to?
link |
So GiveWell, this organization I mentioned, well, it makes
link |
charity recommendations and some of its top recommendations.
link |
So Against Malaria Foundation is this organization that buys
link |
and distributes these insecticide-treated bed nets.
link |
And then it has a total of seven charities that it recommends
link |
very highly. So that recommendation, is it almost like a stamp
link |
of approval or is there some metrics?
link |
So what are the ways that GiveWell conveys that this is a
link |
great charity organization?
link |
So GiveWell is looking at metrics and it's trying to compare
link |
charities ultimately in the number of lives that you can save
link |
or an equivalent benefit.
link |
So one of the charities it recommends is GiveDirectly, which
link |
simply just transfers cash to the poorest families, where a poor
link |
family will get a cash transfer of $1,000, and they kind of
link |
regard that as the baseline intervention because it's so simple,
link |
and people, you know, know what to do with it to benefit
link |
themselves. That's quite powerful, by the way.
link |
So before GiveWell, before the Effective Altruism movement,
link |
I imagine there was a huge amount of corruption, funny
link |
enough, in charity organizations, or misuse of money.
link |
So there was nothing like GiveWell before that?
link |
I mean, there were some.
link |
So, I mean, the charity corruption, I mean, obviously
link |
there's some, I don't think it's a huge issue.
link |
They're also just focusing on the wrong things. Prior to GiveWell,
link |
there were some organizations like Charity Navigator, which
link |
were more aimed at worrying about corruption and so on.
link |
So they weren't saying, these are the charities where you're
link |
going to do the most good. Instead, it was like, how good
link |
are the charity's financials?
link |
How good is its health?
link |
Are they transparent? And yeah, so that would be more useful
link |
for weeding out some of the worst charities.
link |
So GiveWell has just taken a step further, sort of in this
link |
21st century of data.
link |
It's actually looking at the effective part.
link |
Yeah. So it's like, you know, if you know The Wirecutter:
link |
if you want to buy a pair of headphones, they will just look
link |
at all the headphones and be like, these are the best headphones
link |
That's the idea with GiveWell.
link |
So do you think there's a bar of what suffering is?
link |
And do you think one day we can eradicate suffering in our world?
link |
Let's talk humans for now. Talk humans.
link |
But in general, yeah, actually.
link |
So there's a colleague of mine who coined the term abolitionism
link |
for the idea that we should just be trying to abolish
link |
suffering. And in the long run, I mean, I don't expect it
link |
anytime soon, but I think we can.
link |
I think that would require, you know, quite
link |
drastic changes to the way society is structured, and perhaps,
link |
you know, in fact, even changes to human
link |
nature. But I do think that suffering, whenever it occurs,
link |
is bad and we should want it to not occur.
link |
So there's a line.
link |
I romanticize some aspects of suffering.
link |
There's a gray area between
link |
struggle and suffering.
link |
So one, do we want to eradicate all struggle in the world?
link |
So there's an idea, you know, that the human condition
link |
inherently has suffering in it and it's a creative force.
link |
It's a struggle of our lives and we somehow grow from that.
link |
How do you think about, how do you think about that?
link |
I agree that's true.
link |
So, you know, often, you know, great artists can be also
link |
suffering from, you know, major health conditions or depression
link |
and so on. They come from abusive parents.
link |
Most great artists, I think, come from abusive parents.
link |
Yeah, that seems to be at least commonly the case, but I
link |
want to distinguish between suffering as being instrumentally
link |
good, you know, it causes people to produce good things, and
link |
whether it's intrinsically good, and I think intrinsically it's bad.
link |
And so if we can produce these, you know, great achievements
link |
via some other means where, you know, if we look at the
link |
scientific enterprise, we've produced incredible things
link |
often from people who aren't suffering, have, you know,
link |
pretty good lives.
link |
They're just, they're driven instead of, you know, being
link |
pushed by a certain sort of anguish.
link |
They're being driven by intellectual curiosity.
link |
If we can instead produce a society where it's all carrot
link |
and no stick, that's better from my perspective.
link |
Yeah, but I'm going to disagree with the notion that that's
link |
possible, but I would say most of the suffering in the world
link |
is not productive.
link |
So I would dream of effective altruism curing that suffering.
link |
Yeah, but then I would say that there is some suffering that
link |
is productive that we want to keep. But that's not even the
link |
focus, because most of the suffering is just
link |
absurd and needs to be eliminated.
link |
So let's not even romanticize this notion I have,
link |
but nevertheless struggle has some kind of inherent value,
link |
to me at least. You're right.
link |
There's some elements of human nature that also have to
link |
be modified in order to cure all suffering.
link |
Yeah, I mean, there's an interesting question of whether suffering is needed at all.
link |
So at the moment, you know, most of the time we're kind
link |
of neutral and then we burn ourselves and that's negative
link |
and that's really good that we get that negative signal
link |
because it means we won't burn ourselves again.
link |
There's a question like, could you design agents, humans, such
link |
that you're not hovering around the zero level, you're hovering around bliss.
link |
Yeah, and then you touch the flame and you're like, oh no,
link |
you're just in slightly worse bliss.
link |
Yeah, but that's really bad compared to the bliss you
link |
were normally in so that you can have like a gradient of
link |
bliss instead of, like, pain and pleasure.
link |
On that point, I think there's a really important point about the experience
link |
of suffering, the relative nature of it.
link |
Maybe having grown up in the Soviet Union, we were quite poor
link |
by any measure in my childhood,
link |
but it didn't feel like we were poor, because everybody around
link |
us was poor. And then in America, I for
link |
the first time began to feel poor.
link |
Yeah, because it's different there.
link |
There are some cultural aspects to it that really emphasize
link |
that it's good to be rich.
link |
And then there's just the notion that there is a lot of
link |
income inequality and therefore you experience that inequality.
link |
That's where the suffering comes in.
link |
So what do you think about the inequality of suffering
link |
that we have to think about? Do you think we have to
link |
think about that as part of effective altruism?
link |
Yeah, I think things just vary in terms of whether
link |
you get benefits or costs from them in relative terms
link |
or in absolute terms.
link |
So a lot of the time, yeah, there's this hedonic treadmill,
link |
where, you know, money is useful
link |
because it helps you buy things,
link |
but there's also a status component too,
link |
and that status component is kind of zero sum. As you were
link |
saying, like in Russia, you know, no one else felt poor
link |
because everyone around you was poor.
link |
Whereas now you've got these other people who are,
link |
you know, super rich, and maybe that makes you feel,
link |
you know, less good about yourself.
link |
There are some other things however, which are just
link |
intrinsically good or bad.
link |
So commuting for example, it's just people hate it.
link |
It doesn't really change; knowing that other people are
link |
commuting too doesn't make it any less bad.
link |
But sort of to push back on that for a second.
link |
I mean, yes, but also if some people were, you know, on
link |
horseback, your commute on the train might feel a lot better.
link |
Yeah, you know, there is a relative nature to it.
link |
I mean, everybody's complaining about society today, forgetting
link |
how much better it is, the better angels of
link |
our nature, how technology is fundamentally
link |
improving most of the world's lives.
link |
Yeah, and actually there's some psychological research
link |
on the well being benefits of volunteering where people
link |
who volunteer tend to just feel happier about their lives
link |
and one of the suggested explanations is it because it
link |
extends your reference class.
link |
So no longer are you comparing yourself to the Joneses, who
link |
have their slightly better car, because you realize that
link |
there are, you know, people in much worse conditions than you, and
link |
so now, you know your life doesn't seem so bad.
link |
That's actually on the psychological level.
link |
One of the fundamental benefits of effective altruism.
link |
Yeah, it is. I mean, I guess it's the altruism part of
link |
effective altruism: exposing yourself to the suffering
link |
in the world allows you to be more,
link |
yeah, happier, and actually allows you, in a sort of
link |
meditative, introspective way, to realize that you don't need
link |
most of the wealth you have to be happy.
link |
I mean, I think effective altruism has been this huge
link |
benefit for me, and I really don't think that if I had
link |
more money to live on, that would change
link |
my level of well-being at all.
link |
Whereas engaging in something that I think is meaningful
link |
that I think is steering humanity in a positive direction.
link |
That's extremely rewarding.
link |
And so, yeah, I mean, despite my best attempts at sacrifice,
link |
um, I don't, you know, I think I've actually ended up
link |
happier as a result of engaging in effective altruism
link |
than I would have otherwise.
link |
That's such an interesting idea.
link |
Yeah, so let's let's talk about animal welfare.
link |
Sure, easy question. What is consciousness?
link |
Yeah, especially as it has to do with the capacity to
link |
suffer. I think there seems to be a connection between
link |
how conscious something is, the amount of consciousness,
link |
and its ability to suffer, and that all comes into play
link |
in us thinking about how much suffering there is in the
link |
world with regard to animals.
link |
So how do you think about animal welfare and consciousness?
link |
Well consciousness easy question.
link |
Um, yeah, I mean, I think we don't have a good understanding
link |
of it. And by consciousness,
link |
I mean what it feels like to be you, the subjective
link |
experience, which seems to be different from everything
link |
else we know about in the world.
link |
Yeah, I think it's clear.
link |
It's very poorly understood at the moment.
link |
I think it has something to do with information processing.
link |
So the fact that the brain is a computer or something like that.
link |
So that would mean that very advanced AI could be conscious,
link |
that information processors in general could be conscious,
link |
with some suitable complexity.
link |
It's a question whether greater complexity creates some
link |
kind of greater consciousness, which relates to animals.
link |
If it's an information processing system and it's
link |
smaller and smaller, is an ant less conscious than a cow,
link |
less conscious than a monkey?
link |
Yeah, and again, this is a super hard question, but I think my
link |
best guess is yes. Like, if I think, well, consciousness,
link |
it's not some magical thing that appears out of nowhere.
link |
It's not, you know, Descartes thought it just comes in
link |
from this other realm and then enters through the pineal
link |
gland in your brain, and that's the kind of soul, and that's conscious.
link |
So it's got something to do with what's going on in your brain.
link |
A chicken has one three hundredth of the size of the brain
link |
that you have. Ants,
link |
I don't know how small it is.
link |
Maybe it's a millionth the size. My best guess, which I may
link |
well be wrong about because this is so hard, is that in some
link |
relevant sense the chicken is experiencing consciousness
link |
to a lesser degree than the human, and the ant significantly less.
link |
I don't think it's as little as one three hundredth as much.
link |
I think, for everyone who's ever seen a chicken,
link |
there's evolutionary reasons for thinking that, like, the
link |
ability to feel pain comes on the scene relatively early
link |
on, and we have lots of our brain that's dedicated to stuff
link |
that doesn't seem to have anything to do with
link |
consciousness: language processing and so on.
link |
So there's a lot of complicated
link |
questions there that we can't ask the animals about, but
link |
it seems that there are easy questions in terms of suffering,
link |
which is things like factory farming, that could be addressed.
link |
Yeah, is that the lowest hanging fruit,
link |
if I may use crude terms here, of animal welfare?
link |
I think that's the lowest hanging fruit.
link |
So at the moment we raise and kill about 50 billion
link |
animals every year.
link |
So how many? 50 billion?
link |
Yeah, so for every human on the planet, several times that
link |
number are being killed, and the vast majority of them are
link |
raised in factory farms where basically whatever your view
link |
on animals, I think you should agree even if you think well,
link |
maybe it's not bad to kill an animal.
link |
Maybe it's fine if the animal was raised in good conditions, but that's
link |
just not the empirical reality.
link |
The empirical reality is that they are kept in incredibly bad conditions.
link |
They are debeaked or de-tailed without anesthetic;
link |
you know, chickens often peck each other to death
link |
otherwise because they're under such stress.
link |
It's really, you know, I think when a chicken gets killed,
link |
that's the best thing that happens to the chicken in the
link |
course of its life. And it's also completely unnecessary.
link |
This is in order to save, you know, a few pence off the price
link |
of meat or the price of eggs. And we have indeed found it's also
link |
just inconsistent with consumer preferences as well: people
link |
who buy the products, when you
link |
do surveys, are extremely against suffering in factory farms.
link |
It's just they don't appreciate how bad it is and, you know,
link |
just tend to go with the easy options.
link |
And so the most effective programs I know of
link |
at the moment are nonprofits that go to companies and work
link |
with companies to get them to take a pledge to cut certain
link |
sorts of animal products like eggs from cage confinement
link |
out of their supply chain.
link |
And it's now the case that the top 50 food retailers and
link |
fast food companies have all made these kinds of cage free
link |
pledges, and when you do the numbers, you get the conclusion
link |
that every dollar you give to these nonprofits results
link |
in hundreds of chickens being spared from cage confinement.
link |
And they're now working on other types of animals and
link |
other products too.
link |
So is that the most effective way to do it, to have a ripple
link |
effect, essentially, as opposed to directly having regulation
link |
from on top that says you can't do this?
link |
So I would be more open to the regulation approach, but
link |
at least in the US there's quite intense regulatory capture
link |
from the agricultural industry.
link |
And so the attempts we've seen to try and change regulation
link |
have been a real uphill struggle.
link |
There are some examples of ballot initiatives where
link |
people have been able to vote to say we want
link |
to ban eggs from caged conditions, and that's been huge.
link |
That's been really good, but beyond that it's much more
link |
limited. So I've been really interested in the idea of
link |
hunting in general and wild animals, and seeing nature as
link |
a form of cruelty that I am ethically more okay with,
link |
just from my perspective. And then I read about wild
link |
animal suffering. I'm just giving you the
link |
notion of how I felt, because
link |
factory farming is so bad.
link |
Yeah, that living in the woods seemed good.
link |
Yeah, and yet when you actually start to think about it,
link |
I mean, all of the animals in the wild are
link |
living in, like, terrible poverty, right?
link |
Yeah, so you have all the medical conditions, all of that.
link |
I mean, they're living horrible lives
link |
that could be improved.
link |
That's a really interesting notion that I think may not
link |
even be useful to talk about because factory farming is
link |
such a big thing to focus on.
link |
Yeah, but it's nevertheless an interesting notion to think
link |
of all the animals in the wild as suffering in the same
link |
way that humans in poverty are suffering.
link |
Yeah, I mean, often even worse. So many animals
link |
reproduce by r-selection:
link |
you have a very large number of offspring in the expectation
link |
that only a small number survive.
link |
And so for those animals, almost all of them just live short
link |
lives in which they starve to death.
link |
So yeah, there are huge amounts of suffering in nature, and
link |
I don't think we should, you know, pretend that it's some kind
link |
of wonderful paradise for most animals.
link |
Yeah, their life is filled with hunger and fear and disease.
link |
Yeah, I'd agree with you entirely that when it comes
link |
to focusing on animal welfare, we should focus on factory
link |
farming, but yeah, we should also be aware of the reality
link |
of what life for most animals is like.
link |
So let's talk about a topic I've talked a lot about, and
link |
that you've actually quite eloquently talked about: the
link |
third priority that effective altruism considers to be really
link |
important, existential risks.
link |
Yeah, when you think about the existential risks that
link |
are facing our civilization, what's before us?
link |
What concerns you?
link |
What should we be thinking about, especially
link |
from an effective altruism perspective?
link |
Great. So the reason I started getting concerned about
link |
this was thinking about future generations where the key
link |
idea is just well future people matter morally.
link |
There are vast numbers of future people.
link |
If we don't cause our own extinction, there's no reason
link |
why civilization might not last a million years,
link |
if we last as long as a typical mammalian species,
link |
or a billion years, which is when the Earth is no longer habitable,
link |
or, if we can take to the stars, then perhaps it's trillions
link |
of years beyond that.
link |
So the future could be very big indeed and it seems like
link |
we're potentially very early on in civilization.
link |
Then the second idea is just, well, maybe there are things
link |
that are going to really derail that, things that actually
link |
could prevent us from having this long, wonderful civilization
link |
and instead could cause our own extinction
link |
or otherwise lock ourselves into a very bad
link |
state. And what ways could that happen?
link |
Well, causing our own extinction: the development of nuclear
link |
weapons in the 20th century at least put on the table
link |
that we now had weapons that were powerful enough that
link |
you could very significantly destroy society. Perhaps
link |
an all-out nuclear war would cause a nuclear winter.
link |
Perhaps that would be enough for the human race to go
link |
extinct.
link |
Why do you think we haven't done it? Sorry to interrupt.
link |
Why do you think we haven't done it yet?
link |
Is it surprising to you that, having had, you know,
link |
for the past few decades, several thousand active, ready
link |
to launch nuclear warheads, we have not
link |
launched them ever since the initial bombing of Hiroshima?
link |
I think it's a mix of luck.
link |
So I think it's definitely not inevitable that we haven't used them.
link |
John F. Kennedy, during the Cuban Missile Crisis, put the
link |
odds of a nuclear exchange between the US and USSR
link |
at somewhere between one in three and even. So, you know,
link |
we really did come close.
link |
At the same time, I do think mutually assured destruction
link |
is a reason why people don't go to war,
link |
you know, why nuclear powers don't go to war.
link |
Do you think that holds? If you can linger on that for a
link |
second: my dad is a physicist, amongst other things,
link |
and he believes that nuclear weapons are actually just
link |
really hard to build, which is one of the really big benefits
link |
of them currently, so that it's very hard,
link |
if you're crazy, to build or acquire a nuclear weapon.
link |
So mutually assured destruction
link |
seems to work better when it's nation states, when
link |
it's serious people, even if they're a little bit, you
link |
know, dictatorial and so on.
link |
How far do you think this mutually assured destruction idea will
link |
carry us in terms of different kinds of threats?
link |
Oh yeah, I think your point, that nuclear weapons
link |
are very hard to build and relatively easy to control
link |
because you can control fissile material, is a really
link |
important one, and future technology that's equally destructive
link |
might not have those properties.
link |
So for example, in the future people might be able to design
link |
viruses, perhaps using a DNA printing kit that,
link |
you know, one can just buy.
link |
In fact, there are companies in the process of creating
link |
home DNA printing kits. Well, then perhaps that's just
link |
totally democratized.
link |
Perhaps the power to wreak huge destruction is
link |
in the hands of most people in the world, or certainly
link |
most people with effort, and then, yeah, I no longer trust
link |
mutually assured destruction, because for some people
link |
the idea that they would die is just not a disincentive.
link |
There was a Japanese cult, for example,
link |
Aum Shinrikyo, in the 90s. What they believed
link |
was that Armageddon was coming; if you died before Armageddon,
link |
you would get good karma
link |
and you wouldn't go to hell, but if you died during Armageddon,
link |
maybe you would go to hell. They had a biological weapons
link |
program and a chemical weapons program, and when they were finally
link |
raided, they had stocks of sarin gas that were sufficient to
link |
kill four million people, and they engaged in multiple terrorist acts.
link |
If they had had the ability to print a virus at home,
link |
that would have been very scary.
link |
So it's not impossible to imagine groups of people that
link |
hold that kind of belief, of death, of suicide, as a good
link |
thing, a passage into the next world, and so on. And then
link |
connect them with some weapons, and ideology and weaponry
link |
may create serious problems for us.
link |
Let me ask you a quick question: what do you think is
link |
the line between killing most humans and killing all humans?
link |
How hard is it to kill everybody?
link |
Yeah, have you thought about this?
link |
I've thought about it a bit.
link |
I think it is very hard to kill everybody.
link |
So in the case of, let's say, an all out nuclear exchange,
link |
and let's say that leads to nuclear winter,
link |
we don't really know, but it might well happen,
link |
that would, I think, result in billions of deaths. But would
link |
it kill everybody?
link |
It's quite hard to see how it would
link |
kill everybody, for a few reasons.
link |
One is just that there are so many people.
link |
Yes, you know, seven and a half billion people.
link |
So this bad event has to kill almost all of them.
link |
Secondly, we live in such a diversity of locations.
link |
So a nuclear exchange, or the virus, has to kill people
link |
who live on the coast of New Zealand, which is going to
link |
be climatically much more stable than other areas of the
link |
world, or people who are on submarines, or who have access...
link |
So there's just, like, I'm sure there are, like,
link |
two badass guys in Siberia.
link |
Human nature somehow just perseveres.
link |
Yeah, and then the second thing is just if there's some
link |
catastrophic event people really don't want to die.
link |
So there's going to be like, you know, huge amounts of
link |
effort to ensure that it doesn't affect everyone.
link |
Have you thought about what it takes to rebuild a society
link |
with smaller numbers, like how big of a setback
link |
these kinds of things are?
link |
Yeah, so that's something where there's real uncertainty.
link |
I think at some point you just lose sufficient
link |
genetic diversity such that you can't come back.
link |
It's unclear how small that population is,
link |
but if you've only got, say, a thousand people or fewer,
link |
then maybe that's small enough.
link |
What about human knowledge?
link |
And then there's human knowledge.
link |
I mean, it's striking how quick, on geological timescales
link |
or evolutionary timescales, the progress in human knowledge
link |
has been. Agriculture we only invented around 10,000 BC;
link |
cities only around, you know, 3,000 BC;
link |
whereas a typical mammal species lasts half a million
link |
years to a million years.
link |
Do you think it's inevitable in some sense? Agriculture,
link |
everything that came after, the Industrial Revolution, cars, planes,
link |
the internet: that level of innovation, you think, is inevitable?
link |
I think so, given how quickly it arose.
link |
So in the case of agriculture, I think that was dependent on the climate.
link |
The glacial period was over, the Earth
link |
warmed up a bit, and that made it much more likely that humans
link |
would develop agriculture. When it comes to the Industrial
link |
Revolution, again, it only took a few thousand
link |
years from cities to the Industrial Revolution. So if we think, okay,
link |
we've gone back to, let's say, an agricultural era,
link |
there's no reason why we would go extinct in the
link |
coming tens of thousands of years or hundreds of thousands
link |
of years.
link |
It would be very surprising if we didn't rebound unless
link |
there's some special reason that makes things different.
link |
So perhaps we just have a much greater disease burden
link |
now. So HIV exists;
link |
it didn't exist before, and perhaps that's kind of latent,
link |
you know, being suppressed by modern medicine
link |
and sanitation and so on, but it would be a much bigger problem
link |
for some, you know, utterly destroyed society that
link |
was trying to rebound. Or maybe there's something
link |
we don't know about.
link |
So another existential risk comes from the mysterious,
link |
the beautiful artificial intelligence.
link |
What's the shape of your concerns about AI?
link |
I think there are quite a lot of concerns about AI and
link |
sometimes the different risks don't get distinguished enough.
link |
So the kind of classic worry, most closely associated
link |
with Nick Bostrom and Eliezer Yudkowsky, is that we at some
link |
point move from having narrow AI systems to artificial
link |
general intelligence.
link |
You get this very fast feedback effect where AGI is able
link |
to build greater AI, you know, where artificial intelligence helps you
link |
to build greater artificial intelligence.
link |
We'd have this one system that's suddenly very powerful, far
link |
more powerful than others, perhaps far more powerful
link |
than, you know, the rest of the world combined, and then,
link |
secondly, it has goals that are misaligned with human goals.
link |
And so it pursues its own goals.
link |
It realizes, hey, there's this competition, namely from humans.
link |
It would be better if we eliminated them, in just the same
link |
way as Homo sapiens eradicated the Neanderthals,
link |
and in fact killed off most large animals that walked the
link |
planet. So that's kind of one set of
link |
worries. I think these shouldn't
link |
be dismissed as science fiction.
link |
I think it's something we should be taking very seriously,
link |
but it's not the thing you visualize when you're concerned
link |
about the biggest near-term risks.
link |
Yeah, I think it's, like, one possible scenario
link |
that would be astronomically bad.
link |
I think there are other scenarios that would also be extremely
link |
bad, comparably bad, that are more likely to occur.
link |
So one is just we are able to control AI.
link |
So we're able to get it to do what we want it to do.
link |
And perhaps there's not like this fast takeoff of AI capabilities
link |
within a single system.
link |
It's distributed across many systems that do somewhat different
link |
things, but you do get very rapid economic and technological
link |
progress as a result that concentrates power into the hands
link |
of a very small number of individuals, perhaps a single
link |
dictator. And secondly, that single individual, or small
link |
group of individuals, or single country, is then able to
link |
lock in their values indefinitely via transmitting those
link |
values to artificial systems that have no reason to die;
link |
like, you know, their code is copyable.
link |
Perhaps, you know, Donald Trump or Xi Jinping creates their
link |
kind of AI progeny in their own image. And once you have
link |
a society that's controlled
link |
by AI, you no longer have one of the main drivers of change
link |
historically, which is the fact that human lifespans are,
link |
you know, only a hundred years, give or take.
link |
So that's really interesting.
link |
So as opposed to sort of killing off all humans, it's locking
link |
in, creating a hell on Earth, basically: a set of principles
link |
under which the society operates that's extremely undesirable,
link |
so everybody is suffering indefinitely.
link |
Or, I mean, it also doesn't need to be hell on
link |
earth. It could just be the wrong values.
link |
So we talked at the very beginning about how I want to
link |
see this kind of diversity of different values and exploration,
link |
so that we can just work out what is morally,
link |
like, what is good, what is bad, and then pursue the thing that's
link |
good. So actually, with the idea of wrong values,
link |
probably the beautiful thing is there's no such thing as
link |
right and wrong values, because we don't know the right
link |
answer. We just kind of have a sense of which values are more
link |
right, which are more wrong.
link |
So any kind of lock in makes a value wrong because it
link |
prevents exploration of this kind.
link |
Yeah, and just, you know, imagine if fascist values had won.
link |
Imagine if there was Hitler's utopia or Stalin's utopia,
link |
or Donald Trump's or Xi Jinping's, forever.
link |
Yeah, you know, how good or bad would that be compared
link |
to the best possible future we could create? And my suggestion
link |
is it would really suck compared to the best possible
link |
future we could create.
link |
And you're just one individual.
link |
There's some individuals for whom Donald Trump is perhaps
link |
the best possible future.
link |
And so that's the whole point of us individuals exploring
link |
the space together.
link |
Yeah, and we're trying to figure out which is the path
link |
that will make America great again.
link |
So how can effective altruism help?
link |
I mean, this is a really interesting notion you're actually
link |
describing, of artificial intelligence being used as an extremely
link |
powerful technology in the hands of very few, potentially
link |
one person, to create some very undesirable effect.
link |
So as opposed to the AI itself, again, the source of the undesirableness
link |
there is the human.
link |
Yeah, AI is just a really powerful tool.
link |
So whether it's that, or whether AGI just runs away
link |
from us completely.
link |
How as individuals, as people in the effective altruism
link |
movement, how can we think about something like this?
link |
I understand poverty and animal welfare, but this is a far
link |
out incredibly mysterious and difficult problem.
link |
Well, I think there are three paths as an individual,
link |
if you're thinking about, you know, career paths you could take.
link |
So one is going down the line of technical AI safety.
link |
So this is most relevant to the kind of AI-taking-over
link |
scenarios. This is just technical work on current
link |
machine learning systems, sometimes going more theoretical,
link |
too, on how we can ensure that an AI is able to learn human
link |
values and able to act in the way that you want it to act.
link |
And that's a pretty mainstream issue and approach in machine
link |
learning now. So, you know, we definitely need more people doing that.
link |
Second is on the policy side of things, which I think is
link |
even more important at the moment, which is how should developments
link |
in AI be managed on a political level?
link |
How can you ensure that the benefits of AI are widely distributed,
link |
that power isn't being concentrated in the hands
link |
of a small set of individuals?
link |
How do you ensure that there aren't arms races between different
link |
AI companies that might result in them, you know, cutting corners
link |
with respect to safety?
link |
And so there the input that we as individuals can have is,
link |
we're not talking about money,
link |
we're talking about effort,
link |
we're talking about career choices.
link |
We're talking about career choice, yeah.
link |
Yeah, but then it is the case that supposing, you know, you're
link |
like, I've already decided my career.
link |
I'm doing something quite different.
link |
You can contribute with money too, where at the Center for Effective
link |
Altruism, we set up the Long Term Future Fund.
link |
So if you go on to effectivealtruism.org, you can donate where
link |
a group of individuals will then work out what's the highest value
link |
place they can donate to work on existential risk issues with
link |
a particular focus on AI.
link |
What's path number three?
link |
This was path number three.
link |
Donations were the third option I was thinking of.
link |
And then, yeah, there are, you can also donate directly to organizations
link |
working on this, like Center for Human Compatible AI at Berkeley,
link |
Future of Humanity Institute at Oxford, or other organizations too.
link |
Does AI keep you up at night?
link |
This kind of concern?
link |
Yeah, it's kind of a mix, where I think it's very likely things are
link |
going to go well. I think we're going to be able to solve these
link |
problems. I think that's by far the most likely outcome, at least...
link |
By far the most likely?
link |
So if you look at all the trajectories running away from our
link |
current moment in the next hundred years, you see AI creating
link |
destructive consequences as a small subset of those possible
link |
trajectories?
link |
Or at least, yeah, kind of eternal, destructive consequences;
link |
I think that's a small subset.
link |
At the same time, it still freaks me out.
link |
I mean, when we're talking about the entire future of civilization,
link |
then small probabilities, you know, 1% probability, that's terrifying.
link |
What do you think about Elon Musk's strong worry that we should
link |
be really concerned about existential risks of AI?
link |
Yeah, I mean, I think, you know, broadly speaking, I think he's right.
link |
I think if we talked, we would probably have very different
link |
probabilities on how likely it is that we're doomed.
link |
But again, when it comes to talking about the entire future of
link |
civilization, it doesn't really matter if it's 1% or if it's
link |
50%, we ought to be taking every possible safeguard we can to
link |
ensure that things go well rather than poorly.
link |
Last question, if you yourself could eradicate one problem from
link |
the world, what would that problem be?
link |
That's a great question.
link |
I don't know if I'm cheating in saying this, but I think the
link |
thing I would most want to change is just the fact that people
link |
don't actually care about ensuring the long run future goes well.
link |
People don't really care about future generations.
link |
They don't think about it.
link |
It's not part of their aims.
link |
In some sense, you're not cheating at all because in speaking
link |
the way you do, in writing the things you're writing, you're
link |
doing, you're addressing exactly this aspect.
link |
That is your input into the effective altruism movement.
link |
So for that, Will, thank you so much.
link |
It's an honor to talk to you.
link |
I really enjoyed it.
link |
Thanks so much for having me on.
link |
If that were the case, we'd probably be pretty generous.
link |
Next round's on me. But that's effectively the situation we're in.
link |
It's like a 99% off sale, or buy one, get 99 free.
link |
Might be the most amazing deal you'll see in your life.
link |
Thank you for listening and hope to see you next time.