
William MacAskill: Effective Altruism | Lex Fridman Podcast #84



link |
00:00:00.000
The following is a conversation with William MacAskill.
link |
00:00:03.600
He's a philosopher, ethicist, and one of the originators of the effective altruism movement.
link |
00:00:09.300
His research focuses on the fundamentals of effective altruism,
link |
00:00:13.000
or the use of evidence and reason to help others as much as possible with our time and money,
link |
00:00:19.500
with a particular concentration on how to act given moral uncertainty.
link |
00:00:24.400
He's the author of Doing Good Better: Effective Altruism
link |
00:00:28.600
and a Radical New Way to Make a Difference.
link |
00:00:31.200
He is a cofounder and the president of the Centre for Effective Altruism, CEA,
link |
00:00:37.100
that encourages people to commit to donate at least 10% of their income to the most effective charities.
link |
00:00:43.900
He cofounded 80,000 Hours, which is a nonprofit that provides research and advice
link |
00:00:49.200
on how you can best make a difference through your career.
link |
00:00:52.600
This conversation was recorded before the outbreak of the coronavirus pandemic.
link |
00:00:57.800
For everyone feeling the medical, psychological, and financial burden of this crisis,
link |
00:01:02.300
I'm sending love your way.
link |
00:01:04.200
Stay strong. We're in this together. We'll beat this thing.
link |
00:01:09.100
This is the Artificial Intelligence Podcast.
link |
00:01:11.900
If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast,
link |
00:01:16.200
support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N.
link |
00:01:23.100
As usual, I'll do one or two minutes of ads now,
link |
00:01:25.800
and never any ads in the middle that can break the flow of the conversation.
link |
00:01:29.700
I hope that works for you and doesn't hurt the listening experience.
link |
00:01:34.700
This show is presented by Cash App, the number one finance app in the App Store.
link |
00:01:39.000
When you get it, use code LEXPODCAST.
link |
00:01:42.100
Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1.
link |
00:01:48.900
Since Cash App allows you to send and receive money digitally, peer to peer,
link |
00:01:52.800
and security in all digital transactions is very important,
link |
00:01:56.100
let me mention the PCI data security standard that Cash App is compliant with.
link |
00:02:01.300
I'm a big fan of standards for safety and security.
link |
00:02:04.300
PCI DSS is a good example of that,
link |
00:02:07.100
where a bunch of competitors got together and agreed
link |
00:02:10.000
that there needs to be a global standard around the security of transactions.
link |
00:02:14.400
Now, we just need to do the same for autonomous vehicles and AI systems in general.
link |
00:02:19.300
So again, if you get Cash App from the App Store or Google Play,
link |
00:02:22.600
and use the code LEXPODCAST, you get $10, and Cash App will also donate $10 to FIRST,
link |
00:02:28.800
an organization that is helping to advance robotics and STEM education for young people around the world.
link |
00:02:34.500
And now, here's my conversation with William MacAskill.
link |
00:02:39.100
What does utopia for humans and all life on Earth look like for you?
link |
00:02:43.500
That's a great question.
link |
00:02:45.400
What I want to say is that we don't know,
link |
00:02:49.200
and the utopia we want to get to is an indirect one that I call the long reflection.
link |
00:02:55.500
So, a period of post scarcity, where we no longer have the kind of urgent problems we have today,
link |
00:03:01.200
but instead can spend, perhaps it's tens of thousands of years debating,
link |
00:03:06.200
engaging in ethical reflection, before we take any kind of drastic lock-in
link |
00:03:12.100
actions like spreading to the stars,
link |
00:03:14.500
and then we can figure out what is of kind of moral value.
link |
00:03:20.500
The long reflection, that's a really beautiful term.
link |
00:03:25.100
So, if we look at Twitter for just a second,
link |
00:03:29.600
do you think human beings are able to reflect in a productive way?
link |
00:03:37.300
I don't mean to make it sound bad,
link |
00:03:39.500
because there is a lot of fights and politics and division in our discourse.
link |
00:03:45.000
Maybe if you zoom out, it actually is civilized discourse.
link |
00:03:48.900
It might not feel like it, but when you zoom out.
link |
00:03:51.000
So, I don't want to say that Twitter is not civilized discourse.
link |
00:03:55.100
I actually believe it.
link |
00:03:56.100
It's more civilized than people give it credit for.
link |
00:03:58.400
But do you think the long reflection can actually be stable,
link |
00:04:03.600
where we as human beings with our descended-from-ape brains
link |
00:04:08.400
would be able to sort of rationally discuss things together and arrive at ideas?
link |
00:04:13.100
I think, overall, we're pretty good at discussing things rationally,
link |
00:04:19.800
and at least in the earlier stages of our lives being open to many different ideas,
link |
00:04:28.500
and being able to be convinced and change our views.
link |
00:04:33.300
I think that Twitter is designed almost to bring out all the worst tendencies.
link |
00:04:38.800
So, if the long reflection were conducted on Twitter,
link |
00:04:43.200
maybe it would be better just not even to bother.
link |
00:04:46.200
But I think the challenge really is getting to a stage
link |
00:04:50.300
where we have a society that is as conducive as possible
link |
00:04:55.700
to rational reflection, to deliberation.
link |
00:04:59.000
I think we're actually very lucky to be in a liberal society
link |
00:05:04.000
where people are able to discuss a lot of ideas and so on.
link |
00:05:06.900
I think when we look to the future,
link |
00:05:08.100
that's not at all guaranteed that society would be like that,
link |
00:05:12.400
rather than a society where there's a fixed canon of values
link |
00:05:16.900
that are being imposed on all of society,
link |
00:05:20.600
and where you aren't able to question that.
link |
00:05:22.300
That would be very bad from my perspective,
link |
00:05:23.900
because it means we wouldn't be able to figure out what the truth is.
link |
00:05:27.900
I can already sense we're going to go down a million tangents,
link |
00:05:31.300
but what do you think is the...
link |
00:05:36.800
If Twitter is not optimal,
link |
00:05:38.700
what kind of mechanism in this modern age of technology
link |
00:05:43.300
can we design where the exchange of ideas could be both civilized and productive,
link |
00:05:49.300
and yet not be too constrained
link |
00:05:52.600
where there's rules of what you can say and can't say,
link |
00:05:55.300
which is, as you say, is not desirable,
link |
00:05:57.900
but yet not have some limits as to what can be said or not and so on?
link |
00:06:02.800
Do you have any ideas, thoughts on the possible future?
link |
00:06:05.700
Of course, nobody knows how to do it,
link |
00:06:07.200
but do you have thoughts of what a better Twitter might look like?
link |
00:06:10.900
I think that text based media are intrinsically going to be very hard
link |
00:06:16.200
to be conducive to rational discussion,
link |
00:06:20.000
because if you think about it from an informational perspective,
link |
00:06:24.100
if I just send you a text of less than,
link |
00:06:27.200
what is it now, 240 characters, 280 characters, I think,
link |
00:06:31.700
that's a tiny amount of information compared to, say, you and I talking now,
link |
00:06:36.100
where you have access to the words I say, which is the same as in text,
link |
00:06:40.100
but also my tone, also my body language,
link |
00:06:43.800
and we're very poorly designed to be able to assess...
link |
00:06:47.800
I have to read all of this context into anything you say,
link |
00:06:50.300
so maybe your partner sends you a text and has a full stop at the end.
link |
00:06:56.500
Are they mad at you?
link |
00:06:58.000
You don't know.
link |
00:06:58.600
You have to infer everything about this person's mental state
link |
00:07:02.400
from whether they put a full stop at the end of a text or not.
link |
00:07:04.700
Well, the flip side of that is, is it truly text that's the problem here,
link |
00:07:08.800
because there's a viral aspect to the text,
link |
00:07:14.700
where you could just post text nonstop.
link |
00:07:17.200
It's very immediate.
link |
00:07:19.800
The times before Twitter, before the internet,
link |
00:07:23.200
the way you would exchange texts is you would write books.
link |
00:07:28.500
And that, while it doesn't get body language, it doesn't get tone, it doesn't...
link |
00:07:33.200
so on, but it does actually boil down after some time of thinking,
link |
00:07:36.700
some editing, and so on, boil down ideas.
link |
00:07:40.000
So is the immediacy and the viral nature,
link |
00:07:45.600
which produces the outrage mobs and so on, the potential problem?
link |
00:07:49.400
I think that is a big issue.
link |
00:07:51.100
I think there's going to be this strong selection effect where
link |
00:07:56.200
something that provokes outrage, well, that's high arousal,
link |
00:07:59.000
you're more likely to retweet that,
link |
00:08:04.400
whereas kind of sober analysis is not as sexy, not as viral.
link |
00:08:08.800
I do agree that long form content is much better for productive discussion.
link |
00:08:16.400
In terms of the media that are very popular at the moment,
link |
00:08:19.400
I think that podcasting is great where your podcasts are two hours long,
link |
00:08:25.400
so they're much more in depth than Twitter is,
link |
00:08:28.900
and you are able to convey so much more nuance,
link |
00:08:33.500
so much more caveat, because it's an actual conversation.
link |
00:08:36.800
It's more like the sort of communication that we've evolved to do,
link |
00:08:40.200
rather than these very small little snippets of ideas that,
link |
00:08:44.900
when also combined with bad incentives,
link |
00:08:46.900
just clearly aren't designed for helping us get to the truth.
link |
00:08:49.800
It's kind of interesting that it's not just the length of the podcast medium,
link |
00:08:53.700
but it's the fact that it was started by people that don't give a damn about
link |
00:08:59.300
quote unquote demand, that there's a relaxed,
link |
00:09:05.100
sort of the style that Joe Rogan does,
link |
00:09:08.100
there's a freedom to express ideas
link |
00:09:12.800
in an unconstrained way that's very real.
link |
00:09:15.300
It's kind of funny that it feels so refreshingly real to us today,
link |
00:09:22.100
and I wonder what the future looks like.
link |
00:09:24.900
It's a little bit sad now that quite a lot of sort of more popular people
link |
00:09:29.700
are getting into podcasting,
link |
00:09:31.600
and they try to sort of create, they try to control it,
link |
00:09:37.300
they try to constrain it in different kinds of ways.
link |
00:09:40.200
People I love, like Conan O'Brien and so on, different comedians,
link |
00:09:43.400
and I'd love to see where the real aspects of this podcasting medium persist,
link |
00:09:50.600
maybe in TV, maybe in YouTube,
link |
00:09:52.500
maybe Netflix is pushing those kind of ideas,
link |
00:09:55.600
and it's kind of, it's a really exciting world,
link |
00:09:58.400
that kind of sharing of knowledge.
link |
00:10:00.200
Yeah, I mean, I think it's a double edged sword
link |
00:10:02.100
as it becomes more popular and more profitable,
link |
00:10:04.300
where on the one hand you'll get a lot more creativity,
link |
00:10:08.400
people doing more interesting things with the medium,
link |
00:10:10.700
but also perhaps you get this race to the bottom
link |
00:10:12.700
where suddenly maybe it'll be hard to find good content on podcasts
link |
00:10:18.100
because it'll be so overwhelmed by the latest bit of viral outrage.
link |
00:10:24.300
So speaking of that, jumping on Effective Altruism for a second,
link |
00:10:31.100
so much of that internet content is funded by advertisements.
link |
00:10:36.200
Just in the context of Effective Altruism,
link |
00:10:39.800
we're talking about the richest companies in the world,
link |
00:10:44.100
they're funded by advertisements essentially,
link |
00:10:45.800
Google, that's their primary source of income.
link |
00:10:48.800
Do you see that as,
link |
00:10:51.000
do you have any criticism of that source of income?
link |
00:10:55.200
Do you see that source of money
link |
00:10:57.500
as a potentially powerful source of money that could be used,
link |
00:11:01.000
well, certainly could be used for good,
link |
00:11:03.200
but is there something bad about that source of money?
link |
00:11:05.900
I think there's significant worries with it,
link |
00:11:08.100
where it means that the incentives of the company
link |
00:11:13.200
might be quite misaligned with making people's lives better,
link |
00:11:20.600
where again, perhaps the incentives are towards increasing drama
link |
00:11:28.400
and debate on your social media feed
link |
00:11:32.300
in order that more people are going to be engaged,
link |
00:11:36.300
perhaps compulsively involved with the platform.
link |
00:11:42.200
Whereas there are other business models
link |
00:11:45.600
like having an opt in subscription service
link |
00:11:49.100
where perhaps they have other issues,
link |
00:11:51.500
but there's much more of an incentive to provide a product
link |
00:11:57.600
that its users are just really wanting,
link |
00:12:00.500
because now I'm paying for this product.
link |
00:12:02.900
I'm paying for this thing that I want to buy
link |
00:12:05.400
rather than I'm trying to use this thing
link |
00:12:09.200
and it's going to get a profit mechanism
link |
00:12:11.600
that is somewhat orthogonal to me
link |
00:12:13.600
actually just wanting to use the product.
link |
00:12:19.000
And so, I mean, in some cases it'll work better than others.
link |
00:12:23.000
I can imagine, I can in theory imagine Facebook
link |
00:12:27.100
having a subscription service,
link |
00:12:28.800
but I think it's unlikely to happen anytime soon.
link |
00:12:32.200
Well, it's interesting and it's weird
link |
00:12:34.200
now that you bring it up that it's unlikely.
link |
00:12:36.200
For example, I pay I think 10 bucks a month for YouTube Red
link |
00:12:41.000
and I don't think I get much for that
link |
00:12:45.300
except just for no ads,
link |
00:12:50.200
but in general it's just a slightly better experience.
link |
00:12:52.900
And I would gladly, now I'm not wealthy,
link |
00:12:56.100
in fact I'm operating very close to zero dollars,
link |
00:12:59.200
but I would pay 10 bucks a month to Facebook
link |
00:13:01.800
and 10 bucks a month to Twitter
link |
00:13:04.000
for some kind of more control
link |
00:13:07.500
in terms of advertisements and so on.
link |
00:13:09.100
But the other aspect of that is data, personal data.
link |
00:13:13.700
People are really sensitive about this
link |
00:13:16.200
and I as one who hopes to one day
link |
00:13:20.700
create a company that may use people's data
link |
00:13:25.600
to do good for the world,
link |
00:13:27.500
wonder about this.
link |
00:13:28.900
One, the psychology of why people are so paranoid.
link |
00:13:32.300
Well, I understand why,
link |
00:13:33.300
but they seem to be more paranoid
link |
00:13:35.200
than is justified at times.
link |
00:13:37.700
And the other is how do you do it right?
link |
00:13:39.400
So it seems that Facebook is,
link |
00:13:43.500
it seems that Facebook is doing it wrong.
link |
00:13:47.300
That's certainly the popular narrative.
link |
00:13:49.500
It's unclear to me actually how wrong.
link |
00:13:53.000
Like I tend to give them more benefit of the doubt
link |
00:13:55.400
because it's a really hard thing to do right
link |
00:13:59.900
and people don't necessarily realize it,
link |
00:14:01.300
but how do we respect in your view people's privacy?
link |
00:14:05.900
Yeah, I mean in the case of how worried are people
link |
00:14:10.700
about using their data,
link |
00:14:12.300
I mean there's a lot of public debate
link |
00:14:15.200
and criticism about it.
link |
00:14:18.600
When we look at people's revealed preferences,
link |
00:14:22.100
people's continuing massive use
link |
00:14:24.200
of these sorts of services.
link |
00:14:27.600
It's not clear to me how much people really do care.
link |
00:14:30.500
Perhaps they care a bit,
link |
00:14:31.500
but they're happy to in effect kind of sell their data
link |
00:14:35.500
in order to be able to kind of use a certain service.
link |
00:14:37.500
That's a great term, revealed preferences.
link |
00:14:39.300
So these aren't preferences you self report in the survey.
link |
00:14:42.500
This is like your actions speak.
link |
00:14:44.500
Yeah, exactly.
link |
00:14:45.340
So you might say,
link |
00:14:46.500
oh yeah, I hate the idea of Facebook having my data.
link |
00:14:51.000
But then when it comes to it,
link |
00:14:52.700
you actually are willing to give that data in exchange
link |
00:14:55.600
for being able to use the service.
link |
00:15:00.400
And if that's the case,
link |
00:15:01.600
then I think unless we have some explanation
link |
00:15:05.300
about why there's some negative externality from that
link |
00:15:11.000
or why there's some coordination failure,
link |
00:15:15.800
or if there's something that consumers
link |
00:15:18.000
are just really misled about
link |
00:15:19.700
where they don't realize why giving away data like this
link |
00:15:23.100
is a really bad thing to do,
link |
00:15:27.400
then ultimately I kind of want to,
link |
00:15:30.800
you know, respect people's preferences.
link |
00:15:32.300
They can give away their data if they want.
link |
00:15:35.500
I think there's a big difference
link |
00:15:36.500
between companies use of data
link |
00:15:39.700
and governments having data where,
link |
00:15:43.600
you know, looking at the track record of history,
link |
00:15:45.800
governments knowing a lot about their people can be very bad
link |
00:15:51.600
if the government chooses to do bad things with it.
link |
00:15:55.000
And that's more worrying, I think.
link |
00:15:57.100
So let's jump into it a little bit.
link |
00:15:59.700
Most people know, but actually I, two years ago,
link |
00:16:03.900
had no idea what effective altruism was
link |
00:16:07.000
until I saw there was a cool looking event
link |
00:16:09.100
in an MIT group here.
link |
00:16:10.800
I think it's called the Effective Altruism Club or a group.
link |
00:16:17.900
I was like, what the heck is that?
link |
00:16:19.800
And one of my friends said,
link |
00:16:23.200
I mean, he said that they're just
link |
00:16:27.200
a bunch of eccentric characters.
link |
00:16:30.000
So I was like, hell yes, I'm in.
link |
00:16:31.600
So I went to one of their events
link |
00:16:32.800
and looked up what's it about.
link |
00:16:34.400
It's quite a fascinating philosophical
link |
00:16:37.000
and just a movement of ideas.
link |
00:16:38.900
So can you tell me what is effective altruism?
link |
00:16:42.600
Great, so the core of effective altruism
link |
00:16:44.800
is about trying to answer this question,
link |
00:16:46.500
which is how can I do as much good as possible
link |
00:16:49.400
with my scarce resources, my time and with my money?
link |
00:16:53.200
And then once we have our best guess answers to that,
link |
00:16:57.200
trying to take those ideas and put that into practice,
link |
00:17:00.200
and do those things that we believe will do the most good.
link |
00:17:03.000
And we're now a community of people,
link |
00:17:06.100
many thousands of us around the world,
link |
00:17:08.100
who really are trying to answer that question
link |
00:17:10.800
as best we can and then use our time and money
link |
00:17:13.100
to make the world better.
link |
00:17:15.200
So what's the difference between sort of
link |
00:17:18.600
classical general idea of altruism
link |
00:17:22.300
and effective altruism?
link |
00:17:24.700
So normally when people try to do good,
link |
00:17:28.300
they often just aren't so reflective about those attempts.
link |
00:17:34.100
So someone might approach you on the street
link |
00:17:36.300
asking you to give to charity.
link |
00:17:38.600
And if you're feeling altruistic,
link |
00:17:42.200
you'll give to the person on the street.
link |
00:17:44.400
Or if you think, oh, I wanna do some good in my life,
link |
00:17:48.100
you might volunteer at a local place.
link |
00:17:50.000
Or perhaps you'll decide, pursue a career
link |
00:17:52.900
where you're working in a field
link |
00:17:56.500
that's kind of more obviously beneficial
link |
00:17:58.200
like being a doctor or a nurse or a healthcare professional.
link |
00:18:02.300
But it's very rare that people apply the same level
link |
00:18:07.900
of rigor and analytical thinking
link |
00:18:11.800
to lots of other areas we think about.
link |
00:18:14.400
So take the case of someone approaching you on the street.
link |
00:18:16.400
Imagine if that person instead was saying,
link |
00:18:18.700
hey, I've got this amazing company.
link |
00:18:20.200
Do you want to invest in it?
link |
00:18:22.400
It would be insane.
link |
00:18:23.800
No one would ever think, oh, of course,
link |
00:18:25.500
I'll just invest in this company. Like, you'd think it was a scam.
link |
00:18:29.200
But somehow we don't have that same level of rigor
link |
00:18:31.300
when it comes to doing good,
link |
00:18:32.400
even though the stakes are more important
link |
00:18:34.600
when it comes to trying to help others
link |
00:18:36.100
than trying to make money for ourselves.
link |
00:18:38.500
Well, first of all, so there is a psychology
link |
00:18:40.700
at the individual level of doing good just feels good.
link |
00:18:46.200
And so in some sense, on that pure psychological part,
link |
00:18:51.700
it doesn't matter.
link |
00:18:52.900
In fact, you don't wanna know if it does good or not
link |
00:18:56.400
because most of the time it won't.
link |
00:19:01.500
So like in a certain sense,
link |
00:19:04.800
it's understandable why altruism
link |
00:19:06.900
without the effective part is so appealing
link |
00:19:09.800
to a certain population.
link |
00:19:11.300
By the way, let's zoom out for a second.
link |
00:19:15.300
Do you think most people, two questions.
link |
00:19:18.700
Do you think most people are good?
link |
00:19:20.900
And question number two is,
link |
00:19:22.200
do you think most people wanna do good?
link |
00:19:24.900
So are most people good?
link |
00:19:26.600
I think it's just super dependent
link |
00:19:28.000
on the circumstances that someone is in.
link |
00:19:31.700
I think that the actions people take
link |
00:19:34.800
and their moral worth is just much more dependent
link |
00:19:37.700
on circumstance than it is on someone's intrinsic character.
link |
00:19:41.900
So is there evil within all of us?
link |
00:19:43.800
It seems like with the better angels of our nature,
link |
00:19:47.900
there's a tendency of us as a society
link |
00:19:50.400
to tend towards good, less war.
link |
00:19:53.300
I mean, with all these metrics.
link |
00:19:56.200
Is that us becoming who we want to be
link |
00:20:00.100
or is that some kind of societal force?
link |
00:20:03.300
What's the nature versus nurture thing here?
link |
00:20:05.300
Yeah, so in that case, I just think,
link |
00:20:07.100
yeah, so violence has massively declined over time.
link |
00:20:10.600
I think that's a slow process of cultural evolution,
link |
00:20:14.200
institutional evolution such that now the incentives
link |
00:20:17.600
for you and I to be violent are very, very small indeed.
link |
00:20:21.700
In contrast, when we were hunter gatherers,
link |
00:20:23.700
the incentives were quite large.
link |
00:20:25.800
If there was someone who was potentially disturbing
link |
00:20:31.900
the social order in a hunter gatherer setting,
link |
00:20:35.300
there was a very strong incentive to kill that person
link |
00:20:37.800
and people did, and it was just that around 10% of deaths
link |
00:20:41.400
among hunter gatherers were murders.
link |
00:20:44.800
After hunter gatherers, when you have actual societies
link |
00:20:48.700
is when violence can probably go up
link |
00:20:51.300
because there's more incentive to do mass violence, right?
link |
00:20:54.300
To take over, conquer other people's lands
link |
00:20:58.800
and murder everybody in place and so on.
link |
00:21:01.200
Yeah, I mean, I think total death rate
link |
00:21:03.800
from human causes does go down,
link |
00:21:06.900
but you're right that if you're in a hunter gatherer situation
link |
00:21:10.400
the kind of group that you're part of is very small,
link |
00:21:15.000
then you can't have massive wars
link |
00:21:17.300
because massive communities just don't exist.
link |
00:21:19.600
But anyway, the second question,
link |
00:21:21.300
do you think most people want to do good?
link |
00:21:23.400
Yeah, and then I think that is true for most people.
link |
00:21:26.100
I think you see that with the fact that most people donate,
link |
00:21:31.800
a large proportion of people volunteer.
link |
00:21:33.800
If you give people opportunities
link |
00:21:35.500
to easily help other people, they will take it.
link |
00:21:38.700
But at the same time,
link |
00:21:39.700
we're a product of our circumstances
link |
00:21:43.700
and if it were more socially rewarded to be doing more good,
link |
00:21:47.400
if it were more socially rewarded to do good effectively
link |
00:21:49.600
rather than not effectively,
link |
00:21:51.300
then we would see that behavior a lot more.
link |
00:21:55.100
So why should we do good?
link |
00:21:58.700
Yeah, my answer to this is
link |
00:22:01.400
there's no kind of deeper level of explanation.
link |
00:22:04.100
So my answer to kind of why should you do good is
link |
00:22:08.500
well, there is someone whose life is on the line,
link |
00:22:11.300
for example, whose life you can save
link |
00:22:13.700
via donating just actually a few thousand dollars
link |
00:22:17.800
to an effective nonprofit
link |
00:22:20.000
like the Against Malaria Foundation.
link |
00:22:21.800
That is a sufficient reason to do good.
link |
00:22:23.900
And then if you ask, well, why ought I to do that?
link |
00:22:27.000
I'm like, I just show you the same facts again.
link |
00:22:29.700
It's that fact that is the reason to do good.
link |
00:22:32.000
There's nothing more fundamental than that.
link |
00:22:34.600
I'd like to sort of make more concrete
link |
00:22:38.200
the thing we're trying to make better.
link |
00:22:41.000
So you just mentioned malaria.
link |
00:22:43.100
So there's a huge amount of suffering in the world.
link |
00:22:46.600
Are we trying to remove?
link |
00:22:50.000
So is ultimately the goal, not ultimately,
link |
00:22:53.500
but the first step is to remove the worst of the suffering.
link |
00:22:59.000
So there's some kind of threshold of suffering
link |
00:23:01.600
that we want to make sure does not exist in the world.
link |
00:23:06.400
Or do we really naturally want to take a much further step
link |
00:23:11.100
and look at things like income inequality?
link |
00:23:14.600
So not just getting everybody above a certain threshold,
link |
00:23:17.000
but making sure that there's some,
link |
00:23:21.500
that broadly speaking,
link |
00:23:23.600
there's less injustice in the world, unfairness,
link |
00:23:27.400
in some definition, of course,
link |
00:23:29.200
very difficult to define fairness.
link |
00:23:31.200
Yeah, so the metric I use is how many people do we affect
link |
00:23:35.500
and by how much do we affect them?
link |
00:23:37.300
And so that can, often that means eliminating suffering,
link |
00:23:43.200
but it doesn't have to,
link |
00:23:44.200
could be helping promote a flourishing life instead.
link |
00:23:47.800
And so if I was comparing reducing income inequality
link |
00:23:53.000
or getting people from the very pits of suffering
link |
00:23:58.300
to a higher level,
link |
00:24:00.600
the question I would ask is just a quantitative one
link |
00:24:03.100
of just if I do this first thing or the second thing,
link |
00:24:06.200
how many people am I going to benefit
link |
00:24:08.100
and by how much am I going to benefit?
link |
00:24:10.000
Am I going to move that one person from, kind of,
link |
00:24:13.500
0% well being to 10% well being?
link |
00:24:17.200
Perhaps that's just not as good as moving a hundred people
link |
00:24:20.200
from 10% well being to 50% well being.
link |
00:24:22.800
And the idea is the diminishing returns is the idea of
link |
00:24:27.200
when you're in terrible poverty,
link |
00:24:32.800
then the $1 that you give goes much further
link |
00:24:38.200
than if you were in the middle class in the United States,
link |
00:24:40.700
for example.
link |
00:24:41.700
Absolutely.
link |
00:24:42.300
And this fact is really striking.
link |
00:24:44.500
So if you take even just quite a conservative estimate
link |
00:24:51.600
of how we are able to turn money into well being,
link |
00:24:56.900
the economists put it as like a log curve.
link |
00:25:00.100
That, or steeper.
link |
00:25:02.000
But that means that any proportional increase
link |
00:25:04.600
in your income has the same impact on your well being.
link |
00:25:09.300
And so someone moving from $1,000 a year
link |
00:25:11.500
to $2,000 a year has the same impact
link |
00:25:15.800
as someone moving from $100,000 a year to $200,000 a year.
link |
00:25:20.600
And then when you combine that with the fact that we
link |
00:25:23.200
as middle class members of rich countries are 100 times richer
link |
00:25:28.700
in financial terms than the global poor,
link |
00:25:31.100
that means we can do a hundred times as much to benefit the poorest people
link |
00:25:33.700
in the world as we can to benefit people of our income level.
link |
00:25:37.600
And that's this astonishing fact.
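To make the arithmetic here concrete, the following is a minimal sketch assuming a natural-log utility curve; the exact curve and the specific numbers are illustrative assumptions, not a precise model from the conversation.

```python
import math

def wellbeing(income_dollars_per_year):
    # Log-utility model: equal proportional increases in income give
    # equal gains in wellbeing. Natural log is an illustrative stand-in
    # for the "log curve, or steeper" mentioned above.
    return math.log(income_dollars_per_year)

# Doubling income produces the same wellbeing gain at any income level.
gain_poor = wellbeing(2_000) - wellbeing(1_000)       # $1k -> $2k per year
gain_rich = wellbeing(200_000) - wellbeing(100_000)   # $100k -> $200k per year
print(round(gain_poor, 3), round(gain_rich, 3))       # both ~0.693 (ln 2)

# One extra dollar is worth roughly 100x more to someone 100x poorer,
# which is the "hundred times" comparison made in the conversation.
marginal_poor = wellbeing(1_001) - wellbeing(1_000)
marginal_rich = wellbeing(100_001) - wellbeing(100_000)
print(round(marginal_poor / marginal_rich))           # ~= 100
```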
link |
00:25:39.400
Yeah, it's quite incredible.
link |
00:25:40.900
A lot of these facts and ideas are just difficult to think about
link |
00:25:47.600
because there's an overwhelming amount of suffering in the world.
link |
00:25:56.000
And even acknowledging it is difficult.
link |
00:26:00.700
Not exactly sure why that is.
link |
00:26:02.300
I mean, I mean, it's difficult because you have to bring to mind,
link |
00:26:07.700
you know, it's an unpleasant experience thinking
link |
00:26:10.000
about other people's suffering.
link |
00:26:11.700
It's unpleasant to be empathizing with it, firstly.
link |
00:26:14.700
And then secondly, thinking about it means
link |
00:26:16.700
that maybe we'd have to change our lifestyles.
link |
00:26:19.000
And if you're very attached to the income that you've got,
link |
00:26:22.900
perhaps you don't want to be confronting ideas or arguments
link |
00:26:26.500
that might cause you to use some of that money to help others.
link |
00:26:31.400
So it's quite understandable in the psychological terms,
link |
00:26:34.600
even if it's not the right thing that we ought to be doing.
link |
00:26:38.100
So how can we do better?
link |
00:26:40.100
How can we be more effective?
link |
00:26:42.400
How does data help?
link |
00:26:44.400
Yeah, in general, how can we do better?
link |
00:26:47.500
It's definitely hard.
link |
00:26:48.800
And we have spent the last 10 years engaged in kind of some deep research projects,
link |
00:26:54.700
to try and answer kind of two questions.
link |
00:26:59.500
One is, of all the many problems the world is facing,
link |
00:27:02.500
what are the problems we ought to be focused on?
link |
00:27:04.700
And then within those problems that we judge to be kind of the most pressing,
link |
00:27:08.600
where we use this idea of focusing on problems that are the biggest in scale,
link |
00:27:13.200
that are the most tractable,
link |
00:27:15.600
where we can make the most progress on that problem,
link |
00:27:20.900
and that are the most neglected.
link |
00:27:23.800
Within them, what are the things that have the kind of best evidence,
link |
00:27:27.500
or we have the best guess, will do the most good.
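As a rough illustration of the problem-selection criteria just described (scale, tractability, neglectedness), here is a minimal sketch; the example problems, the scores, and the multiplicative combination are assumptions for illustration only, not figures or methodology from GiveWell or 80,000 Hours.

```python
# Rate each problem on scale, tractability, and neglectedness, then combine
# the ratings to get a rough priority ordering. All numbers are made up.

problems = {
    # name: (scale, tractability, neglectedness), each on an assumed 1-10 scale
    "global health and development": (8, 9, 4),
    "factory farming": (7, 6, 8),
    "existential risk": (10, 4, 9),
}

def priority(scores):
    scale, tractability, neglectedness = scores
    # Multiplying the three factors is one simple way to combine them.
    return scale * tractability * neglectedness

# Rank problems from highest to lowest combined score.
for name, scores in sorted(problems.items(), key=lambda kv: -priority(kv[1])):
    print(f"{name}: {priority(scores)}")
```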
link |
00:27:32.000
And so we have a bunch of organizations.
link |
00:27:34.500
So GiveWell, for example, is focused on global health and development,
link |
00:27:39.200
and has a list of seven top recommended charities.
link |
00:27:42.300
So the idea in general, and sorry to interrupt,
link |
00:27:44.600
is, so we'll talk about sort of poverty and animal welfare and existential risk.
link |
00:27:48.600
Those are all fascinating topics, but in general,
link |
00:27:52.200
the idea is there should be a group,
link |
00:27:56.200
sorry, there's a lot of groups that seek to convert money into good.
link |
00:28:04.100
And then you also on top of that want to have an accounting
link |
00:28:11.500
of how good they actually perform that conversion,
link |
00:28:15.900
how well they did in converting money to good.
link |
00:28:18.400
So ranking of these different groups,
link |
00:28:20.400
ranking these charities.
link |
00:28:24.000
So does that apply across basically all aspects of effective altruism?
link |
00:28:29.600
So there should be a group of people,
link |
00:28:31.700
and they should report on certain metrics of how well they've done,
link |
00:28:35.700
and you should only give your money to groups that do a good job.
link |
00:28:39.900
That's the core idea. I'd make two comments.
link |
00:28:43.500
One is just, it's not just about money.
link |
00:28:45.300
So we're also trying to encourage people to work in areas
link |
00:28:49.700
where they'll have the biggest impact.
link |
00:28:51.300
Absolutely.
link |
00:28:51.900
And in some areas, you know, they're really people heavy, but money poor.
link |
00:28:56.400
Other areas are kind of money rich and people poor.
link |
00:28:59.700
And so whether it's better to focus time or money depends on the cause area.
link |
00:29:05.200
And then the second is that you mentioned metrics,
link |
00:29:08.300
and while that's the ideal, and in some areas we do,
link |
00:29:12.300
we are able to get somewhat quantitative information
link |
00:29:15.100
about how much impact an area is having.
link |
00:29:18.900
That's not always true.
link |
00:29:20.200
For some of the issues, like you mentioned existential risks,
link |
00:29:23.800
well, we're not able to measure in any sort of precise way
link |
00:29:30.400
like how much progress we're making.
link |
00:29:32.400
And so you have to instead fall back on just rigorous argument and evaluation,
link |
00:29:38.500
even in the absence of data.
link |
00:29:41.000
So let's first sort of linger on your own story for a second.
link |
00:29:47.400
How do you yourself practice effective altruism in your own life?
link |
00:29:51.100
Because I think that's a really interesting place to start.
link |
00:29:54.700
So I've tried to build effective altruism into at least many components of my life.
link |
00:30:00.100
So on the donation side, my plan is to give away most of my income
link |
00:30:06.200
over the course of my life.
link |
00:30:07.500
I've set a bar I feel happy with and I just donate above that bar.
link |
00:30:12.400
So at the moment, I donate about 20% of my income.
link |
00:30:17.300
Then on the career side, I've also shifted kind of what I do,
link |
00:30:22.000
where I was initially planning to work on very esoteric topics
link |
00:30:28.400
in the philosophy of logic, philosophy of language,
link |
00:30:30.800
things that are intellectually extremely interesting,
link |
00:30:33.000
but the path by which they really make a difference to the world is,
link |
00:30:37.400
let's just say it's very unclear at best.
link |
00:30:40.600
And so I switched instead to researching ethics to actually just working
link |
00:30:44.600
on this question of how we can do as much good as possible.
link |
00:30:48.400
And then I've also spent a very large chunk of my life over the last 10 years
link |
00:30:53.300
creating a number of nonprofits who again in different ways
link |
00:30:56.400
are tackling this question of how we can do the most good
link |
00:31:00.000
and helping them to grow over time too.
link |
00:31:02.000
Yeah, we mentioned a few of them with the career selection, 80,000.
link |
00:31:06.600
80,000 hours.
link |
00:31:07.500
80,000 hours is a really interesting group.
link |
00:31:11.100
So maybe also just a quick pause on the origins of effective altruism
link |
00:31:18.400
because you paint a picture who the key figures are,
link |
00:31:21.700
including yourself in the effective altruism movement today.
link |
00:31:26.800
Yeah, there are two main strands that kind of came together
link |
00:31:31.300
to form the effective altruism movement.
link |
00:31:34.800
So one was two philosophers, myself and Toby Ord at Oxford,
link |
00:31:40.400
and we had been very influenced by the work of Peter Singer,
link |
00:31:43.900
an Australian moral philosopher who had argued for many decades
link |
00:31:47.200
that because one can do so much good at such little cost to oneself,
link |
00:31:52.900
we have an obligation to give away most of our income
link |
00:31:55.600
to benefit those in extreme poverty,
link |
00:31:58.200
just in the same way that we have an obligation to run in
link |
00:32:01.300
and save a child from drowning in a shallow pond
link |
00:32:04.700
if it would just ruin your suit that cost a few thousand dollars.
link |
00:32:10.300
And we set up Giving What We Can in 2009,
link |
00:32:13.100
which is encouraging people to give at least 10% of their income
link |
00:32:16.000
to the most effective charities.
link |
00:32:18.100
And the second main strand was the formation of GiveWell,
link |
00:32:21.300
which was originally based in New York and started in about 2007.
link |
00:32:26.300
And that was set up by Holden Karnofsky and Elie Hassenfeld,
link |
00:32:30.200
who were two hedge fund dudes who were making good money
link |
00:32:36.200
and thinking, well, where should I donate?
link |
00:32:38.400
And in the same way as if they wanted to buy a product for themselves,
link |
00:32:42.100
they would look at Amazon reviews.
link |
00:32:44.100
They were like, well, what are the best charities?
link |
00:32:46.600
Found there just weren't really good answers to that question,
link |
00:32:49.300
certainly not that they were satisfied with.
link |
00:32:51.200
And so they formed GiveWell in order to try and work out
link |
00:32:56.200
what are those charities where they can have the biggest impact.
link |
00:32:59.000
And then from there and some other influences,
link |
00:33:02.200
kind of community grew and spread.
link |
00:33:05.200
Can we explore the philosophical and political space
link |
00:33:08.600
that effective altruism occupies a little bit?
link |
00:33:11.400
So, from the little that I've read of Ayn Rand's work, distant in my own lifetime,
link |
00:33:16.600
Ayn Rand's philosophy of objectivism
link |
00:33:21.100
espouses, and it's interesting to put her philosophy in contrast
link |
00:33:26.700
with effective altruism.
link |
00:33:28.000
So it espouses selfishness as the best thing you can do.
link |
00:33:34.000
But it's not actually against altruism.
link |
00:33:37.600
It's just you have that choice, but you should be selfish in it, right?
link |
00:33:43.100
Or not, maybe you can disagree here.
link |
00:33:44.800
But so it can be viewed as the complete opposite of effective altruism
link |
00:33:49.500
or it can be viewed as similar because the word effective is really interesting.
link |
00:33:55.500
Because if you want to do good, then you should be damn good at doing good, right?
link |
00:34:02.200
I think that would fit within the morality that's defined by objectivism.
link |
00:34:08.600
So do you see a connection between these two philosophies
link |
00:34:11.100
and other perhaps in this complicated space of beliefs
link |
00:34:17.300
that effective altruism is positioned as opposing or aligned with?
link |
00:34:24.700
I would definitely say that objectivism, Ayn Rand's philosophy,
link |
00:34:27.800
is a philosophy that's quite fundamentally opposed to effective altruism.
link |
00:34:33.100
In which way?
link |
00:34:34.300
Insofar as Ayn Rand's philosophy is about championing egoism
link |
00:34:38.600
and saying that I'm never quite sure whether the philosophy is meant to say
link |
00:34:42.800
that just you ought to do whatever will best benefit yourself,
link |
00:34:47.300
that's ethical egoism, no matter what the consequences are.
link |
00:34:50.700
Or second, if there's this alternative view, which is, well,
link |
00:34:55.200
you ought to try and benefit yourself because that's actually the best way
link |
00:34:59.800
of benefiting society.
link |
00:35:02.900
Certainly, in Atlas Shrugged, she is presenting her philosophy
link |
00:35:07.500
as a way that's actually going to bring about a flourishing society.
link |
00:35:12.000
And if it's the former, then well, effective altruism is all about promoting
link |
00:35:16.100
the idea of altruism and saying, in fact,
link |
00:35:18.800
we ought to really be trying to help others as much as possible.
link |
00:35:22.400
So it's opposed there.
link |
00:35:23.900
And then on the second side, I would just dispute the empirical premise.
link |
00:35:28.700
It would seem, given the major problems in the world today,
link |
00:35:31.500
it would seem like this remarkable coincidence,
link |
00:35:34.200
quite suspicious, one might say, if benefiting myself was actually
link |
00:35:38.500
the best way to bring about a better world.
link |
00:35:41.100
So on that point, and I think that connects also with career selection
link |
00:35:46.800
that we'll talk about, but let's consider not objectivism, but capitalism.
link |
00:35:53.100
And the idea that you focusing on the thing that you are damn good at,
link |
00:36:00.900
whatever that is, may be the best thing for the world.
link |
00:36:05.800
Part of it is also mindset, right?
link |
00:36:09.800
The thing I love is robots.
link |
00:36:13.200
So maybe I should focus on building robots
link |
00:36:17.500
and never even think about the idea of effective altruism,
link |
00:36:22.500
which is kind of the capitalist notion.
link |
00:36:25.000
Is there any value in that idea in just finding the thing you're good at
link |
00:36:28.500
and maximizing your productivity in this world
link |
00:36:31.500
and thereby sort of lifting all boats and benefiting society as a result?
link |
00:36:38.600
Yeah, I think there's two things I'd want to say on that.
link |
00:36:41.000
So one is what your comparative advantages,
link |
00:36:43.500
what your strengths are when it comes to career.
link |
00:36:45.400
That's obviously super important because there's lots of career paths
link |
00:36:49.300
I would be terrible at. If I thought being an artist was the best thing one could do,
link |
00:36:53.800
Well, I'd be doomed, just really quite astonishingly bad.
link |
00:36:59.300
And so I do think, at least within the realm of things that could plausibly be very high impact,
link |
00:37:05.800
choose the thing that you think you're going to be able to really be passionate at
link |
00:37:11.500
and excel at over the long term.
link |
00:37:15.100
Then on this question of should one just do that in an unrestricted way
link |
00:37:19.000
and not even think about what the most important problems are.
link |
00:37:22.300
I do think that in a kind of perfectly designed society, that might well be the case.
link |
00:37:27.800
That would be a society where we've corrected all market failures,
link |
00:37:31.500
we've internalized all externalities,
link |
00:37:34.700
and then we've managed to set up incentives such that people just pursuing their own strengths
link |
00:37:41.700
is the best way of doing good.
link |
00:37:44.100
But we're very far from that society.
link |
00:37:46.200
So if one did that, then it would be very unlikely that you would focus
link |
00:37:53.000
on improving the lives of nonhuman animals that aren't participating in markets
link |
00:37:57.900
or ensuring the long run future goes well,
link |
00:38:00.000
where future people certainly aren't participating in markets
link |
00:38:03.200
or benefiting the global poor who do participate,
link |
00:38:06.400
but have so much less kind of power from a starting perspective
link |
00:38:11.000
that their views aren't accurately kind of represented by market forces too.
link |
00:38:18.900
Got it.
link |
00:38:19.500
So yeah, in this sort of pure, by-definition capitalism,
link |
00:38:22.700
it just may very well ignore the people that are suffering the most,
link |
00:38:27.000
a wide swath of them.
link |
00:38:28.900
So if you could allow me this line of thinking here.
link |
00:38:35.400
So I've listened to a lot of your conversations online.
link |
00:38:38.800
I find, if I can compliment you, they're very interesting conversations.
link |
00:38:46.000
Your conversation on Rogan, on Joe Rogan was really interesting,
link |
00:38:50.100
with Sam Harris and so on, whatever.
link |
00:38:55.600
There's a lot of stuff that's really good out there.
link |
00:38:58.000
And yet, when I look at the internet and I look at YouTube,
link |
00:39:01.600
which has certain mobs, certain swaths of right leaning folks,
link |
00:39:08.200
whom I dearly love.
link |
00:39:12.500
I love all people, especially people with ideas.
link |
00:39:19.000
They seem to not like you very much.
link |
00:39:22.700
So I don't understand why exactly.
link |
00:39:26.200
So my own sort of hypothesis is there is a right left divide
link |
00:39:31.100
that absurdly so caricatured in politics,
link |
00:39:36.100
at least in the United States.
link |
00:39:38.300
And maybe you're somehow pigeonholed into one of those sides.
link |
00:39:42.700
And maybe that's what it is.
link |
00:39:46.600
Maybe your message is somehow politicized.
link |
00:39:49.600
Yeah, I mean.
link |
00:39:50.800
How do you make sense of that?
link |
00:39:52.200
Because you're extremely interesting.
link |
00:39:54.400
Like you got the comments I see on Joe Rogan.
link |
00:39:58.600
There's a bunch of negative stuff.
link |
00:40:00.400
And yet, if you listen to it, the conversation is fascinating.
link |
00:40:03.200
I'm not speaking, I'm not some kind of lefty extremist,
link |
00:40:08.300
but just it's a fascinating conversation.
link |
00:40:10.100
So why are you getting some small amount of hate?
link |
00:40:13.800
So I'm actually pretty glad that Effective Altruism has managed
link |
00:40:18.100
to stay relatively unpoliticized because I think the core message
link |
00:40:24.000
to just use some of your time and money to do as much good as possible
link |
00:40:27.100
to fight some of the problems in the world can be appealing
link |
00:40:30.100
across the political spectrum.
link |
00:40:31.700
And we do have a diversity of political viewpoints among people
link |
00:40:35.500
who have engaged in Effective Altruism.
link |
00:40:38.800
We do, however, do get some criticism from the left and the right.
link |
00:40:42.700
Oh, interesting.
link |
00:40:43.400
What's the criticism?
link |
00:40:44.400
Both would be interesting to hear.
link |
00:40:45.800
Yeah, so criticism from the left is that we're not focused enough
link |
00:40:49.300
on dismantling the capitalist system that they see as the root
link |
00:40:54.100
of most of the problems that we're talking about.
link |
00:40:58.500
And there I kind of disagree on partly the premise where I don't
link |
00:41:06.800
think relevant alternative systems would serve the animals or the
link |
00:41:11.900
global poor or future generations much better.
link |
00:41:15.400
And then also the tactics where I think there are particular ways
link |
00:41:19.000
we can change society that would massively benefit, you know,
link |
00:41:22.400
be massively beneficial on those things that don't go via dismantling
link |
00:41:27.600
like the entire system, which is perhaps a million times harder to do.
link |
00:41:30.900
Then criticism on the right, there's definitely like in response
link |
00:41:34.900
to the Joe Rogan podcast.
link |
00:41:36.900
There definitely were a number of Ayn Rand fans who weren't keen
link |
00:41:40.000
on the idea of promoting altruism.
link |
00:41:43.000
There was a remarkable set of ideas.
link |
00:41:46.900
Just the idea that Effective Altruism was unmanly, I think, was
link |
00:41:50.700
driving a lot of criticism.
link |
00:41:52.100
Okay, so I love fighting.
link |
00:41:56.700
I've been in street fights my whole life.
link |
00:41:58.900
I'm as alpha in everything I do as it gets.
link |
00:42:04.100
And the fact that Joe Rogan said that I thought Scent of a Woman
link |
00:42:08.700
is a better movie than John Wick put me into this beta category
link |
00:42:14.600
amongst people who are like basically saying this, yeah, unmanly
link |
00:42:20.700
or it's not tough.
link |
00:42:21.500
It's not some principled view of strength that is represented
link |
00:42:26.900
by a spasmodic.
link |
00:42:27.700
So actually, so how do you think about this?
link |
00:42:31.200
Because to me, altruism, especially Effective Altruism, I don't
link |
00:42:41.400
know what the female version of that is, but on the male side, manly
link |
00:42:44.800
as fuck, if I may say so.
link |
00:42:46.300
So how do you think about that kind of criticism?
link |
00:42:51.500
I think people who would make that criticism are just occupying
link |
00:42:55.400
a like state of mind that I think is just so different from my
link |
00:42:59.200
state of mind that I kind of struggle to maybe even understand it
link |
00:43:03.300
where if something's manly or unmanly or feminine or unfeminine,
link |
00:43:07.700
I'm like, I don't care.
link |
00:43:08.700
Like, is it the right thing to do or the wrong thing to do?
link |
00:43:11.000
So let me put it not in terms of man or woman.
link |
00:43:14.700
I don't think that's useful, but I think there's a notion of acting
link |
00:43:20.100
out of fear as opposed to out of principle and strength.
link |
00:43:26.700
Yeah.
link |
00:43:27.400
So, okay.
link |
00:43:28.400
Yeah.
link |
00:43:28.600
Here's something that I do feel as an intuition and that I think
link |
00:43:33.500
drives some people who do find Ayn Rand attractive and so on
link |
00:43:38.200
as a philosophy, which is a kind of taking control of your own
link |
00:43:43.000
life and having power over how you're steering your life and not
link |
00:43:51.300
kind of kowtowing to others, you know, really thinking things through.
link |
00:43:55.500
I find like that set of ideas just very compelling and inspirational.
link |
00:43:59.800
I actually think effective altruism has really, you know, that
link |
00:44:04.300
side of my personality.
link |
00:44:05.300
It's like scratched that itch, where you are just not taking the kind
link |
00:44:11.400
of priorities that society is giving you as granted.
link |
00:44:14.100
Instead, you're choosing to act in accordance with the priorities
link |
00:44:19.300
that you think are most important in the world.
link |
00:44:21.200
And often that involves then doing quite unusual things from a
link |
00:44:29.400
societal perspective, like donating a large chunk of your earnings
link |
00:44:33.400
or working on these weird issues about AI and so on that other
link |
00:44:38.100
people might not understand.
link |
00:44:39.200
Yeah, I think that's a really gutsy thing to do.
link |
00:44:42.000
That is taking control.
link |
00:44:43.400
That's at least at this stage.
link |
00:44:45.600
I mean, that's you taking ownership, not of just yourself, but
link |
00:44:53.300
your presence in this world that's full of suffering and saying
link |
00:44:58.500
as opposed to being paralyzed by that notion is taking control
link |
00:45:02.300
and saying I could do something.
link |
00:45:03.600
Yeah, I mean, that's really powerful.
link |
00:45:05.900
But I mean, sort of the one thing I personally hate too about the
link |
00:45:09.500
left currently that I think those folks to detect is the social
link |
00:45:15.500
signaling. When you look at yourself, sort of late at night, would
link |
00:45:21.600
you do everything you're doing in terms of effective altruism if
link |
00:45:25.900
your name, because you're quite popular, but if your name was
link |
00:45:29.300
totally unattached to it, so if it was in secret.
link |
00:45:32.400
Yeah, I mean, I think I would.
link |
00:45:34.800
To be honest, I think the kind of popularity is like, you know,
link |
00:45:39.800
it's mixed bag, but there are serious costs.
link |
00:45:43.300
And I don't particularly, I don't like love it.
link |
00:45:45.600
Like, it means you get all these people calling you a cuck on
link |
00:45:49.700
Joe Rogan.
link |
00:45:50.300
It's like not the most fun thing.
link |
00:45:51.900
But you also get a lot of sort of brownie points for doing good
link |
00:45:56.100
for the world.
link |
00:45:56.700
Yeah, you do.
link |
00:45:57.800
But I think my ideal life, I would be like in some library solving
link |
00:46:02.200
logic puzzles all day and I'd like really be like learning maths
link |
00:46:06.500
and so on.
link |
00:46:07.100
So you have a like good body of friends and so on.
link |
00:46:10.600
So your instinct for effective altruism is something deep.
link |
00:46:14.500
It's not one that is communicating
link |
00:46:19.100
socially. It's more in your heart.
link |
00:46:21.300
You want to do good for the world.
link |
00:46:23.200
Yeah, I mean, so we can look back to early Giving What We Can.
link |
00:46:26.700
So, you know, we're setting this up, me and Toby.
link |
00:46:31.800
And I really thought that doing this would be a big hit to my
link |
00:46:36.500
academic career because I was now spending, you know, at that time
link |
00:46:40.100
more than half my time setting up this nonprofit at the crucial
link |
00:46:43.700
time when you should be like producing your best academic work
link |
00:46:46.500
and so on.
link |
00:46:47.000
And it was also the case at the time.
link |
00:46:49.700
It was kind of like the Toby Ord club.
link |
00:46:52.900
You know, he was the most popular.
link |
00:46:55.300
There's this personal interest story about him and his plans
link |
00:46:57.700
to donate. And sorry to interrupt, but Toby was donating a large
link |
00:47:02.600
amount. Can you tell just briefly what he was doing?
link |
00:47:05.100
Yeah, so he made this public commitment to give everything
link |
00:47:09.000
he earned above 20,000 pounds per year to the most effective
link |
00:47:13.900
causes. And even as a graduate student, he was still donating
link |
00:47:17.400
about 15, 20% of his income, which is quite significant
link |
00:47:21.600
given that graduate students are not known for being super
link |
00:47:24.100
wealthy.
link |
00:47:24.500
That's right. And when we launched Giving What We Can, the
link |
00:47:28.500
media just loved this as like a personal interest story.
link |
00:47:31.500
So the story about him and his pledge was the most, yeah, it
link |
00:47:38.500
was actually the most popular news story of the day.
link |
00:47:40.500
And we kind of ran the same story a year later and it was
link |
00:47:43.400
the most popular news story of the day a year later too.
link |
00:47:45.800
And so it really was kind of several years before then I
link |
00:47:53.100
was also kind of giving more talks and starting to do more
link |
00:47:55.400
writing and then especially with, you know, I wrote this book
link |
00:47:58.000
Doing Good Better that then there started to be kind of attention
link |
00:48:02.100
and so on. But deep inside your own relationship with effective
link |
00:48:06.300
altruism was, I mean, it had nothing to do with the publicity.
link |
00:48:12.300
Did you see yourself?
link |
00:48:14.400
How did the publicity connect with it?
link |
00:48:16.900
Yeah, I mean, that's kind of what I'm saying is I think the
link |
00:48:19.700
publicity came like several years afterwards.
link |
00:48:22.900
I mean, at the early stage when we set up Giving What We Can,
link |
00:48:25.400
it was really just every person we get to pledge 10% is, you
link |
00:48:30.200
know, something like $100,000 over their lifetime.
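A minimal back-of-the-envelope sketch of where a lifetime figure like that comes from; the income level and career length used below are illustrative assumptions of mine, not numbers from the conversation:

```python
# Back-of-the-envelope arithmetic for what a 10% pledge adds up to over a career.
# The income and career-length inputs are illustrative assumptions, not quoted figures.

def lifetime_donation(annual_income: float, pledge_fraction: float, working_years: int) -> float:
    """Total donated over a career, ignoring income growth, inflation, and discounting."""
    return annual_income * pledge_fraction * working_years

# Example: a $25,000/year earner pledging 10% over a 40-year career
print(lifetime_donation(25_000, 0.10, 40))  # 100000.0, roughly the figure mentioned above
```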
link |
00:48:34.800
That's huge.
link |
00:48:35.800
And so at the start, we had just 23 members, and every single
link |
00:48:39.600
person was this kind of huge accomplishment.
link |
00:48:43.200
And at the time, I just really thought, you know, maybe over
link |
00:48:46.500
time we'll have a hundred members and that'll be like amazing.
link |
00:48:49.700
Whereas now we have, you know, over four thousand members and one and
link |
00:48:52.900
a half billion dollars pledged.
link |
00:48:54.100
That's just unimaginable to me at the time when I was first kind
link |
00:48:59.100
of getting this, you know, getting the stuff off the ground.
link |
00:49:02.000
So can we talk about poverty, and the biggest problems that you
link |
00:49:10.100
think effective altruism can attack in the near term, one by
link |
00:49:15.300
one? So poverty obviously is a huge one.
link |
00:49:18.900
Yeah. How can we help?
link |
00:49:21.400
Great.
link |
00:49:22.200
Yeah.
link |
00:49:22.400
So poverty, absolutely, is this huge problem:
link |
00:49:24.800
700 million people in extreme poverty, living on less than two
link |
00:49:28.800
dollars per day, and what that means is what two dollars
link |
00:49:33.800
would buy in the US.
link |
00:49:34.900
So think about that.
link |
00:49:36.900
It's like some rice, maybe some beans.
link |
00:49:38.800
It's very, you know, really not much.
link |
00:49:40.600
And at the same time, we can do an enormous amount to improve
link |
00:49:45.600
the lives of people in extreme poverty.
link |
00:49:47.400
So the things that we tend to focus on are interventions in global
link |
00:49:51.800
health, and that's for a few reasons.
link |
00:49:54.600
One is that global health just has this amazing track record:
link |
00:49:58.100
life expectancy globally is up 50% relative to 60 or 70 years
link |
00:50:02.700
ago. We've eradicated smallpox, which killed 2 million
link |
00:50:06.600
people every year, and almost eradicated polio.
link |
00:50:08.900
Second is that we just have great data on what works when it
link |
00:50:13.800
comes to global health.
link |
00:50:14.600
So we just know that bed nets prevent
link |
00:50:20.500
children from dying from malaria.
link |
00:50:21.600
And then the third is just that it's extremely cost effective.
link |
00:50:26.300
So it costs $5 to buy one bed net, which protects two children for
link |
00:50:30.800
two years against malaria.
link |
00:50:31.900
If you spend about $3,000 on bed nets, then statistically
link |
00:50:35.600
speaking, you're going to save a child's life.
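As a rough sketch of the arithmetic behind those numbers, using only the round, conversational figures quoted here ($5 per net, two children protected per net, roughly $3,000 per statistical life saved) rather than GiveWell's actual published model:

```python
# Rough cost-effectiveness arithmetic for insecticide-treated bed nets,
# using the round numbers from the conversation, not GiveWell's full model.

COST_PER_NET = 5.0             # dollars per bed net
CHILDREN_PER_NET = 2           # children protected per net, for roughly two years
COST_PER_LIFE_SAVED = 3_000.0  # dollars per statistical life saved (stated estimate)

def nets_bought(donation: float) -> float:
    return donation / COST_PER_NET

def children_protected(donation: float) -> float:
    return nets_bought(donation) * CHILDREN_PER_NET

def expected_lives_saved(donation: float) -> float:
    return donation / COST_PER_LIFE_SAVED

donation = 3_000.0
print(nets_bought(donation))           # 600.0 nets bought
print(children_protected(donation))    # 1200.0 children protected for ~2 years
print(expected_lives_saved(donation))  # ~1.0 statistical life saved
```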
link |
00:50:37.300
And there are other interventions too.
link |
00:50:40.900
And so given that people are in such suffering, and we have this
link |
00:50:45.300
opportunity to, you know, do such huge good at such low cost,
link |
00:50:50.800
well, yeah, why not?
link |
00:50:52.000
So the individual.
link |
00:50:53.300
So for me today, if I wanted to look at poverty, how would
link |
00:50:59.400
I help? And I want to say, I think donating 10% of your
link |
00:51:03.700
income is a very interesting idea, or some percentage, or
link |
00:51:07.000
setting a bar and sort of sticking to it.
link |
00:51:09.400
How do we then take the step towards the effective part?
link |
00:51:14.700
So you've conveyed some notions, but who do you give the
link |
00:51:19.200
money to? Yeah.
link |
00:51:21.300
So GiveWell, this organization I mentioned, well, it makes
link |
00:51:25.900
charity recommendations, and one of its top recommendations,
link |
00:51:29.300
the Against Malaria Foundation, is this organization that buys
link |
00:51:34.200
and distributes these insecticide-treated bed nets.
link |
00:51:37.300
And then it has a total of seven charities that it recommends
link |
00:51:41.400
very highly. So that recommendation, is it almost like a stamp
link |
00:51:46.100
of approval, or are there some metrics?
link |
00:51:48.800
So what are the ways that GiveWell conveys that this is a
link |
00:51:54.600
great charity organization?
link |
00:51:57.200
Yeah.
link |
00:51:58.000
So GiveWell is looking at metrics and it's trying to compare
link |
00:52:01.700
charities ultimately in the number of lives that you can save
link |
00:52:05.800
or an equivalent benefit.
link |
00:52:07.500
So one of the charities it recommends is GiveDirectly, which
link |
00:52:11.700
simply transfers cash to the poorest families, where a poor
link |
00:52:17.100
family will get a cash transfer of $1,000, and they kind of
link |
00:52:20.800
regard that as the baseline intervention, because it's so simple
link |
00:52:24.600
and people, you know, know how to benefit
link |
00:52:27.300
themselves. That's quite powerful, by the way.
link |
00:52:30.400
So before GiveWell, before the Effective Altruism Movement, was
link |
00:52:34.600
there? I imagine there's a huge amount of corruption, funnily
link |
00:52:39.000
enough, in charity organizations or misuse of money.
link |
00:52:42.100
Yeah.
link |
00:52:43.500
So there was nothing like GiveWell before that?
link |
00:52:46.200
No.
link |
00:52:46.500
I mean, there were some.
link |
00:52:47.700
So, I mean, charity corruption, obviously
link |
00:52:49.500
there's some, but I don't think it's a huge issue.
link |
00:52:53.800
They're also just focusing on the wrong things. Prior to GiveWell,
link |
00:52:57.700
there were some organizations like Charity Navigator, which
link |
00:53:00.900
were more aimed at worrying about corruption and so on.
link |
00:53:04.600
So they weren't saying, these are the charities where you're
link |
00:53:07.300
going to do the most good. Instead, it was like, how good
link |
00:53:10.300
are the charity's financials?
link |
00:53:12.700
How good is its health?
link |
00:53:14.100
Are they transparent? And yeah, so that would be more useful
link |
00:53:16.800
for weeding out some of those worst charities.
link |
00:53:19.200
So GiveWell has just taken a step further, sort of in this
link |
00:53:21.900
21st century of data.
link |
00:53:25.200
It's actually looking at the effective part.
link |
00:53:28.700
Yeah. So it's like, you know, the Wirecutter:
link |
00:53:32.100
if you want to buy a pair of headphones, they will just look
link |
00:53:34.200
at all the headphones and be like, these are the best headphones
link |
00:53:36.400
you can buy.
link |
00:53:37.800
That's the idea with GiveWell.
link |
00:53:39.300
Okay.
link |
00:53:39.700
So do you think there's a bar of what suffering is?
link |
00:53:44.400
And do you think one day we can eradicate suffering in our
link |
00:53:47.800
world? Yeah.
link |
00:53:49.400
Amongst humans?
link |
00:53:50.200
Let's talk humans for now. Talk humans.
link |
00:53:52.300
But in general, yeah, actually.
link |
00:53:55.000
So there's a colleague of mine who coined the term abolitionism
link |
00:54:00.800
for the idea that we should just be trying to abolish
link |
00:54:02.800
suffering. And in the long run, I mean, I don't expect it
link |
00:54:06.100
anytime soon, but I think we can.
link |
00:54:09.100
I think that would require, you know, quite
link |
00:54:11.900
drastic changes to the way society is structured, and perhaps
link |
00:54:15.400
even, in fact, changes to human
link |
00:54:21.600
nature. But I do think that suffering whenever it occurs
link |
00:54:25.400
is bad and we should want it to not occur.
link |
00:54:28.300
So there's a line.
link |
00:54:31.500
There's a gray area between suffering.
link |
00:54:33.900
Now I'm Russian.
link |
00:54:34.700
So I romanticize some aspects of suffering.
link |
00:54:38.600
There's a gray line between struggle, gray area between
link |
00:54:41.400
struggle and suffering.
link |
00:54:42.700
So one, do we want to eradicate all struggle in the world?
link |
00:54:51.800
So there's an idea, you know, that the human condition
link |
00:54:59.900
inherently has suffering in it and it's a creative force.
link |
00:55:04.800
It's a struggle of our lives and we somehow grow from that.
link |
00:55:09.400
How do you think about, how do you think about that?
link |
00:55:13.600
I agree that's true.
link |
00:55:15.600
So, you know, often, you know, great artists can be also
link |
00:55:20.300
suffering from, you know, major health conditions or depression
link |
00:55:24.300
and so on. They come from abusive parents.
link |
00:55:26.600
Most great artists, I think, come from abusive parents.
link |
00:55:29.900
Yeah, that seems to be at least commonly the case, but I
link |
00:55:33.200
want to distinguish between suffering as being instrumentally
link |
00:55:37.100
good, you know, it causes people to produce good things and
link |
00:55:40.900
whether it's intrinsically good and I think intrinsically
link |
00:55:43.300
it's always bad.
link |
00:55:44.500
And so if we can produce these, you know, great achievements
link |
00:55:48.000
via some other means where, you know, if we look at the
link |
00:55:52.200
scientific enterprise, we've produced incredible things
link |
00:55:55.000
often from people who aren't suffering, have, you know,
link |
00:55:59.300
pretty good lives.
link |
00:56:00.000
They're just, they're driven instead of, you know, being
link |
00:56:02.700
pushed by a certain sort of anguish.
link |
00:56:04.200
They're being driven by intellectual curiosity.
link |
00:56:06.200
If we can instead produce a society where it's all carrot
link |
00:56:11.300
and no stick, that's better from my perspective.
link |
00:56:14.000
Yeah, but I'm going to disagree with the notion that that's
link |
00:56:17.000
possible, but I would say most of the suffering in the world
link |
00:56:21.600
is not productive.
link |
00:56:23.100
So I would dream of effective altruism curing that suffering.
link |
00:56:28.200
Yeah, but then I would say that there is some suffering that
link |
00:56:30.800
is productive that we want to keep. But that's
link |
00:56:35.600
not even the focus, because most of the suffering is just
link |
00:56:38.800
absurd and needs to be eliminated.
link |
00:56:44.100
So let's not even romanticize this usual notion I have,
link |
00:56:47.700
but nevertheless struggle has some kind of inherent value,
link |
00:56:51.800
to me at least. But you're right.
link |
00:56:56.900
There's some elements of human nature that also have to
link |
00:56:59.400
be modified in order to cure all suffering.
link |
00:57:01.900
Yeah, I mean, there's an interesting question of whether
link |
00:57:03.900
it's possible.
link |
00:57:04.500
So at the moment, you know, most of the time we're kind
link |
00:57:07.000
of neutral and then we burn ourselves and that's negative
link |
00:57:10.300
and that's really good that we get that negative signal
link |
00:57:13.200
because it means we won't burn ourselves again.
link |
00:57:15.800
There's a question: could you design agents, humans, such
link |
00:57:21.100
that you're not hovering around the zero level, you're hovering
link |
00:57:23.600
at, like, bliss?
link |
00:57:24.600
Yeah, and then you touch the flame and you're like, oh no,
link |
00:57:26.700
you're just at slightly worse bliss.
link |
00:57:28.300
Yeah, but that's really bad compared to the bliss you
link |
00:57:31.800
were normally in, so that you can have like a gradient of
link |
00:57:34.200
bliss instead of, like, pain and pleasure. On that point,
link |
00:57:37.300
I think it's a really important point about the experience
link |
00:57:41.200
of suffering, the relative nature of it.
link |
00:57:46.500
Maybe it's having grown up in the Soviet Union. We were quite poor
link |
00:57:52.100
by any measure when I was in my childhood,
link |
00:57:58.100
but it didn't feel like you were poor, because everybody around
link |
00:58:01.000
you was poor. And then in America, I, for
link |
00:58:06.100
the first time, began to feel poor.
link |
00:58:09.200
Yeah.
link |
00:58:09.500
Yeah, because there it's different.
link |
00:58:11.900
There's some cultural aspects to it that really emphasize
link |
00:58:15.200
that it's good to be rich.
link |
00:58:17.200
And then there's just the notion that there is a lot of
link |
00:58:19.500
income inequality and therefore you experience that inequality.
link |
00:58:23.000
That's where the suffering comes in.
link |
00:58:24.200
So what do you think about the inequality of suffering
link |
00:58:27.400
that we have to think about? Do you think we have to
link |
00:58:32.900
think about that as part of effective altruism?
link |
00:58:37.300
Yeah, I think things just vary in terms of whether
link |
00:58:41.800
you get benefits or costs from them in relative terms
link |
00:58:45.000
or in absolute terms.
link |
00:58:46.700
So a lot of the time, yeah, there's this hedonic treadmill,
link |
00:58:49.300
where, you know, money is useful,
link |
00:58:56.800
or good for you, because it helps
link |
00:58:59.700
you buy things, but there's also a status component too,
link |
00:59:02.500
and that status component is kind of zero sum. As you were
link |
00:59:06.600
saying, like in Russia, you know, no one felt poor
link |
00:59:10.900
because everyone around you was poor.
link |
00:59:13.500
Whereas now you've got these other people who are,
link |
00:59:17.600
you know, super rich, and maybe that makes you feel,
link |
00:59:22.600
you know, less good about yourself.
link |
00:59:24.100
There are some other things however, which are just
link |
00:59:27.300
intrinsically good or bad.
link |
00:59:28.800
So commuting for example, it's just people hate it.
link |
00:59:33.000
It doesn't really change: knowing that other people are
link |
00:59:35.500
commuting too doesn't make it any kind of less bad,
link |
00:59:40.000
but it's... Sort of to push back on that for a second:
link |
00:59:42.800
I mean, yes, but also if some people were, you know, on
link |
00:59:48.300
horseback, your commute on the train might feel a lot better.
link |
00:59:52.200
Yeah, you know, there is a relative nature to it.
link |
00:59:55.400
I mean, everybody's complaining about society today, forgetting
link |
00:59:59.400
how much better it is, the better angels of
link |
01:00:04.400
our nature, how the technologies are fundamentally
link |
01:00:07.200
improving most of the world's lives.
link |
01:00:09.300
Yeah, and actually there's some psychological research
link |
01:00:13.000
on the well being benefits of volunteering where people
link |
01:00:16.800
who volunteer tend to just feel happier about their lives
link |
01:00:20.900
and one of the suggested explanations is that it
link |
01:00:23.700
extends your reference class.
link |
01:00:25.600
So you're no longer comparing yourself to the Joneses who
link |
01:00:28.700
have their slightly better car, because you realize that,
link |
01:00:31.500
you know, there are people in much worse conditions than you, and
link |
01:00:34.300
so now, you know, your life doesn't seem so bad.
link |
01:00:37.900
That's actually, on the psychological level,
link |
01:00:39.800
one of the fundamental benefits of effective altruism.
link |
01:00:42.700
Yeah, I mean, I guess it's the altruism part of
link |
01:00:47.700
effective altruism: exposing yourself to the suffering
link |
01:00:51.700
in the world allows you to be
link |
01:00:55.700
happier, and actually allows you, in a sort of
link |
01:00:59.900
meditative, introspective way, to realize that you don't need
link |
01:01:03.000
most of the wealth you have to be happy.
link |
01:01:07.800
Absolutely.
link |
01:01:08.300
I mean, I think effective altruism has been this huge
link |
01:01:10.400
benefit for me, and I really don't think that if I had
link |
01:01:13.400
more money to live on, that would change
link |
01:01:16.400
my level of well being at all.
link |
01:01:18.100
Whereas engaging in something that I think is meaningful
link |
01:01:21.500
that I think is steering humanity in a positive direction.
link |
01:01:25.200
That's extremely rewarding.
link |
01:01:27.400
And so yeah, I mean, despite my best attempts at sacrifice,
link |
01:01:32.500
um, I think I've actually ended up
link |
01:01:35.000
happier as a result of engaging in effective altruism
link |
01:01:37.500
than I would have done.
link |
01:01:38.800
That's such an interesting idea.
link |
01:01:40.300
Yeah, so let's talk about animal welfare.
link |
01:01:43.200
Sure, easy question. What is consciousness?
link |
01:01:46.700
Yeah, especially as it has to do with the capacity to
link |
01:01:50.400
suffer. There seems to be a connection between
link |
01:01:53.600
how conscious something is, the amount of consciousness,
link |
01:01:57.400
and its ability to suffer, and that all comes into play
link |
01:02:01.100
when we think about how much suffering there is in the
link |
01:02:03.300
world with regard to animals.
link |
01:02:05.600
So how do you think about animal welfare and consciousness?
link |
01:02:08.700
Okay.
link |
01:02:09.200
Well consciousness easy question.
link |
01:02:10.700
Okay.
link |
01:02:11.100
Um, yeah, I mean, I think we don't have a good understanding
link |
01:02:13.800
of consciousness.
link |
01:02:14.500
My best guess is... and by consciousness,
link |
01:02:17.000
I mean what it feels like to be you, the subjective
link |
01:02:21.200
experience, which seems to be different from everything
link |
01:02:24.000
else we know about in the world.
link |
01:02:26.000
Yeah, I think it's clear.
link |
01:02:27.400
It's very poorly understood at the moment.
link |
01:02:29.400
I think it has something to do with information processing.
link |
01:02:32.000
So the fact that the brain is a computer or something
link |
01:02:35.000
like a computer.
link |
01:02:36.300
So that would mean that very advanced AI could be conscious,
link |
01:02:40.300
that information processors in general could be conscious,
link |
01:02:44.000
given some suitable
link |
01:02:48.300
complexity.
link |
01:02:49.200
It's a question whether greater complexity creates some
link |
01:02:51.500
kind of greater consciousness which relates to animals.
link |
01:02:54.900
Yeah, right.
link |
01:02:55.600
If it's an information processing system and it gets
link |
01:02:59.400
smaller and smaller, is an ant less conscious than a cow,
link |
01:03:04.100
less conscious than a monkey?
link |
01:03:06.200
Yeah, and again, this is a super hard question, but I think my
link |
01:03:10.900
best guess is yes. I think, well, consciousness,
link |
01:03:14.500
it's not some magical thing that appears out of nowhere.
link |
01:03:17.700
It's not, you know... Descartes thought it just comes in
link |
01:03:20.800
from this other realm and then enters through the pineal
link |
01:03:23.600
gland in your brain, and that's kind of the soul, and it's conscious.
link |
01:03:28.400
So it's got something to do with what's going on in your
link |
01:03:30.200
brain.
link |
01:03:30.700
A chicken has a brain one three-hundredth the size of the one
link |
01:03:34.200
that you have. An ant's,
link |
01:03:36.100
I don't know how small it is,
link |
01:03:37.500
maybe it's a millionth the size. My best guess, which I may
link |
01:03:41.900
well be wrong about because this is so hard, is that in some
link |
01:03:45.300
relevant sense the chicken is experiencing consciousness
link |
01:03:49.400
to a lesser degree than the human, and the ant significantly
link |
01:03:51.900
less again.
link |
01:03:52.900
I don't think it's as little as one three-hundredth as much.
link |
01:03:55.400
I think, as everyone who's ever seen a chicken knows,
link |
01:03:59.100
there's evolutionary reasons for thinking that the
link |
01:04:02.500
ability to feel pain comes on the scene relatively early
link |
01:04:06.000
on, and we have lots of our brain dedicated to stuff
link |
01:04:08.800
that doesn't seem to have anything to do with
link |
01:04:10.800
consciousness, like language processing and so on.
link |
01:04:13.900
So there's a lot of complicated
link |
01:04:16.900
questions there that we can't ask the animals about, but
link |
01:04:21.300
it seems that there are easy questions in terms of suffering,
link |
01:04:24.800
which is things like factory farming, that could be addressed.
link |
01:04:29.400
Yeah, is that the lowest hanging fruit,
link |
01:04:32.300
if I may use crude terms here, of animal welfare?
link |
01:04:37.000
Absolutely.
link |
01:04:37.700
I think that's the lowest hanging fruit.
link |
01:04:39.100
So at the moment we raise and kill about 50 billion
link |
01:04:43.200
animals every year.
link |
01:04:44.600
So how many? 50 billion?
link |
01:04:48.000
Yeah, so for every human on the planet, several times that
link |
01:04:52.300
number are being killed, and the vast majority of them are
link |
01:04:55.200
raised in factory farms, where basically, whatever your view
link |
01:04:59.200
on animals, I think you should agree. Even if you think, well,
link |
01:05:02.400
maybe it's not bad to kill an animal,
link |
01:05:03.900
maybe it's fine if the animal was raised in good conditions, that's
link |
01:05:06.500
just not the empirical reality.
link |
01:05:07.900
The empirical reality is that they are kept in incredible
link |
01:05:11.700
cage confinement.
link |
01:05:12.900
They are debeaked or de-tailed without anesthetic; you
link |
01:05:18.000
know, chickens would otherwise often peck each other to death
link |
01:05:20.900
because they're under such stress.
link |
01:05:23.800
It's really, you know, I think when a chicken gets killed
link |
01:05:26.900
that's the best thing that happened to the chicken in the
link |
01:05:29.200
course of its life and it's also completely unnecessary.
link |
01:05:32.700
This is in order to save, you know, a few pence on the price
link |
01:05:35.900
of meat or the price of eggs, and we have indeed found it's also
link |
01:05:41.400
just inconsistent with consumer preferences as well. People
link |
01:05:44.500
who buy the products, when you
link |
01:05:49.000
do surveys, are extremely against suffering in factory farms.
link |
01:05:52.800
It's just that they don't appreciate how bad it is and, you know,
link |
01:05:55.300
just tend to go with the easy options.
link |
01:05:57.500
And so the best, the most effective programs I know of
link |
01:06:00.800
at the moment are nonprofits that go to companies and work
link |
01:06:04.700
with companies to get them to take a pledge to cut certain
link |
01:06:09.900
sorts of animal products like eggs from cage confinement
link |
01:06:13.200
out of their supply chain.
link |
01:06:14.700
And it's now the case that the top 50 food retailers and
link |
01:06:19.400
fast food companies have all made these kind of cage free
link |
01:06:23.700
pledges and when you do the numbers you get the conclusion
link |
01:06:27.000
that every dollar you're giving to these nonprofits results
link |
01:06:29.800
in hundreds of chickens being spared from cage confinement.
link |
01:06:33.300
And then they're working to extend that to other types of animals and
link |
01:06:37.600
other products too.
link |
01:06:39.300
So is that the most effective way to do it, to have a ripple
link |
01:06:43.300
effect, essentially, as opposed to directly having regulation
link |
01:06:48.100
from the top that says you can't do this?
link |
01:06:51.500
So I would be more open to the regulation approach, but
link |
01:06:55.500
at least in the US there's quite intense regulatory capture
link |
01:06:59.100
from the agricultural industry.
link |
01:07:01.000
And so the attempts that we've seen to try and change regulation,
link |
01:07:05.800
it's been a real uphill struggle.
link |
01:07:08.700
There are some examples of ballot initiatives where the
link |
01:07:13.300
people have been able to vote in a ballot to say we want
link |
01:07:16.500
to ban eggs from cage conditions and that's been huge.
link |
01:07:19.600
That's been really good, but beyond that it's much more
link |
01:07:22.600
limited. So I've been really interested in the idea of
link |
01:07:27.500
hunting in general and wild animals and seeing nature as
link |
01:07:32.800
a form of cruelty that I am ethically more okay with.
link |
01:07:41.400
Okay, just from my perspective. And then I read about wild
link |
01:07:46.100
animal suffering. I'm just giving you the
link |
01:07:48.900
notion of how I felt, because
link |
01:07:53.900
animal factory farming is so bad
link |
01:07:57.000
that living in the woods seemed good.
link |
01:08:00.100
Yeah, and yet when you actually start to think about it,
link |
01:08:04.300
I mean, all of the animals in the wild are
link |
01:08:08.600
living in like terrible poverty, right?
link |
01:08:11.300
Yeah.
link |
01:08:11.600
Yeah, so you have all the medical conditions all of that.
link |
01:08:15.100
I mean they're living horrible lives.
link |
01:08:17.000
It could be improved.
link |
01:08:18.700
That's a really interesting notion that I think may not
link |
01:08:21.400
even be useful to talk about because factory farming is
link |
01:08:24.600
such a big thing to focus on.
link |
01:08:26.500
Yeah, but it's nevertheless an interesting notion to think
link |
01:08:29.800
of all the animals in the wild as suffering in the same
link |
01:08:32.900
way that humans in poverty are suffering.
link |
01:08:34.900
Yeah, I mean, and often even worse. So many animals
link |
01:08:38.400
reproduce by r-selection.
link |
01:08:39.800
So you have a very large number of offspring in the expectation
link |
01:08:44.700
that only a small number survive.
link |
01:08:46.700
And so for those animals almost all of them just live short
link |
01:08:49.900
lives where they starve to death.
link |
01:08:53.100
So yeah, there are huge amounts of suffering in nature, and
link |
01:08:55.100
I don't think we should, you know, pretend that it's this kind
link |
01:09:00.000
of wonderful paradise for most animals.
link |
01:09:04.900
Yeah, their life is filled with hunger and fear and disease.
link |
01:09:10.400
Yeah, I do agree with you entirely that when it comes
link |
01:09:13.600
to focusing on animal welfare, we should focus on factory
link |
01:09:15.700
farming, but we should also be aware of the reality
link |
01:09:20.400
of what life for most animals is like.
link |
01:09:22.300
So let's talk about a topic I've talked a lot about and
link |
01:09:26.400
you've actually quite eloquently talked about which is the
link |
01:09:29.700
third priority that effective altruism considers really
link |
01:09:34.900
important: existential risks.
link |
01:09:37.600
Yeah, when you think about the existential risks that
link |
01:09:41.500
are facing our civilization, what's before us?
link |
01:09:45.600
What concerns you?
link |
01:09:46.600
What should we be thinking about, especially
link |
01:09:49.200
from an effective altruism perspective?
link |
01:09:51.100
Great. So the reason I started getting concerned about
link |
01:09:53.900
this was thinking about future generations where the key
link |
01:09:59.500
idea is just well future people matter morally.
link |
01:10:03.200
There are vast numbers of future people.
link |
01:10:05.300
If we don't cause our own extinction, there's no reason
link |
01:10:07.400
why civilization might not last a million years.
link |
01:10:11.900
I mean, that's if we last as long as a typical mammalian species.
link |
01:10:14.500
Or a billion years, which is when the Earth is no longer habitable,
link |
01:10:18.700
or if we can take to the stars, then perhaps it's trillions
link |
01:10:21.500
of years beyond that.
link |
01:10:23.100
So the future could be very big indeed and it seems like
link |
01:10:25.500
we're potentially very early on in civilization.
link |
01:10:29.000
Then the second idea is just, well, maybe there are things
link |
01:10:31.100
that are going to really derail that, things that actually
link |
01:10:33.600
could prevent us from having this long, wonderful civilization
link |
01:10:37.400
and instead could cause our own extinction
link |
01:10:43.900
or otherwise perhaps lock ourselves into a very bad
link |
01:10:48.100
state. And what ways could that happen?
link |
01:10:53.100
Well, causing our own extinction: the development of nuclear
link |
01:10:56.700
weapons in the 20th century at least put on the table
link |
01:11:00.600
that we now had weapons that were powerful enough that
link |
01:11:04.100
you could very significantly destroy society. Perhaps
link |
01:11:07.600
an all-out nuclear war would cause a nuclear winter.
link |
01:11:09.900
Perhaps that would be enough for the human race to go
link |
01:11:14.100
extinct.
link |
01:11:14.700
Why do you think we haven't done it? Sorry to interrupt.
link |
01:11:18.000
Why do you think we haven't done it yet?
link |
01:11:19.300
Is it surprising to you that, having had, you know,
link |
01:11:26.800
for the past few decades several thousand active, ready
link |
01:11:30.500
to launch nuclear warheads, we have not
link |
01:11:35.400
used them ever since the initial use on Hiroshima
link |
01:11:42.100
and Nagasaki?
link |
01:11:42.900
I think it's a mix of luck.
link |
01:11:46.400
So I think it's definitely not inevitable that we haven't
link |
01:11:48.300
used them.
link |
01:11:49.300
So John F. Kennedy, during the Cuban Missile Crisis, put the
link |
01:11:52.300
odds of a nuclear exchange between the US and USSR
link |
01:11:55.700
at somewhere between one in three and even. So, you know,
link |
01:11:59.100
we really did come close.
link |
01:12:03.000
At the same time, I do think mutually assured destruction
link |
01:12:06.900
is a reason why people don't go to war.
link |
01:12:08.600
It would be, you know, why nuclear powers don't go to war.
link |
01:12:11.900
Do you think that holds? If you can linger on that for a
link |
01:12:15.200
second: my dad is a physicist, amongst other things,
link |
01:12:20.600
and he believes that nuclear weapons are actually just
link |
01:12:24.900
really hard to build, which is one of the really big benefits
link |
01:12:29.600
of them currently, so that it's very hard,
link |
01:12:34.600
if you're crazy, to build or acquire a nuclear weapon.
link |
01:12:38.700
So mutually assured destruction seems
link |
01:12:41.200
to work better when it's nation states, when
link |
01:12:46.200
it's serious people, even if they're a little bit, you
link |
01:12:49.900
know, dictatorial and so on.
link |
01:12:52.900
Do you think this mutually assured destruction idea will
link |
01:12:56.200
carry us... how far will it carry us in terms of different kinds
link |
01:13:01.000
of weapons?
link |
01:13:02.200
Oh, yeah, I think your point that nuclear weapons
link |
01:13:06.700
are very hard to build and relatively easy to control,
link |
01:13:09.600
because you can control fissile material, is a really
link |
01:13:12.700
important one, and future technology that's equally destructive
link |
01:13:16.000
might not have those properties.
link |
01:13:18.500
So for example, if in the future people are able to design
link |
01:13:23.700
viruses, perhaps using a DNA printing kit that,
link |
01:13:29.600
you know, one can just buy.
link |
01:13:31.300
In fact, there are companies in the process of creating
link |
01:13:37.500
home DNA printing kits. Well, then perhaps that's just
link |
01:13:42.800
totally democratized.
link |
01:13:44.000
Perhaps the power to wreak huge destruction is
link |
01:13:48.600
in the hands of most people in the world, or certainly
link |
01:13:52.000
most people with effort, and then, yeah, I no longer trust
link |
01:13:55.300
mutually assured destruction, because for some people
link |
01:13:59.500
the idea that they would die is just not a disincentive.
link |
01:14:03.600
There was a Japanese cult, for example,
link |
01:14:05.200
Aum Shinrikyo, in the 90s. What they believed
link |
01:14:10.400
was that Armageddon was coming. If you died before Armageddon,
link |
01:14:14.800
you would get good karma.
link |
01:14:17.200
You wouldn't go to hell. If you died during Armageddon,
link |
01:14:20.300
maybe you would go to hell. And they had a biological weapons
link |
01:14:25.500
program and a chemical weapons program. When they were finally
link |
01:14:28.600
apprehended,
link |
01:14:29.300
they had stocks of sarin gas that were sufficient to
link |
01:14:33.500
kill 4 million people, and they had engaged in multiple terrorist acts.
link |
01:14:36.900
If they had had the ability to print a virus at home,
link |
01:14:40.300
that would have been very scary.
link |
01:14:42.500
So it's not impossible to imagine groups of people that
link |
01:14:45.900
hold that kind of belief, of death or suicide as a good
link |
01:14:54.200
thing, a passage into the next world and so on, and then
link |
01:14:58.100
connect them with some weapons; then ideology and weaponry
link |
01:15:04.400
may create serious problems for us.
link |
01:15:07.000
Let me ask you a quick question: what do you think is
link |
01:15:09.800
the line between killing most humans and killing all humans?
link |
01:15:14.300
How hard is it to kill everybody?
link |
01:15:17.600
Yeah, have you thought about this?
link |
01:15:19.800
I've thought about it a bit.
link |
01:15:20.700
I think it is very hard to kill everybody.
link |
01:15:22.600
So in the case of let's say an all out nuclear exchange
link |
01:15:26.600
and let's say that leads to nuclear winter.
link |
01:15:28.300
We don't really know, but, you know, it might well happen.
link |
01:15:34.400
That would, I think, result in billions of deaths. Would
link |
01:15:38.300
it kill everybody?
link |
01:15:39.500
It's quite hard to see how it would
link |
01:15:42.600
kill everybody, for a few reasons.
link |
01:15:45.500
One is just that there are so many people.
link |
01:15:47.900
Yes, you know, seven and a half billion people.
link |
01:15:49.600
So this bad event has to kill, you know, almost
link |
01:15:54.200
all of them.
link |
01:15:54.800
Secondly, people live in such a diversity of locations.
link |
01:15:57.600
So a nuclear exchange or a virus has to kill people
link |
01:16:00.800
who live on the coast of New Zealand, which is going to
link |
01:16:04.600
be climatically much more stable than other areas in the
link |
01:16:08.700
world or people who are on submarines or who have access
link |
01:16:14.400
to bunkers.
link |
01:16:15.000
So there's just, like, I'm sure there's
link |
01:16:18.000
like two guys in Siberia who are just badass.
link |
01:16:20.800
Human nature just somehow perseveres.
link |
01:16:25.400
Yeah, and then the other thing is just that if there's some
link |
01:16:28.400
catastrophic event, people really don't want to die.
link |
01:16:31.600
So there's going to be like, you know, huge amounts of
link |
01:16:34.200
effort to ensure that it doesn't affect everyone.
link |
01:16:37.100
Have you thought about what it takes to rebuild a society
link |
01:16:42.200
with smaller numbers, like how big of a setback
link |
01:16:45.400
these kinds of things are?
link |
01:16:47.200
Yeah, so that's something where there's real uncertainty,
link |
01:16:50.100
I think, where at some point you just lose sufficient
link |
01:16:55.100
genetic diversity such that you can't come back.
link |
01:16:58.300
It's unclear how small that population is.
link |
01:17:03.700
But if you've only got say a thousand people or fewer
link |
01:17:07.300
than a thousand, then maybe that's small enough.
link |
01:17:09.100
What about human knowledge? And then there's human knowledge.
link |
01:17:14.900
I mean, it's striking how quick, on geological timescales
link |
01:17:19.400
or evolutionary timescales,
link |
01:17:23.200
the progress in human knowledge has been. Agriculture
link |
01:17:26.000
we only invented in 10,000 BC; cities only around, you know,
link |
01:17:31.600
3000 BC; whereas a typical mammal species lasts half a million
link |
01:17:35.500
years to a million years.
link |
01:17:37.400
Do you think it's inevitable, in some sense? Agriculture,
link |
01:17:40.200
everything that came after, the Industrial Revolution, cars, planes,
link |
01:17:45.800
the internet, that level of innovation, you think, is inevitable?
link |
01:17:50.700
I think so, given how quickly it arose.
link |
01:17:55.200
So in the case of agriculture, I think that was dependent
link |
01:17:58.000
on climate.
link |
01:17:58.500
So the glacial period was over, the Earth
link |
01:18:05.600
warmed up a bit, and that made it much more likely that humans
link |
01:18:10.300
would develop agriculture. When it comes to the Industrial
link |
01:18:14.000
Revolution, it's just, you know, it again only took a few thousand
link |
01:18:19.100
years from cities to the Industrial Revolution. If we think, okay,
link |
01:18:22.700
we've gone back to, let's say, the agricultural era,
link |
01:18:27.300
and there's no reason why we would go extinct in the
link |
01:18:29.600
coming tens of thousands of years or hundreds of thousands
link |
01:18:32.200
of years,
link |
01:18:33.100
it just seems that
link |
01:18:34.200
it would be very surprising if we didn't rebound, unless
link |
01:18:37.500
there's some special reason that makes things different.
link |
01:18:40.000
Yes.
link |
01:18:40.400
So perhaps we just have a much greater disease burden
link |
01:18:44.600
now, so HIV exists.
link |
01:18:46.600
It didn't exist before, and perhaps that's kind of latent,
link |
01:18:50.500
you know, being suppressed by modern medicine
link |
01:18:53.500
and sanitation and so on, but it would be a much bigger problem
link |
01:18:57.800
for some, you know, utterly destroyed society that
link |
01:19:02.600
was trying to rebound. Or maybe there's just something
link |
01:19:06.600
we don't know about.
link |
01:19:07.500
So another existential risk comes from the mysterious the
link |
01:19:14.400
beautiful artificial intelligence.
link |
01:19:16.600
Yeah.
link |
01:19:17.500
So what's the shape of your concerns about AI?
link |
01:19:22.700
I think there are quite a lot of concerns about AI and
link |
01:19:25.300
sometimes the different risks don't get distinguished enough.
link |
01:19:30.400
So the kind of classic worry, most closely associated
link |
01:19:35.400
with Nick Bostrom and Eliezer Yudkowsky, is that we at some
link |
01:19:39.900
point move from having narrow AI systems to artificial
link |
01:19:43.000
general intelligence.
link |
01:19:44.400
You get this very fast feedback effect where AGI is able
link |
01:19:48.300
to build, you know, artificial intelligence helps you to
link |
01:19:51.300
build greater artificial intelligence.
link |
01:19:53.900
We have this one system that's suddenly very powerful, far
link |
01:19:57.100
more powerful than others, perhaps far more powerful
link |
01:20:01.000
than, you know, the rest of the world combined. And then,
link |
01:20:07.000
secondly, it has goals that are misaligned with human goals.
link |
01:20:10.400
And so it pursues its own goals.
link |
01:20:13.000
It realizes, hey, there's this competition, namely from humans.
link |
01:20:16.500
It would be better if we eliminated them in just the same
link |
01:20:19.300
way as homo sapiens eradicated the Neanderthals.
link |
01:20:22.700
In fact, it killed off most large animals
link |
01:20:28.400
that walked the planet. So that's kind of one set of
link |
01:20:32.200
worries. I think these shouldn't
link |
01:20:37.700
be dismissed as science fiction.
link |
01:20:41.000
I think it's something we should be taking very seriously,
link |
01:20:44.800
but it's not the thing you visualize when you're concerned
link |
01:20:47.200
about the biggest near term risk.
link |
01:20:49.700
Yeah, I think it's like one possible scenario
link |
01:20:54.100
that would be astronomically bad.
link |
01:20:55.500
I think there are other scenarios that would also be extremely
link |
01:20:57.900
bad, comparably bad, that are more likely to occur.
link |
01:21:01.000
So one is just we are able to control AI.
link |
01:21:05.600
So we're able to get it to do what we want it to do.
link |
01:21:10.000
And perhaps there's not like this fast takeoff of AI capabilities
link |
01:21:13.600
within a single system.
link |
01:21:14.700
It's distributed across many systems that do somewhat different
link |
01:21:17.900
things, but you do get very rapid economic and technological
link |
01:21:23.400
progress as a result that concentrates power into the hands
link |
01:21:27.000
of a very small number of individuals, perhaps a single
link |
01:21:29.600
dictator. And secondly, that single individual, or small
link |
01:21:35.500
group of individuals, or single country, is then able to
link |
01:21:38.400
lock in their values indefinitely via transmitting those
link |
01:21:43.100
values to artificial systems that have no reason to die
link |
01:21:46.400
like, you know, their code is copyable.
link |
01:21:49.500
Perhaps, you know, Donald Trump or Xi Jinping creates their
link |
01:21:53.900
kind of AI progeny in their own image. And once you have
link |
01:21:58.200
a society that's controlled
link |
01:22:02.200
by AI, you no longer have one of the main drivers of change
link |
01:22:06.400
historically, which is the fact that human lifespans are
link |
01:22:10.600
you know, only a hundred years give or take.
link |
01:22:12.300
So that's really interesting.
link |
01:22:13.200
So as opposed to sort of killing off all humans, it's locking
link |
01:22:18.100
in, creating a hell on earth, basically a set of principles
link |
01:22:25.000
under which the society operates that's extremely undesirable.
link |
01:22:28.900
So everybody is suffering indefinitely.
link |
01:22:31.200
Or it doesn't, I mean, it also doesn't need to be hell on
link |
01:22:33.900
earth. It could just be the wrong values.
link |
01:22:35.700
So we talked at the very beginning about how I want to
link |
01:22:40.400
see this kind of diversity of different values and exploration
link |
01:22:43.300
so that we can just work out what is kind of morally, like,
link |
01:22:46.900
what is good, what is bad, and then pursue the thing that's
link |
01:22:49.600
good. So actually, the idea of wrong values is actually
link |
01:22:55.000
probably the beautiful thing is there's no such thing as
link |
01:22:59.200
right and wrong values because we don't know the right
link |
01:23:01.200
answer. We just kind of have a sense of which value is more
link |
01:23:04.700
right, which is more wrong.
link |
01:23:06.500
So any kind of lock in makes a value wrong because it
link |
01:23:10.500
prevents exploration of this kind.
link |
01:23:13.000
Yeah, and just, you know, imagine if fascist values, you
link |
01:23:17.500
know, imagine if there was Hitler's utopia or Stalin's utopia
link |
01:23:21.000
or Donald Trump's or Xi Jinping's forever.
link |
01:23:24.100
Yeah, you know, how good or bad would that be compared
link |
01:23:28.900
to the best possible future we could create? And my suggestion
link |
01:23:33.400
is it would really suck compared to the best possible
link |
01:23:36.200
future we could create.
link |
01:23:37.000
And you're just one individual.
link |
01:23:38.400
There's some individuals for whom Donald Trump is perhaps
link |
01:23:44.400
the best possible future.
link |
01:23:46.100
And so that's the whole point of us individuals exploring
link |
01:23:49.900
the space together.
link |
01:23:51.000
Exactly.
link |
01:23:51.500
Yeah, and we're trying to figure out which is the path
link |
01:23:54.800
that will make America great again.
link |
01:23:56.500
Yeah, exactly.
link |
01:23:58.200
So how can effective altruism help?
link |
01:24:03.200
I mean, this is a really interesting notion you're actually
link |
01:24:05.100
describing, of artificial intelligence being used as an extremely
link |
01:24:09.800
powerful technology in the hands of very few, potentially
link |
01:24:13.300
one person, to create some very undesirable effect.
link |
01:24:17.300
So as opposed to AI and again, the source of the undesirableness
link |
01:24:21.300
there is the human.
link |
01:24:23.000
Yeah, AI is just a really powerful tool.
link |
01:24:26.200
So whether it's that, or whether AI, AGI, just runs away
link |
01:24:30.500
from us completely.
link |
01:24:31.600
How as individuals, as people in the effective altruism
link |
01:24:38.400
movement, how can we think about something like this?
link |
01:24:41.100
I understand poverty and animal welfare, but this is a far
link |
01:24:44.200
out incredibly mysterious and difficult problem.
link |
01:24:47.500
Great.
link |
01:24:47.800
Well, I think there's three paths as an individual.
link |
01:24:50.600
So if you're thinking about, you know, career paths you
link |
01:24:55.400
can pursue.
link |
01:24:56.000
So one is going down the line of technical AI safety.
link |
01:24:59.100
So this is most relevant to the kind of AI-winning, AI-takeover
link |
01:25:05.800
scenarios, where this is just technical work on current
link |
01:25:10.700
machine learning systems, sometimes going more theoretical,
link |
01:25:13.600
on how we can ensure that an AI is able to learn human
link |
01:25:17.800
values and able to act in the way that you want it to act.
link |
01:25:21.500
And that's a pretty mainstream issue and approach in machine
link |
01:25:26.800
learning today.
link |
01:25:27.500
So, you know, we definitely need more people doing that.
link |
01:25:31.400
Second is on the policy side of things, which I think is
link |
01:25:34.100
even more important at the moment, which is how should developments
link |
01:25:40.400
in AI be managed on a political level?
link |
01:25:43.200
How can you ensure that the benefits of AI are widely distributed,
link |
01:25:47.600
that power isn't being concentrated in the hands
link |
01:25:50.500
of a small set of individuals.
link |
01:25:54.200
How do you ensure that there aren't arms races between different
link |
01:25:59.100
AI companies that might result in them, you know, cutting corners
link |
01:26:06.000
with respect to safety.
link |
01:26:07.200
And so there, the input that we as individuals can have is this.
link |
01:26:11.000
We're not talking about money.
link |
01:26:12.300
We're talking about effort.
link |
01:26:14.000
We're talking about career choices.
link |
01:26:15.600
We're talking about career choice.
link |
01:26:16.900
Yeah, but then it is the case that supposing, you know, you're
link |
01:26:20.700
like, I've already decided my career.
link |
01:26:22.200
I'm doing something quite different.
link |
01:26:24.500
You can contribute with money too, where at the Center for Effective
link |
01:26:28.000
Altruism, we set up the Long Term Future Fund.
link |
01:26:31.400
So if you go on to effectivealtruism.org, you can donate where
link |
01:26:36.800
a group of individuals will then work out what's the highest value
link |
01:26:40.600
place they can donate to work on existential risk issues with
link |
01:26:44.200
a particular focus on AI.
link |
01:26:46.900
What's path number three?
link |
01:26:48.400
This was path number three.
link |
01:26:49.500
Donations were the third option I was thinking of.
link |
01:26:53.400
Okay.
link |
01:26:53.900
And then, yeah, there are, you can also donate directly to organizations
link |
01:26:58.500
working on this, like Center for Human Compatible AI at Berkeley,
link |
01:27:01.900
Future of Humanity Institute at Oxford, or other organizations too.
link |
01:27:08.500
Does AI keep you up at night?
link |
01:27:10.200
This kind of concern?
link |
01:27:13.000
Yeah, it's kind of a mix where I think it's very likely things are
link |
01:27:17.300
going to go well. I think we're going to be able to solve these
link |
01:27:21.400
problems. I think that's by far the most likely outcome, at least
link |
01:27:25.500
over the next.
link |
01:27:25.900
By far the most likely.
link |
01:27:26.800
So if you look at all the trajectories running away from our
link |
01:27:30.800
current moment in the next hundred years, you see AI creating
link |
01:27:36.600
destructive consequences as a small subset of those possible
link |
01:27:41.300
trajectories.
link |
01:27:41.700
Or at least, yeah, kind of eternal, destructive consequences.
link |
01:27:44.900
I think that's a small subset.
link |
01:27:46.500
At the same time, it still freaks me out.
link |
01:27:48.500
I mean, when we're talking about the entire future of civilization,
link |
01:27:51.600
then small probabilities, you know, 1% probability, that's terrifying.
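That point is essentially an expected value calculation; here is a minimal sketch, where the number of potential future lives is a deliberately hypothetical input of mine rather than a figure from the conversation:

```python
# Expected-value illustration of why a small probability of losing the entire
# future can still dominate. The future-lives figure is hypothetical.

def expected_loss(p_catastrophe: float, future_lives: float) -> float:
    """Expected number of future lives forgone, given a probability of catastrophe."""
    return p_catastrophe * future_lives

# Even a 1% risk applied to, say, 10^15 potential future lives
print(f"{expected_loss(0.01, 1e15):.2e}")  # 1.00e+13 lives lost in expectation
```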
link |
01:27:56.900
What do you think about Elon Musk's strong worry that we should
link |
01:28:02.500
be really concerned about existential risks of AI?
link |
01:28:05.200
Yeah, I mean, I think, you know, broadly speaking, I think he's
link |
01:28:09.100
right.
link |
01:28:09.300
I think if we talked, we would probably have very different
link |
01:28:13.200
probabilities on how likely it is that we're doomed.
link |
01:28:16.200
But again, when it comes to talking about the entire future of
link |
01:28:19.700
civilization, it doesn't really matter if it's 1% or if it's
link |
01:28:23.200
50%, we ought to be taking every possible safeguard we can to
link |
01:28:26.700
ensure that things go well rather than poorly.
link |
01:28:30.300
Last question, if you yourself could eradicate one problem from
link |
01:28:34.000
the world, what would that problem be?
link |
01:28:35.700
That's a great question.
link |
01:28:37.600
I don't know if I'm cheating in saying this, but I think the
link |
01:28:42.900
thing I would most want to change is just the fact that people
link |
01:28:45.300
don't actually care about ensuring the long run future goes well.
link |
01:28:50.500
People don't really care about future generations.
link |
01:28:52.500
They don't think about it.
link |
01:28:53.300
It's not part of their aims.
link |
01:28:54.300
In some sense, you're not cheating at all because in speaking
link |
01:28:58.800
the way you do, in writing the things you're writing, in what you're
link |
01:29:02.200
doing, you're addressing exactly this aspect.
link |
01:29:05.800
Exactly.
link |
01:29:06.500
That is your input into the effective altruism movement.
link |
01:29:10.800
So for that, Will, thank you so much.
link |
01:29:12.900
It's an honor to talk to you.
link |
01:29:14.300
I really enjoyed it.
link |
01:29:15.000
Thanks so much for having me on.
link |
01:30:10.300
If that were the case, we'd probably be pretty generous.
link |
01:30:13.300
Next round's on me, but that's effectively the situation we're
link |
01:30:17.500
in all the time.
link |
01:30:18.800
It's like a 99% off sale or buy one get 99 free.
link |
01:30:23.400
Might be the most amazing deal you'll see in your life.
link |
01:30:27.000
Thank you for listening and hope to see you next time.