William MacAskill: Effective Altruism | Lex Fridman Podcast #84



link |
00:00:00.000
The following is a conversation with William MacAskill.
link |
00:00:03.600
He's a philosopher, ethicist, and one of the originators
link |
00:00:06.960
of the effective altruism movement.
link |
00:00:09.280
His research focuses on the fundamentals
link |
00:00:11.480
of effective altruism or the use of evidence and reason
link |
00:00:14.960
to help others by as much as possible
link |
00:00:17.480
with our time and money, with a particular concentration
link |
00:00:21.160
on how to act given moral uncertainty.
link |
00:00:24.360
He's the author of Doing Good Better, Effective Altruism
link |
00:00:28.600
and a Radical New Way to Make a Difference.
link |
00:00:31.200
He is a cofounder and the president
link |
00:00:33.880
of the Centre for Effective Altruism, CEA,
link |
00:00:37.120
that encourages people to commit to donate at least 10%
link |
00:00:40.680
of their income to the most effective charities.
link |
00:00:43.960
He cofounded 80,000 Hours, which is a nonprofit
link |
00:00:47.200
that provides research and advice on how you can best
link |
00:00:50.200
make a difference through your career.
link |
00:00:52.640
This conversation was recorded before the outbreak
link |
00:00:55.600
of the coronavirus pandemic.
link |
00:00:57.880
For everyone feeling the medical, psychological,
link |
00:01:00.360
and financial burden of this crisis,
link |
00:01:02.400
I'm sending love your way.
link |
00:01:04.280
Stay strong, we're in this together, we'll beat this thing.
link |
00:01:09.200
This is the Artificial Intelligence Podcast.
link |
00:01:12.000
If you enjoy it, subscribe on YouTube,
link |
00:01:14.120
review it with five stars on Apple Podcasts,
link |
00:01:16.280
support on Patreon, or simply connect with me on Twitter,
link |
00:01:19.480
at Lex Fridman, spelled F R I D M A N.
link |
00:01:23.240
As usual, I'll do one or two minutes of ads now,
link |
00:01:25.840
and never any ads in the middle
link |
00:01:27.360
that can break the flow of the conversation.
link |
00:01:29.800
I hope that works for you
link |
00:01:31.200
and doesn't hurt the listening experience.
link |
00:01:34.800
This show was presented by Cash App,
link |
00:01:36.800
the number one finance app in the App Store.
link |
00:01:39.080
When you get it, use code LEX Podcast.
link |
00:01:42.160
Cash App lets you send money to your friends,
link |
00:01:44.160
buy Bitcoin, and invest in the stock market
link |
00:01:46.320
with as little as $1.
link |
00:01:48.960
Since Cash App allows you to send
link |
00:01:50.520
and receive money digitally, peer to peer,
link |
00:01:52.880
and security in all digital transactions is very important,
link |
00:01:56.200
let me mention the PCI data security standard
link |
00:01:59.360
that Cash App is compliant with.
link |
00:02:01.440
I'm a big fan of standards for safety and security.
link |
00:02:04.360
PCI DSS is a good example of that,
link |
00:02:07.240
where a bunch of competitors got together and agreed
link |
00:02:10.120
that there needs to be a global standard
link |
00:02:11.880
around the security of transactions.
link |
00:02:14.520
Now, we just need to do the same for autonomous vehicles
link |
00:02:17.400
and AI systems in general.
link |
00:02:19.360
So again, if you get Cash App from the App Store,
link |
00:02:21.800
Google Play, and use the code LEX Podcast,
link |
00:02:25.040
you get $10, and Cash App will also donate $10 to FIRST,
link |
00:02:28.880
an organization that is helping to advance robotics
link |
00:02:31.600
and STEM education for young people around the world.
link |
00:02:34.640
And now, here's my conversation with William MacAskill.
link |
00:02:39.240
What does utopia for humans and all life on earth
link |
00:02:42.080
look like for you?
link |
00:02:43.480
That's a great question.
link |
00:02:45.400
What I wanna say is that we don't know,
link |
00:02:49.320
and the utopia we want to get to
link |
00:02:52.280
is an indirect one that I call the long reflection.
link |
00:02:55.520
So a period of post scarcity,
link |
00:02:57.920
where we no longer have the kind of urgent problems we have today,
link |
00:03:01.280
but instead can spend perhaps tens of thousands
link |
00:03:04.240
of years debating, engaging in ethical reflection
link |
00:03:08.120
before we take any kind of drastic, lock in
link |
00:03:12.160
actions like spreading to the stars,
link |
00:03:14.560
and then we can figure out what is right,
link |
00:03:19.000
what is of kind of moral value.
link |
00:03:20.560
The long reflection, that's a really beautiful term.
link |
00:03:25.160
So if we look at Twitter for just a second,
link |
00:03:29.680
do you think human beings are able to reflect
link |
00:03:34.400
in a productive way?
link |
00:03:37.440
I don't mean to make it sound bad
link |
00:03:39.600
because there is a lot of fights and politics
link |
00:03:42.720
and division in our discourse.
link |
00:03:45.040
Maybe if you zoom out, it actually is civilized discourse.
link |
00:03:48.960
It might not feel like it, but when you zoom out.
link |
00:03:51.800
So I don't wanna say that Twitter is not civilized discourse.
link |
00:03:55.200
I actually believe it's more civilized
link |
00:03:57.000
than people give it credit for.
link |
00:03:58.520
But do you think the long reflection can actually be stable
link |
00:04:03.720
where we as human beings with our descendants of a brains
link |
00:04:08.440
would be able to sort of rationally discuss things
link |
00:04:11.360
together and arrive at ideas?
link |
00:04:13.200
I think overall, we're pretty good
link |
00:04:17.600
at discussing things rationally
link |
00:04:19.840
and at least in the earliest stages of our lives
link |
00:04:26.280
being open to many different ideas
link |
00:04:28.520
and being able to be convinced and change our views.
link |
00:04:33.400
I think that Twitter is designed almost
link |
00:04:36.480
to bring out all of the worst tendencies.
link |
00:04:38.800
So if the long reflection were conducted on Twitter,
link |
00:04:43.320
maybe it would be better just not even to bother.
link |
00:04:46.280
But I think the challenge really is getting to a stage
link |
00:04:50.320
where we have a society that is as conducive as possible
link |
00:04:55.760
to rational reflection, to deliberation.
link |
00:04:59.080
I think we're actually very lucky
link |
00:05:01.320
to be in a liberal society where people are able
link |
00:05:04.680
to discuss a lot of ideas and so on.
link |
00:05:06.960
I think when we look to the future,
link |
00:05:08.160
that's not at all guaranteed that society would be like that
link |
00:05:12.440
rather than a society where there's a fixed canon
link |
00:05:16.000
of values that are being imposed on all of society
link |
00:05:20.720
and where you aren't able to question that.
link |
00:05:22.400
That would be very bad from my perspective
link |
00:05:24.040
because it means we wouldn't be able
link |
00:05:25.840
to figure out what the truth is.
link |
00:05:28.000
I can already sense we're gonna go down a million
link |
00:05:30.720
tangents, but what do you think is the,
link |
00:05:36.840
if Twitter's not optimal, what kind of mechanism
link |
00:05:40.200
in this modern age of technology can we design
link |
00:05:44.480
where the exchange of ideas could be both civilized
link |
00:05:48.560
and productive and yet not be too constrained
link |
00:05:52.640
where there's rules of what you can say and can't say,
link |
00:05:55.360
which is, as you say, is not desirable,
link |
00:05:57.880
but yet not have some limits
link |
00:06:00.640
of what can be said or not and so on.
link |
00:06:02.800
Do you have any ideas, thoughts on the possible future?
link |
00:06:05.760
Of course, nobody knows how to do it,
link |
00:06:07.240
but do you have thoughts
link |
00:06:08.760
of what a better Twitter might look like?
link |
00:06:10.880
I think that text based media are intrinsically
link |
00:06:14.640
gonna be very hard to be conducive to rational discussion
link |
00:06:20.000
because if you think about it
link |
00:06:22.000
from an informational perspective,
link |
00:06:24.160
if I just send you a text of less than,
link |
00:06:27.320
what is it now, 240 characters, 280 characters, I think,
link |
00:06:31.760
that's a tiny amount of information
link |
00:06:33.880
compared to, say, you and I talking now
link |
00:06:36.120
where you have access to the words I say,
link |
00:06:38.440
which is the same as in text,
link |
00:06:40.200
but also my tone, also my body language
link |
00:06:43.840
and we're very poorly designed to be able to assess.
link |
00:06:47.840
I have to read all of this context
link |
00:06:49.400
into anything you say. So, say,
link |
00:06:52.920
maybe your partner sends you a text
link |
00:06:54.520
and has a full stop at the end.
link |
00:06:56.600
Are they mad at you?
link |
00:06:58.040
You don't know, you have to infer everything
link |
00:07:00.960
about this person's mental state
link |
00:07:02.440
from whether they put a full stop at the end of a text or not.
link |
00:07:04.720
Well, the flip side of that is, is it truly text
link |
00:07:07.840
that's the problem here
link |
00:07:08.800
because there's a viral aspect to the text
link |
00:07:14.760
where you could just post text nonstop,
link |
00:07:17.280
it's very immediate.
link |
00:07:19.680
The times before Twitter, before the internet,
link |
00:07:23.120
the way you would exchange text is you would write books.
link |
00:07:28.520
And that, while it doesn't get body language,
link |
00:07:30.880
it doesn't get tone, and so on,
link |
00:07:33.720
but it does actually boil down after some time
link |
00:07:36.320
thinking, some editing, and so on, boil down ideas.
link |
00:07:39.440
So is the immediacy and the viral nature
link |
00:07:45.480
which produces the outrage mobs and so on
link |
00:07:47.840
the potential problem?
link |
00:07:49.440
I think that is a big issue.
link |
00:07:51.120
I think there's gonna be the strong selection effect
link |
00:07:53.240
where something that provokes outrage,
link |
00:07:57.760
well, that's high arousal,
link |
00:07:59.000
you're more likely to retweet that
link |
00:08:03.200
whereas kind of sober analysis
link |
00:08:06.000
is not as sexy, not as viral.
link |
00:08:08.800
I do agree that long form content
link |
00:08:11.640
is much better for productive discussion.
link |
00:08:16.440
In terms of the media that are very popular at the moment,
link |
00:08:19.440
I think that podcasting is great
link |
00:08:21.720
where like your podcasts are two hours long,
link |
00:08:25.400
so they're much more in depth than Twitter is.
link |
00:08:28.960
And you are able to convey so much more nuance,
link |
00:08:33.440
so much more caveat because it's an actual conversation.
link |
00:08:36.800
It's more like the sort of communication
link |
00:08:38.880
that we've evolved to do rather than kind of
link |
00:08:41.640
these very small little snippets of ideas
link |
00:08:43.880
that when also combined with bad incentives
link |
00:08:46.920
just clearly aren't designed for helping us get to the truth.
link |
00:08:49.800
It's kind of interesting that it's not just
link |
00:08:51.600
the length of the podcast medium,
link |
00:08:53.760
but it's the fact that it was started by people
link |
00:08:56.960
that don't give a damn about, quote unquote, demand.
link |
00:09:00.680
There's a relaxed sort of style, like that Joe Rogan does.
link |
00:09:08.120
There's a freedom to express ideas
link |
00:09:12.840
in an unconstrained way that's very real.
link |
00:09:15.360
It's kind of funny in that it feels
link |
00:09:18.760
so refreshingly real to us today.
link |
00:09:22.160
And I wonder what the future looks like.
link |
00:09:24.960
It's a little bit sad now that quite a lot
link |
00:09:27.480
of sort of more popular people are getting into podcasting.
link |
00:09:31.680
And they try to sort of create,
link |
00:09:36.040
they try to control it,
link |
00:09:37.400
they try to constrain it in different kinds of ways.
link |
00:09:40.280
People I love like Conan O'Brien and so on,
link |
00:09:42.160
different comedians.
link |
00:09:43.440
And I'd love to see where the real aspects
link |
00:09:48.240
of this podcasting medium persist,
link |
00:09:50.640
maybe in TV, maybe in YouTube,
link |
00:09:52.560
maybe Netflix is pushing those kind of ideas.
link |
00:09:55.640
And it's kind of, it's a really exciting world,
link |
00:09:58.440
that kind of sharing of knowledge.
link |
00:10:00.280
Yeah, I mean, I think it's a double edged sword
link |
00:10:02.200
as it becomes more popular and more profitable where
link |
00:10:05.320
on the one hand you'll get a lot more creativity,
link |
00:10:08.440
people doing more interesting things with the medium,
link |
00:10:10.720
but also perhaps you get this race to the bottom
link |
00:10:12.720
where suddenly maybe it'll be hard to find good content
link |
00:10:16.920
on podcasts because it'll be so overwhelmed
link |
00:10:21.080
by the latest bit of viral outrage.
link |
00:10:24.360
So speaking of that, jumping on effective altruism
link |
00:10:27.280
for a second, so much of that internet content
link |
00:10:33.800
is funded by advertisements.
link |
00:10:36.240
Just in the context of effective altruism,
link |
00:10:39.840
we're talking about the richest companies in the world,
link |
00:10:44.160
they're funded by advertisements essentially,
link |
00:10:45.800
Google, that's their primary source of income.
link |
00:10:48.840
Do you see that as, do you have any criticism
link |
00:10:53.360
of that source of income?
link |
00:10:55.200
Do you see that source of money
link |
00:10:57.520
as a potentially powerful source of money
link |
00:10:59.480
that could be used, well, certainly could be used for good,
link |
00:11:03.200
but is there something bad about that source of money?
link |
00:11:05.920
I think there's significant worries with it
link |
00:11:08.080
where it means that the incentives of the company
link |
00:11:13.200
might be quite misaligned with,
link |
00:11:15.400
are making people's lives better,
link |
00:11:20.520
where again, perhaps the incentives
link |
00:11:25.200
are towards increasing drama and debate
link |
00:11:29.000
on your social news, social media feed
link |
00:11:32.280
in order that more people are going to be engaged,
link |
00:11:36.320
perhaps kind of compulsively involved with the platform,
link |
00:11:41.320
whereas there are other business models
link |
00:11:44.840
like having an opt in subscription service,
link |
00:11:48.360
where perhaps they have other issues,
link |
00:11:50.760
but there's much more of an incentive
link |
00:11:54.800
to provide a product that its users are just
link |
00:11:58.440
really wanting, because now I'm paying for this product,
link |
00:12:02.160
I'm paying for this thing that I wanna buy,
link |
00:12:04.440
rather than I'm trying to use this thing
link |
00:12:08.440
and it's gonna get a profit mechanism
link |
00:12:11.560
that is somewhat orthogonal to me,
link |
00:12:13.560
actually just wanting to use the product.
link |
00:12:19.000
And so, I mean, in some cases,
link |
00:12:21.840
it'll work better than others.
link |
00:12:23.000
I can imagine, I can in theory imagine Facebook
link |
00:12:27.040
having a subscription service,
link |
00:12:28.800
but I think it's unlikely to happen anytime soon.
link |
00:12:32.280
Well, it's interesting, it's weird,
link |
00:12:34.240
now that you bring it up that it's unlikely.
link |
00:12:36.240
This example, I pay, I think 10 bucks a month
link |
00:12:38.680
for YouTube Red, and that's,
link |
00:12:43.280
and I don't think I get much for that,
link |
00:12:45.320
except just, so no ads,
link |
00:12:50.200
but in general, it's just a slightly better experience.
link |
00:12:52.880
And I would gladly, now I'm not wealthy in fact,
link |
00:12:56.480
I'm operating very close to zero dollars,
link |
00:12:59.160
but I would pay 10 bucks a month to Facebook
link |
00:13:01.840
and 10 bucks a month to Twitter
link |
00:13:03.920
for some kind of more control
link |
00:13:07.480
in terms of advertisements and so on.
link |
00:13:09.120
But the other aspect of that is data, personal data.
link |
00:13:13.720
People are really sensitive about this.
link |
00:13:16.240
And I as one who hopes to one day create a company
link |
00:13:21.600
that may use people's data to do good for the world,
link |
00:13:27.520
wonder about this,
link |
00:13:28.960
one, the psychology of why people are so paranoid.
link |
00:13:32.400
Well, I understand why, but they seem to be more paranoid
link |
00:13:35.240
than is justified at times.
link |
00:13:37.720
And the other is how do you do it right?
link |
00:13:39.480
So it seems that Facebook is,
link |
00:13:43.520
it seems that Facebook is doing it wrong.
link |
00:13:47.400
That's certainly the popular narrative.
link |
00:13:49.560
It's unclear to me actually how wrong,
link |
00:13:53.040
like I tend to give them more benefit of the doubt
link |
00:13:55.440
because they're, you know,
link |
00:13:57.360
it's a really hard thing to do right.
link |
00:14:00.040
And people don't necessarily realize it,
link |
00:14:01.400
but how do we respect in your view people's privacy?
link |
00:14:06.000
Yeah.
link |
00:14:06.840
I mean, in the case of how worried are people
link |
00:14:10.800
about using their data?
link |
00:14:12.400
I mean, there's a lot of public debate
link |
00:14:15.280
and criticism about it.
link |
00:14:18.680
When we look at people's revealed preferences,
link |
00:14:21.720
you know, people's continuing massive use
link |
00:14:24.320
of these sorts of services,
link |
00:14:27.680
it's not clear to me how much people really do care.
link |
00:14:30.560
Perhaps they care a bit,
link |
00:14:31.520
but they're happy to in effect kind of sell their data
link |
00:14:35.560
in order to be able to use a certain service.
link |
00:14:37.600
That's a great term, revealed preferences.
link |
00:14:39.360
So these aren't preferences,
link |
00:14:40.560
you self report in a survey,
link |
00:14:42.600
this is like your actions speak.
link |
00:14:44.600
Yeah, exactly.
link |
00:14:45.440
So you might say, oh yeah, I hate the idea
link |
00:14:48.000
of Facebook having my data,
link |
00:14:51.040
but then when it comes to it,
link |
00:14:52.800
you actually are willing to give that data
link |
00:14:55.160
in exchange for being able to use the service.
link |
00:14:59.000
And if that's the case,
link |
00:15:01.640
then I think unless we have some explanation
link |
00:15:05.400
about why there's some negative externality from that
link |
00:15:11.120
or why there's some coordination failure,
link |
00:15:15.920
or if there's something that consumers
link |
00:15:18.080
are just really misled about
link |
00:15:19.760
where they don't realize why giving away data
link |
00:15:22.680
like this is a really bad thing to do,
link |
00:15:25.360
then ultimately I kind of want to respect
link |
00:15:31.520
people's preferences,
link |
00:15:32.360
they can give away their data if they want.
link |
00:15:35.520
I think there's a big difference
link |
00:15:36.520
between companies use of data and governments having data
link |
00:15:41.960
where looking at the record of history,
link |
00:15:45.840
governments knowing a lot about their people
link |
00:15:50.400
can be very bad if the government chooses to do
link |
00:15:54.160
bad things with it.
link |
00:15:55.000
And that's more worrying, I think.
link |
00:15:57.120
So let's jump into it a little bit.
link |
00:15:59.720
Most people know, but actually I two years ago
link |
00:16:03.920
had no idea what effective altruism was
link |
00:16:07.040
until I saw there was a cool looking event
link |
00:16:09.120
in an MIT group here.
link |
00:16:11.280
They, I think it's called the effective altruism club
link |
00:16:15.960
or a group.
link |
00:16:17.960
I was like, what the heck is that?
link |
00:16:19.880
Yeah.
link |
00:16:21.440
And one of my friends said,
link |
00:16:23.240
I mean, he said that they're just
link |
00:16:27.200
a bunch of eccentric characters.
link |
00:16:30.000
So I was like, hell yes, I'm in.
link |
00:16:31.600
So I went to one of their events
link |
00:16:32.800
and looked up what's it about.
link |
00:16:34.360
This is quite a fascinating philosophical
link |
00:16:37.040
and just a movement of ideas.
link |
00:16:38.880
So can you tell me what is effective altruism?
link |
00:16:42.600
Great.
link |
00:16:43.440
So the core of effective altruism
link |
00:16:44.800
is about trying to answer this question,
link |
00:16:46.480
which is how can I do as much good as possible
link |
00:16:49.360
with my scarce resources, my time and with my money?
link |
00:16:53.200
And then once we have our best guess answers to that,
link |
00:16:57.120
trying to take those ideas and put that into practice
link |
00:17:00.120
and do those things that we believe will do the most good.
link |
00:17:03.000
And we're now a community of people,
link |
00:17:06.040
many thousands of us around the world
link |
00:17:08.040
who really are trying to answer that question as best we can
link |
00:17:11.480
and then use our time and money to make the world better.
link |
00:17:15.200
So what's the difference between
link |
00:17:17.200
sort of classical general idea of altruism
link |
00:17:22.200
and effective altruism?
link |
00:17:24.600
So normally when people decide to do good,
link |
00:17:28.240
they often just aren't so reflective about those attempts.
link |
00:17:34.040
So someone might approach you on the street
link |
00:17:36.200
asking you to give to charity.
link |
00:17:38.480
And if you're feeling altruistic,
link |
00:17:42.080
you'll give to the person on the street.
link |
00:17:44.360
Or if you think, oh, I wanna do some good in my life,
link |
00:17:47.960
you might volunteer at a local place
link |
00:17:49.920
or perhaps you'll decide to pursue a career
link |
00:17:52.800
where you're working in a field
link |
00:17:56.400
that's kind of more obviously beneficial
link |
00:17:58.120
like being a doctor or a nurse or a healthcare professional.
link |
00:18:03.880
But it's very rare that people apply the same level
link |
00:18:07.840
of rigor and analytical thinking
link |
00:18:11.760
that they do to lots of other areas we think about.
link |
00:18:14.360
So take the case of someone approaching you on the street.
link |
00:18:16.400
Imagine if that person instead was saying,
link |
00:18:18.680
hey, I've got this amazing company,
link |
00:18:20.160
do you want to invest in it?
link |
00:18:22.320
It would be insane for, no one would ever think,
link |
00:18:24.840
oh, of course, I'll just invest in this company,
link |
00:18:26.360
like you'd think it was a scam.
link |
00:18:29.160
But somehow we don't have that same level of rigor
link |
00:18:31.280
when it comes to doing good,
link |
00:18:32.320
even though the stakes are more important
link |
00:18:34.560
when it comes to trying to help others
link |
00:18:36.000
than trying to make money for ourselves.
link |
00:18:38.800
First of all, so there is a psychology
link |
00:18:40.600
at the individual level of doing good just feels good.
link |
00:18:44.800
And so in some sense, on that pure psychological part,
link |
00:18:51.720
it doesn't matter.
link |
00:18:52.960
In fact, you don't want to know if it does good or not
link |
00:18:56.480
because most of the time it won't.
link |
00:19:01.640
So like in a certain sense,
link |
00:19:04.920
it's understandable why altruism
link |
00:19:06.960
without the effective part is so appealing
link |
00:19:09.920
to a certain population.
link |
00:19:11.400
By the way, let's zoom out for a second.
link |
00:19:15.440
Do you think most people, two questions,
link |
00:19:18.840
do you think most people are good?
link |
00:19:21.040
Question number two is,
link |
00:19:22.360
do you think most people want to do good?
link |
00:19:25.000
So are most people good?
link |
00:19:26.720
I think it's just super dependent
link |
00:19:28.080
on the circumstances that someone is in.
link |
00:19:31.760
I think that the actions people take
link |
00:19:34.880
and their moral worth is just much more dependent
link |
00:19:37.760
on circumstance than it is on someone's
link |
00:19:40.800
intrinsic character.
link |
00:19:41.960
So is it evil within all of us?
link |
00:19:43.840
It seems like the better angels of our nature,
link |
00:19:47.920
there's a tendency of us as a society
link |
00:19:50.400
to tend towards good, less war,
link |
00:19:53.280
I mean with all these metrics.
link |
00:19:55.600
What is that us becoming who we want to be?
link |
00:20:00.080
Or is that some kind of societal force?
link |
00:20:03.240
What's the nature versus nurture thing here?
link |
00:20:05.240
Yeah, so in that case, I just think, yeah,
link |
00:20:07.280
so violence has massively declined over time.
link |
00:20:10.560
I think that's a slow process of cultural evolution,
link |
00:20:14.160
institutional evolution,
link |
00:20:15.440
such that now the incentives for you and I
link |
00:20:18.760
to be violent are very, very small indeed.
link |
00:20:21.720
In contrast, when we were hunter gatherers,
link |
00:20:23.680
the incentives were quite large.
link |
00:20:25.840
If there was someone who was potentially disturbing
link |
00:20:31.960
the social order and hunter gatherer setting,
link |
00:20:35.320
there was a very strong incentive to kill that person
link |
00:20:37.840
and people did.
link |
00:20:38.680
And it's estimated that 10% of deaths
link |
00:20:41.440
among hunter gatherers were murders.
link |
00:20:44.840
After hunter gatherers, when you have actual societies
link |
00:20:48.720
is when violence can probably go up
link |
00:20:51.360
because there's more incentive to do mass violence, right?
link |
00:20:54.320
To take over, conquer other people's lands
link |
00:20:58.840
and murder everybody in place and so on.
link |
00:21:01.240
Yeah, I mean, I think total death rate
link |
00:21:03.840
from human causes does go down,
link |
00:21:07.000
but you're like that if you're in a hunter gatherer situation,
link |
00:21:10.480
the kind of group that you're part of is very small,
link |
00:21:15.040
then you can't have massive wars
link |
00:21:17.360
because those massive communities just don't exist.
link |
00:21:19.640
But anyway, the second question,
link |
00:21:21.360
do you think most people want to do good?
link |
00:21:23.400
Yeah, and then I think that is true for most people.
link |
00:21:26.160
I think you see that with the fact that,
link |
00:21:29.960
most people donate, a large proportion of people volunteer.
link |
00:21:33.840
If you give people opportunities
link |
00:21:35.560
to easily help other people, they will take it.
link |
00:21:38.760
But at the same time, we're a product of our circumstances,
link |
00:21:43.800
and if it were more socially rewarded to be doing more good,
link |
00:21:47.440
if it were more socially rewarded to do good effectively,
link |
00:21:49.640
rather than not effectively,
link |
00:21:51.360
then we would see that behavior a lot more.
link |
00:21:55.120
So why should we do good?
link |
00:21:58.760
Yeah, my answer to this is,
link |
00:22:01.440
there's no kind of deeper level of explanation.
link |
00:22:04.120
So my answer to kind of why should you do good is,
link |
00:22:08.560
well, there is someone whose life is on the line,
link |
00:22:11.320
for example, whose life you can save
link |
00:22:15.000
via donating just actually a few thousand dollars
link |
00:22:17.960
to an effective nonprofit,
link |
00:22:20.160
like the Against Malaria Foundation.
link |
00:22:21.960
That is a sufficient reason to do good.
link |
00:22:24.120
And then if you ask, well, why ought I to do that?
link |
00:22:27.240
I'm like, I just show you the same facts again.
link |
00:22:29.920
It's that fact that is the reason to do good.
link |
00:22:32.240
There's nothing more fundamental than that.
link |
00:22:34.840
I'd like to sort of make more concrete
link |
00:22:38.360
the thing we're trying to make better.
link |
00:22:41.200
So you just mentioned malaria.
link |
00:22:43.240
So there's a huge amount of suffering in the world.
link |
00:22:46.840
Are we trying to remove,
link |
00:22:50.200
so ultimately the goal, not ultimately,
link |
00:22:53.640
but the first step is to remove the worst of the suffering.
link |
00:22:59.200
So there's some kind of threshold of suffering
link |
00:23:01.760
that we want to make sure does not exist in the world.
link |
00:23:06.640
Or do we really naturally want to take a much further step
link |
00:23:11.280
and look at things like income inequality.
link |
00:23:14.840
So not just getting everybody above a certain threshold,
link |
00:23:17.200
but making sure that there's some,
link |
00:23:21.680
that broadly speaking,
link |
00:23:23.800
there's less injustice in the world, unfairness.
link |
00:23:27.560
In some definition, of course,
link |
00:23:29.320
very difficult to define fairness.
link |
00:23:31.360
Yeah.
link |
00:23:32.200
So the metric I use is how many people do we affect
link |
00:23:35.680
and by how much do we affect them?
link |
00:23:37.480
And so that can, often that means eliminating suffering,
link |
00:23:43.360
but it doesn't have to,
link |
00:23:44.360
could be helping promote a flourishing life instead.
link |
00:23:47.960
And so if I was comparing reducing income inequality
link |
00:23:53.160
or getting people from the very pits of suffering
link |
00:23:58.480
to a higher level,
link |
00:23:59.880
the question I would ask is just a quantitative one
link |
00:24:03.240
of just if I do this first thing or the second thing,
link |
00:24:06.320
how many people am I going to benefit
link |
00:24:08.200
and by how much am I going to benefit?
link |
00:24:10.120
Am I going to move that one person from kind of
link |
00:24:13.480
0% well being to 10% well being?
link |
00:24:17.320
Perhaps that's just not as good as moving 100 people
link |
00:24:20.360
from 10% well being to 50% well being.
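To make that comparison concrete, here is a minimal sketch in Python of the "how many people, and by how much" metric described above. The linear scoring rule, the helper name, and the numbers are illustrative assumptions, not a formula stated in the conversation.

```python
# Minimal sketch: score an option as "people affected x per-person change in
# wellbeing". The scoring rule and the numbers are illustrative assumptions only.

def impact(people: int, wellbeing_before: float, wellbeing_after: float) -> float:
    """Total benefit = number of people * per-person change in wellbeing (0.0 to 1.0)."""
    return people * (wellbeing_after - wellbeing_before)

# One person moved from 0% to 10% wellbeing...
option_a = impact(people=1, wellbeing_before=0.0, wellbeing_after=0.1)

# ...versus 100 people moved from 10% to 50% wellbeing.
option_b = impact(people=100, wellbeing_before=0.1, wellbeing_after=0.5)

print(round(option_a, 2), round(option_b, 2))  # 0.1 vs 40.0: the second does far more total good
```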
link |
00:24:22.880
And the idea is the diminishing returns
link |
00:24:25.960
is the idea of when you're in terrible poverty,
link |
00:24:33.000
then the $1 that you give goes much further
link |
00:24:38.400
than if you were in the middle class
link |
00:24:40.160
in the United States, for example.
link |
00:24:41.840
Absolutely.
link |
00:24:42.680
And this fact is really striking.
link |
00:24:44.640
So if you take even just quite a conservative estimate
link |
00:24:49.640
of how we are able to turn money into well being,
link |
00:24:55.720
the economists put it as like a log curve.
link |
00:24:59.160
That, or steeper,
link |
00:25:00.880
but that means that any proportional increase in your income
link |
00:25:05.920
has the same impact on your well being.
link |
00:25:08.200
And so someone moving from $1,000 a year to $2,000 a year
link |
00:25:12.200
has the same impact as someone moving from $100,000 a year
link |
00:25:17.200
to $200,000 a year.
link |
00:25:20.680
And then when you combine that with the fact
link |
00:25:22.320
that we, middle class members of rich countries,
link |
00:25:27.240
are 100 times richer, in financial terms,
link |
00:25:29.760
than the global poor,
link |
00:25:31.160
that means we can do 100 times as much to benefit
link |
00:25:33.080
the poorest people in the world
link |
00:25:34.520
as we can to benefit people of our income level.
link |
00:25:37.600
And that's this astonishing fact.
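The log relationship mentioned here is easy to check numerically. Below is a minimal sketch under the simplifying assumption stated above, that wellbeing scales with the logarithm of income: doubling income buys the same wellbeing gain at any starting point, and an extra dollar goes roughly 100 times further for someone on one hundredth of your income.

```python
import math

def wellbeing(income: float) -> float:
    """Illustrative assumption: wellbeing is proportional to log(income)."""
    return math.log(income)

# Doubling income gives the same wellbeing gain regardless of the starting point:
gain_poor = wellbeing(2_000) - wellbeing(1_000)      # $1,000 -> $2,000 per year
gain_rich = wellbeing(200_000) - wellbeing(100_000)  # $100,000 -> $200,000 per year
print(round(gain_poor, 3), round(gain_rich, 3))      # both ~0.693, i.e. log(2)

# Marginal wellbeing per extra dollar is proportional to 1/income, so the benefit
# ratio between someone on $1,000/year and someone on $100,000/year is just the
# income ratio, which is the "100 times" figure in the conversation:
print(100_000 / 1_000)  # 100.0
```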
link |
00:25:39.400
Yeah, it's quite incredible.
link |
00:25:41.120
A lot of these facts and ideas are just
link |
00:25:43.760
difficult to think about
link |
00:25:47.640
because there's an overwhelming amount of suffering
link |
00:25:53.800
in the world and even acknowledging it is difficult.
link |
00:26:00.680
I'm not exactly sure why that is.
link |
00:26:02.320
I mean, it's difficult
link |
00:26:05.320
because you have to bring to mind,
link |
00:26:07.680
you know, it's an unpleasant experience
link |
00:26:09.640
thinking about other people suffering.
link |
00:26:11.720
It's unpleasant to be empathizing with it, firstly.
link |
00:26:14.760
And then secondly, thinking about it
link |
00:26:16.360
means that maybe we'd have to change our lifestyles.
link |
00:26:19.040
And if you're very attached to the income that you've got,
link |
00:26:22.920
perhaps you don't want to be confronting ideas
link |
00:26:25.920
or arguments that might cause you
link |
00:26:28.560
to use some of that money to help others.
link |
00:26:31.480
So it's quite understandable in the psychological terms,
link |
00:26:34.720
even if it's not the right thing that we ought to be doing.
link |
00:26:38.200
So how can we do better?
link |
00:26:40.160
How can we be more effective?
link |
00:26:42.480
How does data help?
link |
00:26:44.760
In general, how can we do better?
link |
00:26:47.560
It's definitely hard.
link |
00:26:48.840
And we have spent the last 10 years engaged
link |
00:26:52.240
in kind of some deep research projects
link |
00:26:54.800
to try and answer kind of two questions.
link |
00:26:59.480
One is of all the many problems the world is facing,
link |
00:27:02.560
what are the problems we ought to be focused on?
link |
00:27:04.720
And then within those problems that we judge
link |
00:27:06.840
to be kind of the most pressing
link |
00:27:08.600
where we use this idea of focusing on problems
link |
00:27:11.280
that are the biggest in scale, that are the most tractable,
link |
00:27:15.640
where we can kind of make the most progress
link |
00:27:20.240
on that problem, and that are the most neglected.
link |
00:27:23.800
Within them, what are the things that
link |
00:27:26.480
have the kind of best evidence, or we
link |
00:27:29.120
have the best guess that will do the most good?
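For context, this scale, tractability and neglectedness framing is often turned into a rough scoring exercise (80,000 Hours describes a version of it). The sketch below is only a hypothetical illustration of that kind of comparison, with invented problem names and scores; it is not an official tool from any of the organizations mentioned.

```python
# Hedged sketch of comparing problems by scale, tractability and neglectedness.
# The problems and scores below are invented placeholders for illustration.

from dataclasses import dataclass

@dataclass
class Problem:
    name: str
    scale: float          # how large is the problem if left unaddressed?
    tractability: float   # how much progress does extra effort buy?
    neglectedness: float  # how few resources already go toward it?

    def priority(self) -> float:
        # Multiplicative form: a problem only ranks highly if it does
        # reasonably well on all three factors at once.
        return self.scale * self.tractability * self.neglectedness

problems = [
    Problem("hypothetical problem A", scale=9, tractability=3, neglectedness=2),
    Problem("hypothetical problem B", scale=5, tractability=6, neglectedness=7),
]

for p in sorted(problems, key=lambda p: p.priority(), reverse=True):
    print(f"{p.name}: priority score {p.priority():.0f}")
```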
link |
00:27:32.040
And so we have a bunch of organizations.
link |
00:27:34.480
So GiveWell, for example, is focused
link |
00:27:37.720
on global health and development,
link |
00:27:39.320
and has a list of seven top recommended charities.
link |
00:27:42.360
So the idea in general, and sorry to interrupt,
link |
00:27:44.640
is so we'll talk about sort of poverty and animal welfare
link |
00:27:47.680
and existential risk.
link |
00:27:48.640
There's all fascinating topics.
link |
00:27:49.920
But in general, the idea is there should be a group.
link |
00:27:56.320
Sorry, there's a lot of groups that
link |
00:27:59.160
seek to convert money into good.
link |
00:28:04.240
And then you also, on top of that, want to have an accounting
link |
00:28:11.640
of how well they actually perform that conversion,
link |
00:28:16.000
how well they did in converting money to good.
link |
00:28:18.440
So ranking of these different groups,
link |
00:28:20.480
ranking these charities.
link |
00:28:24.080
So does that apply across basically all aspects
link |
00:28:28.520
of effective altruism?
link |
00:28:29.680
So there should be a group of people,
link |
00:28:31.840
and they should report on certain metrics
link |
00:28:34.560
of how well they've done.
link |
00:28:35.760
And you should only give your money to groups
link |
00:28:38.440
that do a good job.
link |
00:28:40.000
That's the core idea.
link |
00:28:42.360
I'd make two comments.
link |
00:28:43.600
One is just it's not just about money.
link |
00:28:45.400
So we're also trying to encourage people
link |
00:28:48.400
to work in areas where they'll have the biggest impact.
link |
00:28:51.400
Absolutely.
link |
00:28:52.040
And in some areas, they're really people heavy, but money poor.
link |
00:28:56.480
Other areas are kind of money rich and people poor.
link |
00:28:59.800
And so whether it's better to focus time or money
link |
00:29:02.960
depends on the cause area.
link |
00:29:05.360
And then the second is that you mentioned metrics.
link |
00:29:08.400
And while that's the ideal, and in some areas,
link |
00:29:11.440
we are able to get somewhat quantitative information
link |
00:29:15.200
about how much impact an area is having,
link |
00:29:19.040
that's not always true for some of the issues,
link |
00:29:21.800
like you mentioned, existential risks.
link |
00:29:23.920
Well, we're not able to measure in any sort of precise way
link |
00:29:30.560
like how much progress we're making.
link |
00:29:32.520
And so you have to instead fall back
link |
00:29:35.120
on just a regular argument and evaluation,
link |
00:29:38.640
even in the absence of data.
link |
00:29:41.160
So let's first sort of linger on your own story for a second.
link |
00:29:47.520
How do you yourself practice effective altruism
link |
00:29:50.360
in your own life?
link |
00:29:51.200
Because I think that's a really interesting place to start.
link |
00:29:54.720
So I've tried to build effective altruism
link |
00:29:56.960
into at least many components of my life.
link |
00:30:00.240
So on the donation side, my plan is
link |
00:30:03.640
to give away most of my income over the course of my life.
link |
00:30:07.560
I've set a bar I feel happy with,
link |
00:30:09.440
and I just donate above that bar.
link |
00:30:12.480
So at the moment, I donate about 20% of my income.
link |
00:30:17.960
Then on the career side, I've also
link |
00:30:20.280
shifted kind of what I do, where I was initially
link |
00:30:24.320
planning to work on very esoteric topics
link |
00:30:28.520
in the philosophy of logic, philosophy of language,
link |
00:30:30.880
things that are intellectually extremely interesting,
link |
00:30:33.040
but the path by which they really
link |
00:30:35.280
make a difference to the world is, let's just say,
link |
00:30:38.160
it's very unclear at best.
link |
00:30:40.600
And so I switched instead to researching ethics,
link |
00:30:43.400
to actually just working on this question of how we can do
link |
00:30:46.480
as much good as possible.
link |
00:30:48.520
And then I've also spent a very large chunk of my life
link |
00:30:51.680
over the last 10 years creating a number of nonprofits
link |
00:30:55.320
who, again, in different ways, are tackling
link |
00:30:58.040
this question of how we can do the most good
link |
00:31:00.160
and helping them to grow over time too.
link |
00:31:02.120
Yeah, we'll mention a few of them with the career selection,
link |
00:31:05.440
80,000 hours.
link |
00:31:07.640
80,000 hours is a really interesting group.
link |
00:31:11.200
So maybe also just a quick pause on the origins
link |
00:31:16.800
of effective altruism, because, can you paint a picture of
link |
00:31:19.480
who the key figures are, including yourself,
link |
00:31:23.120
in the effective altruism movement today?
link |
00:31:26.920
Yeah, there are two main strands that
link |
00:31:30.360
kind of came together to form the effective altruism movement.
link |
00:31:34.920
So one was two philosophers, myself and Toby Ord at Oxford.
link |
00:31:40.480
And we had been very influenced by the work of Peter Singer,
link |
00:31:44.000
an Australian moral philosopher, who
link |
00:31:45.920
had argued for many decades that because one can do so much good
link |
00:31:50.240
at such a little cost to oneself,
link |
00:31:52.960
we have an obligation to give away most of our income,
link |
00:31:55.640
to benefit those who are actually in poverty,
link |
00:31:58.280
just in the same way that we have an obligation
link |
00:32:00.960
to run in and save a child from drowning in a shallow pond
link |
00:32:04.800
if it were just to ruin your suit that
link |
00:32:06.560
cost a few thousand dollars.
link |
00:32:10.400
And we set up Giving What We Can in 2009,
link |
00:32:13.200
which is encouraging people to give at least 10% of their income
link |
00:32:16.040
to the most effective charities.
link |
00:32:18.200
And the second main strand was the formation of Give Well,
link |
00:32:21.400
which was originally based in New York and started in about 2007.
link |
00:32:26.360
And that was set up by Holden Karnofsky and Elie Hassenfeld,
link |
00:32:30.280
who were two hedge fund dudes who were making good money
link |
00:32:36.280
and thinking, well, where should I donate?
link |
00:32:38.440
And in the same way as if they wanted
link |
00:32:40.640
to buy a product for themselves, they
link |
00:32:42.200
would look at Amazon reviews.
link |
00:32:44.160
They were like, well, what are the best charities?
link |
00:32:46.600
Found there just weren't really good answers to that question,
link |
00:32:49.280
certainly not that they were satisfied with.
link |
00:32:51.240
And so they formed Give Well in order
link |
00:32:52.800
to try and work out what are those charities where they can
link |
00:32:57.520
have the biggest impact.
link |
00:32:59.040
And then from there and some other influences,
link |
00:33:02.280
the kind of community grew and spread.
link |
00:33:05.200
Can we explore the philosophical and political space
link |
00:33:08.640
that effective altruism occupies a little bit?
link |
00:33:11.440
So from the little and distant in my own lifetime
link |
00:33:16.600
that I've read of Ayn Rand's work,
link |
00:33:18.640
Ayn Rand's philosophy of Objectivism espouses...
link |
00:33:22.080
And it's interesting to put her philosophy in contrast
link |
00:33:26.760
with effective altruism.
link |
00:33:28.040
So it espouses selfishness as the best thing you can do.
link |
00:33:32.760
And it's not actually against altruism.
link |
00:33:37.600
It's just you have that choice, but you
link |
00:33:40.480
should be selfish in it, or not.
link |
00:33:43.680
Maybe you can disagree here.
link |
00:33:44.760
But so it can be viewed as the complete opposite
link |
00:33:48.280
of effective altruism, or it can be viewed as similar
link |
00:33:51.760
because the word effective is really interesting.
link |
00:33:55.520
Because if you want to do good, then you should be damn good
link |
00:34:00.600
at doing good.
link |
00:34:03.520
I think that would fit within the morality that's
link |
00:34:06.960
defined by Objectivism.
link |
00:34:08.640
So do you see a connection between these two philosophies
link |
00:34:11.120
and other, perhaps, other in this complicated space
link |
00:34:16.400
of beliefs that effective altruism is positioned as opposing
link |
00:34:22.840
or aligned with?
link |
00:34:24.800
I would definitely say that Objectivism Ayn Rand's
link |
00:34:27.160
philosophy is a philosophy that's quite fundamentally
link |
00:34:31.080
opposed to effective altruism in so far as Ayn Rand's philosophy
link |
00:34:37.040
is about championing egoism and saying
link |
00:34:39.200
that I'm never quite sure whether the philosophy is
link |
00:34:41.600
meant to say that just you ought to do whatever will best
link |
00:34:46.360
benefit yourself as ethical egoism,
link |
00:34:48.680
no matter what the consequences are.
link |
00:34:50.760
Or second, if there's this alternative view, which is,
link |
00:34:54.960
well, you ought to try and benefit yourself
link |
00:34:57.560
because that's actually the best way of benefiting society.
link |
00:35:02.960
Certainly, Atlas Shrugged is presenting her philosophy
link |
00:35:07.560
as a way that's actually going to bring
link |
00:35:09.800
about a flourishing society.
link |
00:35:12.080
And if it's the former, then well, effective altruism
link |
00:35:15.200
is all about promoting the idea of altruism.
link |
00:35:17.120
So it's saying, in fact, we ought to really be trying to help
link |
00:35:21.000
others as much as possible so it's opposed there.
link |
00:35:23.920
And then on the second side, I would just dispute
link |
00:35:27.800
the empirical premise.
link |
00:35:28.720
It would seem, given the major problems in the world today,
link |
00:35:31.480
it would seem like this remarkable coincidence,
link |
00:35:34.160
quite suspicious, one might say, if benefiting myself
link |
00:35:37.440
was actually the best way to bring about a better world.
link |
00:35:41.040
So in that point, and I think that connects also
link |
00:35:44.120
with career selection that we'll talk about,
link |
00:35:48.080
but let's consider not objectives, but capitalism.
link |
00:35:53.080
So, and the idea that you focusing on the thing
link |
00:35:56.840
that you are damn good at, whatever that is,
link |
00:36:02.400
may be the best thing for the world.
link |
00:36:05.720
Sort of part of it is also mindset, right?
link |
00:36:08.600
Sort of like the thing I love is robots.
link |
00:36:13.080
So maybe I should focus on building robots
link |
00:36:17.400
and never even think about the idea
link |
00:36:19.800
of effective altruism, which is kind
link |
00:36:23.160
of the capitalist notion.
link |
00:36:25.000
Is there any value in that idea and just finding
link |
00:36:27.400
the thing you're good at
link |
00:36:28.520
and maximizing your productivity in this world
link |
00:36:31.520
and thereby sort of lifting all boats
link |
00:36:34.960
and benefiting society as a result?
link |
00:36:38.640
Yeah, I think there's two things I'd wanna say on that.
link |
00:36:41.000
So one is what your comparative advantages,
link |
00:36:43.560
what your strengths are when it comes to career.
link |
00:36:45.400
That's obviously super important
link |
00:36:46.840
because there's lots of career paths I would be terrible at.
link |
00:36:50.720
If I thought being an artist was the best thing one could do,
link |
00:36:53.840
well, I'd be doomed, just really quite astonishingly bad.
link |
00:36:59.320
And so I do think, at least within the realm
link |
00:37:01.680
of things that could plausibly be very high impact,
link |
00:37:05.760
choose the thing that you think you're gonna be able
link |
00:37:08.360
to really be passionate at and excel at
link |
00:37:12.400
kind of over the long term.
link |
00:37:15.120
Then on this question of like, should one just do that
link |
00:37:17.960
in an unrestricted way and not even think
link |
00:37:19.680
about what the most important problems are?
link |
00:37:22.280
I do think that in a kind of perfectly designed society,
link |
00:37:26.600
that might well be the case.
link |
00:37:27.840
That would be a society where we've corrected
link |
00:37:29.960
all market failures, we've internalized all externalities
link |
00:37:34.760
and then we've managed to set up incentives
link |
00:37:37.000
such that people just pursuing their own strengths
link |
00:37:41.720
is the best way of doing good,
link |
00:37:44.120
but we're very far from that society.
link |
00:37:46.200
So if one did that, then it'd be very unlikely
link |
00:37:51.200
that you would focus on improving the lives
link |
00:37:55.000
of non human animals that aren't participating in markets
link |
00:37:57.880
or ensuring the long run future goes well,
link |
00:38:00.000
where future people certainly aren't participating
link |
00:38:02.480
in markets or benefiting the global poor
link |
00:38:05.360
who do participate but have so much less kind of power
link |
00:38:09.680
from a starting perspective that their views
link |
00:38:12.120
aren't accurately kind of represented by market forces too.
link |
00:38:17.120
Got it, so yeah, and sort of pure definition capitalism
link |
00:38:21.120
just may very well ignore the people
link |
00:38:24.120
that are suffering the most, the wide swath of them.
link |
00:38:27.120
So if you could allow me this line of thinking here,
link |
00:38:33.720
so I've listened to a lot of your conversations online.
link |
00:38:37.120
I find, if I can compliment you,
link |
00:38:42.360
they're very interesting conversations.
link |
00:38:44.360
Your conversation on Rogan, on Joe Rogan
link |
00:38:48.560
was really interesting with Sam Harris and so on, whatever.
link |
00:38:55.640
There's a lot of stuff that's really good out there.
link |
00:38:57.920
And yet when I look at the internet,
link |
00:39:00.240
I look at YouTube, which has certain mobs,
link |
00:39:04.240
certain swaths of right leaning folks
link |
00:39:08.280
whom I dearly love, I love all people.
link |
00:39:13.280
All, especially people with ideas.
link |
00:39:19.000
They seem to not like you very much.
link |
00:39:22.680
So I don't understand why exactly.
link |
00:39:26.240
So my own sort of hypothesis is there is a right left divide
link |
00:39:31.240
that absurdly so caricatured in politics,
link |
00:39:36.120
at least in the United States.
link |
00:39:38.320
And maybe you're somehow pigeonholed into one of those sides
link |
00:39:42.720
and maybe that's what it is.
link |
00:39:46.600
Maybe your message is somehow politicized.
link |
00:39:49.560
Yeah, I mean.
link |
00:39:51.320
How do you make sense of that?
link |
00:39:52.240
Because you're extremely interesting.
link |
00:39:54.400
Like you got the comments I see on Joe Rogan,
link |
00:39:58.640
there's a bunch of negative stuff.
link |
00:40:00.360
And yet if you listen to it, the conversation is fascinating.
link |
00:40:03.200
I'm not speaking, I'm not some kind of lefty extremist,
link |
00:40:08.320
but just this fascinating conversation.
link |
00:40:10.120
So why are you getting some small amount of hate?
link |
00:40:13.760
So I'm actually pretty glad that effective altruism
link |
00:40:17.560
has managed to stay relatively unpoliticized
link |
00:40:22.160
because I think the core message
link |
00:40:24.000
to just use some of your time and money
link |
00:40:25.880
to do as much good as possible
link |
00:40:27.160
to fight some of the problems in the world
link |
00:40:29.000
can be appealing across the political spectrum.
link |
00:40:31.760
And we do have a diversity of political viewpoints
link |
00:40:35.000
among people who have engaged in effective altruism.
link |
00:40:37.720
We do, however, do get some criticism
link |
00:40:40.640
from the left and the right.
link |
00:40:42.720
Oh, interesting.
link |
00:40:43.560
What's the criticism?
link |
00:40:44.400
Both will be interesting to hear.
link |
00:40:45.840
Yeah, so criticism from the left
link |
00:40:47.800
is that we're not focused enough
link |
00:40:49.280
on dismantling the capitalist system
link |
00:40:52.520
that they see as the root of most of the problems
link |
00:40:55.600
that we're talking about.
link |
00:40:58.480
And there I kind of disagree, partly on the premise,
link |
00:41:03.480
where I don't think relevant alternative systems
link |
00:41:08.480
would be much better for, say, the animals or the global poor
link |
00:41:11.560
or future generations.
link |
00:41:14.200
And then also the tactics where I think
link |
00:41:16.840
there are particular ways we can change society
link |
00:41:19.600
that would massively benefit,
link |
00:41:21.400
be massively beneficial on those things
link |
00:41:23.720
that don't go via dismantling the entire system
link |
00:41:27.680
which is perhaps a million times harder to do.
link |
00:41:30.920
Then criticism on the right,
link |
00:41:32.400
there's definitely, like in response to
link |
00:41:34.400
the Joe Rogan podcast,
link |
00:41:36.400
there definitely were a number of Ayn Rand fans
link |
00:41:38.400
who weren't keen on the idea of promoting altruism.
link |
00:41:43.400
There was a remarkable set of ideas,
link |
00:41:46.400
just the idea that effective altruism is
link |
00:41:48.400
unmanly, I think, was driving a lot of criticism.
link |
00:41:53.400
Okay, so I love fighting.
link |
00:41:56.400
I've been in street fights my whole life.
link |
00:41:58.400
I'm as alpha in everything I do as it gets.
link |
00:42:03.400
And the fact that I and Joe Rogan said
link |
00:42:06.400
that I thought Scent of a Woman is a better movie
link |
00:42:09.400
than John Wick put me into this beta category
link |
00:42:14.400
amongst people who are basically saying that,
link |
00:42:19.400
yeah, unmanly or it's not tough,
link |
00:42:21.400
it's not some principled view of strength
link |
00:42:25.400
that is represented by it. It's possible.
link |
00:42:28.400
So actually, how do you think about this?
link |
00:42:30.400
Because to me, altruism, especially effective altruism,
link |
00:42:36.400
is, I don't know what the female version of that is,
link |
00:42:42.400
but on the male side, manly as fuck, if I may say so.
link |
00:42:46.400
So how do you think about that kind of criticism?
link |
00:42:51.400
I think people who would make that criticism
link |
00:42:53.400
are just occupying a state of mind
link |
00:42:56.400
that I think is just so different from my state of mind
link |
00:42:59.400
that I kind of struggle to maybe even understand it,
link |
00:43:02.400
where if something's manly or unmanly or feminine
link |
00:43:06.400
or unfeminine, I'm like, I don't care.
link |
00:43:08.400
Is it the right thing to do or the wrong thing to do?
link |
00:43:11.400
Let me put it not in terms of man or woman,
link |
00:43:14.400
because I don't think that's useful.
link |
00:43:16.400
But I think there's a notion of acting out of fear
link |
00:43:21.400
or as opposed to out of principle and strength.
link |
00:43:26.400
Yeah.
link |
00:43:27.400
So, okay, yeah.
link |
00:43:28.400
Here's something that I do feel as an intuition
link |
00:43:32.400
and that I think drives some people who do find
link |
00:43:35.400
kind of Ayn Rand attractive and so on as a philosophy,
link |
00:43:39.400
which is a kind of taking control of your own life
link |
00:43:43.400
and having power over how you're steering your life
link |
00:43:48.400
and not kind of kowtowing to others,
link |
00:43:53.400
really thinking things through.
link |
00:43:55.400
I find that set of ideas just very compelling
link |
00:43:58.400
and inspirational.
link |
00:44:00.400
But I actually think of effective altruism
link |
00:44:02.400
as really that side of my personality.
link |
00:44:05.400
It's like, it scratches that itch,
link |
00:44:07.400
where you are just not taking the kind of priorities
link |
00:44:11.400
that society is giving you as granted.
link |
00:44:14.400
Instead, you're choosing to act in accordance with
link |
00:44:18.400
the priorities that you think are most important in the world.
link |
00:44:22.400
And often that involves then doing quite unusual things
link |
00:44:29.400
from a societal perspective,
link |
00:44:30.400
like donating a large chunk of your earnings
link |
00:44:33.400
or working on these weird issues about AI
link |
00:44:36.400
and so on that other people might not understand.
link |
00:44:39.400
Yeah, I think that's a really gutsy thing to do.
link |
00:44:42.400
Just taking control at least at this stage.
link |
00:44:45.400
I mean, that's you taking ownership not of just yourself
link |
00:44:52.400
but your presence in this world that's full of suffering
link |
00:44:57.400
and saying as opposed to being paralyzed by that notion,
link |
00:45:00.400
it's taking control and saying I could do something.
link |
00:45:03.400
Yeah, exactly.
link |
00:45:04.400
I mean, that's really powerful.
link |
00:45:05.400
But the one thing I personally hate too about the left
link |
00:45:11.400
currently, that I think those folks detect,
link |
00:45:14.400
is the social signaling.
link |
00:45:17.400
When you look at yourself sort of late at night,
link |
00:45:21.400
would you do everything you're doing
link |
00:45:23.400
in terms of effective altruism if your name,
link |
00:45:27.400
because you're quite popular,
link |
00:45:28.400
but if your name was totally unattached to it,
link |
00:45:31.400
if it was in secret?
link |
00:45:32.400
Yeah, I mean, I think I would.
link |
00:45:35.400
To be honest, I think the kind of popularity is like,
link |
00:45:39.400
you know, it's a mixed bag but there are serious costs
link |
00:45:43.400
and I don't particularly, I don't like love it.
link |
00:45:47.400
Like it means you get all these people calling you a cock
link |
00:45:49.400
on Joe Rogan.
link |
00:45:50.400
It's like not the most fun thing.
link |
00:45:52.400
But you also get a lot of sort of brownie points
link |
00:45:55.400
for doing good for the world.
link |
00:45:57.400
Yeah, you do.
link |
00:45:58.400
But I think my ideal life, I would be like in some library
link |
00:46:01.400
solving logic puzzles all day
link |
00:46:04.400
and I'd like really be like learning maths and so on.
link |
00:46:07.400
And have a good body of friends and so on.
link |
00:46:11.400
So your instinct for effective altruism is something deep.
link |
00:46:14.400
It's not one that is communicating socially.
link |
00:46:19.400
It's more in your heart you want to do good for the world.
link |
00:46:23.400
Yeah, I mean, so we can look back to early Giving What We Can.
link |
00:46:27.400
So, you know, we were setting this up, me and Toby.
link |
00:46:32.400
And I really thought that doing this would be a big hit
link |
00:46:36.400
to my academic career because I was now spending, you know,
link |
00:46:39.400
at that time more than half my time setting up this nonprofit
link |
00:46:42.400
at the crucial time when you should be like producing
link |
00:46:45.400
your best academic work and so on.
link |
00:46:47.400
And it was also the case at the time, it was kind of like
link |
00:46:51.400
the Toby Ord Club.
link |
00:46:53.400
You know, he was the most popular.
link |
00:46:55.400
There was this personal interest story around him
link |
00:46:57.400
and his plans to donate.
link |
00:46:59.400
Sorry to interrupt, but Toby was donating a large amount.
link |
00:47:02.400
Can you tell just briefly what he was doing?
link |
00:47:05.400
Yeah, so he made this public commitment to give everything over
link |
00:47:09.400
and above £20,000 per year to the most effective causes.
link |
00:47:14.400
And even as a graduate student, he was still donating
link |
00:47:17.400
about 15, 20% of his income, which is quite significant
link |
00:47:21.400
given that graduate students are not known for being super wealthy.
link |
00:47:24.400
That's right.
link |
00:47:25.400
And when we launched giving what we can,
link |
00:47:27.400
the media just loved this as like a personal interest story.
link |
00:47:31.400
So the story about him and his pledge was the most,
link |
00:47:36.400
yeah, it was actually the most popular news story of the day.
link |
00:47:40.400
And we kind of ran the same story a year later,
link |
00:47:42.400
and it was the most popular news story of the day
link |
00:47:44.400
a year later too.
link |
00:47:46.400
And so it really was kind of several years before
link |
00:47:52.400
then. I was also kind of giving more talks
link |
00:47:54.400
and starting to do more writing,
link |
00:47:55.400
and then especially with, you know,
link |
00:47:57.400
I wrote this book, Doing Good Better,
link |
00:47:59.400
that then there started to be kind of attention and so on.
link |
00:48:03.400
But deep inside your own relationship with effective altruism
link |
00:48:07.400
was, I mean, it had nothing to do with the publicity.
link |
00:48:12.400
Did you see yourself, how did the publicity connect with it?
link |
00:48:17.400
Yeah, I mean, that's kind of what I'm saying
link |
00:48:19.400
is I think the publicity came like several years afterwards.
link |
00:48:22.400
I mean, at the early stage when we set up giving what we can,
link |
00:48:25.400
it was really just every person we get to pledge 10% is,
link |
00:48:29.400
you know, something like $100,000 over their lifetime.
link |
00:48:34.400
That's huge.
link |
00:48:35.400
And so it was just we had started with 23 members.
link |
00:48:38.400
Every single person was just this like kind of huge accomplishment.
link |
00:48:42.400
And at the time I just really thought, you know,
link |
00:48:45.400
maybe over time we'll have 100 members
link |
00:48:47.400
and that'll be like amazing.
link |
00:48:49.400
Whereas now we have, you know,
link |
00:48:51.400
over 4,000 members and one and a half billion dollars pledged.
link |
00:48:53.400
That would have been just unimaginable to me at the time when I was first
link |
00:48:58.400
kind of getting this, you know, getting this stuff off the ground.
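As a rough illustration of the arithmetic behind that kind of figure, here is a minimal sketch in Python. The $28,000 typical US wage is the figure quoted at the end of this episode, the 40-year working life is an assumption for illustration, and the 4,000-member count is the one mentioned above; none of this is an official Giving What We Can estimate.

```python
# Back-of-the-envelope pledge arithmetic; all inputs are illustrative assumptions.

typical_us_wage = 28_000   # USD per year, the "typical US wage" quoted later in the episode
working_years = 40         # assumed length of a working life
pledge_fraction = 0.10     # the Giving What We Can 10% pledge

lifetime_donation = typical_us_wage * working_years * pledge_fraction
print(f"One pledger at a typical US wage: ~${lifetime_donation:,.0f} over a lifetime")

members = 4_000            # membership figure mentioned in the conversation
total = members * lifetime_donation
print(f"{members:,} such pledgers: ~${total / 1e9:.2f} billion pledged in total")
```

The billion and a half dollars actually pledged, mentioned above, reflects members' real incomes and pledge sizes, which can be much larger than this toy calculation assumes.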
link |
00:49:01.400
So can we talk about poverty and the biggest problems
link |
00:49:09.400
that you think in the near term effective altruism
link |
00:49:13.400
can attack, and go through each one.
link |
00:49:15.400
So poverty obviously is a huge one.
link |
00:49:18.400
Yeah.
link |
00:49:19.400
How can we help?
link |
00:49:21.400
Great. Yeah. So poverty is absolutely this huge problem,
link |
00:49:24.400
700 million people in extreme poverty, living on less than $2 per day,
link |
00:49:29.400
and what that means is what $2 would buy in the US.
link |
00:49:35.400
So think about that.
link |
00:49:36.400
It's like some rice, maybe some beans.
link |
00:49:38.400
It's very, you know, really not much.
link |
00:49:41.400
And at the same time we can do an enormous amount
link |
00:49:44.400
to improve the lives of people in extreme poverty.
link |
00:49:47.400
So the things that we tend to focus on
link |
00:49:49.400
are interventions in global health.
link |
00:49:52.400
And that's for a couple of reasons.
link |
00:49:55.400
One is that global health just has this amazing track record.
link |
00:49:58.400
Life expectancy globally is up 50% relative to 60 or 70 years ago.
link |
00:50:03.400
We've eradicated smallpox, which killed 2 million people every year,
link |
00:50:07.400
almost eradicated polio.
link |
00:50:09.400
Second is that we just have great data on what works
link |
00:50:13.400
when it comes to global health.
link |
00:50:15.400
So we just know that bed nets protect children
link |
00:50:19.400
and prevent them from dying from malaria.
link |
00:50:22.400
And then the third is just that it's extremely cost effective.
link |
00:50:26.400
So it costs $5 to buy one bed net,
link |
00:50:29.400
which protects two children for two years against malaria.
link |
00:50:32.400
If you spend about $3,000 on bed nets,
link |
00:50:34.400
then statistically speaking you're going to save a child's life.
link |
00:50:38.400
And there are other interventions too.
link |
00:50:41.400
And so given the people in such suffering
link |
00:50:44.400
and we have this opportunity to, you know,
link |
00:50:48.400
do such huge good for such low cost, well, yeah, why not?
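To make the cost-effectiveness reasoning concrete, here is a minimal sketch using only the figures quoted in this answer ($5 per net, two children protected per net, roughly $3,000 per statistical life saved). The $10,000 donation is an arbitrary example, and GiveWell's actual published models are far more detailed than this.

```python
# Rough bed-net arithmetic using the round numbers quoted in the conversation.
# These are conversational figures, not GiveWell's current published estimates.

cost_per_net = 5.0             # USD per insecticide-treated net, as quoted
children_per_net = 2           # children protected per net, as quoted
cost_per_life_saved = 3_000.0  # USD per statistical life saved, as quoted

donation = 10_000.0            # example donation size (arbitrary assumption)

nets = donation / cost_per_net
children_protected = nets * children_per_net
expected_lives_saved = donation / cost_per_life_saved

print(f"${donation:,.0f} buys ~{nets:,.0f} nets, protecting ~{children_protected:,.0f} children")
print(f"and saves ~{expected_lives_saved:.1f} lives in expectation")
```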
link |
00:50:52.400
So the individuals, so for me today,
link |
00:50:55.400
if I wanted to deal with the poverty, how would I help?
link |
00:51:00.400
And I wanted to say, I think donating 10% of your income
link |
00:51:04.400
is a very interesting idea or some percentage
link |
00:51:06.400
or setting some kind of bar and sticking to it.
link |
00:51:10.400
So how do we then take the step towards the effective part?
link |
00:51:15.400
So you've conveyed some notions, but who do you give the money to?
link |
00:51:20.400
Yeah, so Give Well, this organization I mentioned is...
link |
00:51:24.400
Give Well.
link |
00:51:25.400
Well, it makes charity recommendations
link |
00:51:27.400
and some of its top recommendations.
link |
00:51:29.400
So Against Malaria Foundation is this organization
link |
00:51:32.400
that buys and distributes these insecticide-treated bed nets.
link |
00:51:37.400
And then it has a total of seven charities
link |
00:51:40.400
that it recommends very highly.
link |
00:51:42.400
So that recommendation, is it almost like a stamp of approval?
link |
00:51:47.400
Or is there some metrics?
link |
00:51:49.400
So what are the ways that Give Well conveys
link |
00:51:53.400
that this is a great charity organization?
link |
00:51:57.400
Yeah, so Give Well is looking at metrics
link |
00:52:00.400
and it's trying to compare charities ultimately
link |
00:52:03.400
in the number of lives that you can save
link |
00:52:06.400
or an equivalent benefit.
link |
00:52:08.400
So one of the charities that it recommends
link |
00:52:10.400
is Give Directly, which simply just transfers cash
link |
00:52:14.400
to the poorest families,
link |
00:52:16.400
where a poor family will get a cash transfer of $1,000.
link |
00:52:20.400
And they kind of regard that as the baseline intervention
link |
00:52:23.400
because it's so simple and people, you know,
link |
00:52:25.400
they know best how to benefit themselves.
link |
00:52:29.400
That's quite powerful, by the way.
link |
00:52:31.400
So before Give Well, before the effective altruism movement,
link |
00:52:34.400
was there, I imagine there's a huge amount of corruption,
link |
00:52:38.400
funny enough, in charity organizations,
link |
00:52:41.400
or misuse of money.
link |
00:52:43.400
So there was nothing like Give Well before that?
link |
00:52:46.400
No, I mean, there were some, so I mean, the charity corruption,
link |
00:52:49.400
I mean, obviously there's some,
link |
00:52:51.400
I don't think it's a huge issue,
link |
00:52:54.400
they're also just focusing on the wrong things.
link |
00:52:57.400
Prior to Give Well, there were some organizations
link |
00:52:59.400
like Charity Navigator, which were more aimed
link |
00:53:02.400
at worrying about corruption and so on.
link |
00:53:04.400
So they weren't saying, these are the charities
link |
00:53:06.400
where you're going to do the most good.
link |
00:53:08.400
Instead, it was like, how good are the charity's financials?
link |
00:53:12.400
How good is its health? Are they transparent?
link |
00:53:14.400
And yeah, so that would be more useful
link |
00:53:16.400
for weeding out some of those worst charities.
link |
00:53:18.400
So Give Well is just taking this a step further.
link |
00:53:21.400
Sort of in this 21st century of data,
link |
00:53:24.400
it's actually looking at the effective part.
link |
00:53:28.400
Yeah, so it's like, you know, if you know the Wirecutter,
link |
00:53:31.400
if you want to buy a pair of headphones,
link |
00:53:33.400
they will just look at all the headphones and be like,
link |
00:53:35.400
these are the best headphones you can buy.
link |
00:53:37.400
That's the idea with Give Well.
link |
00:53:39.400
Okay, so do you think there's a bar of what suffering is?
link |
00:53:44.400
And do you think one day we can eradicate suffering
link |
00:53:47.400
in our world amongst humans?
link |
00:53:50.400
Let's talk humans for now.
link |
00:53:52.400
Talk humans, but in general, yeah, actually.
link |
00:53:55.400
So there's a colleague of mine,
link |
00:53:59.400
who kind of coined the term abolitionism for the idea
link |
00:54:01.400
that we should just be trying to abolish suffering.
link |
00:54:03.400
And in the long run, I mean,
link |
00:54:05.400
I don't expect it anytime soon, but I think we can.
link |
00:54:08.400
I think that would require, you know,
link |
00:54:10.400
quite drastic changes to the way society is structured
link |
00:54:14.400
and perhaps even the, you know,
link |
00:54:19.400
in fact, even changes to human nature.
link |
00:54:22.400
But I do think that suffering whenever that occurs is bad
link |
00:54:25.400
and we should want it to not occur.
link |
00:54:28.400
So there's a line.
link |
00:54:31.400
There's a gray area between suffering.
link |
00:54:33.400
Now I'm Russian, so I romanticize some aspects of suffering.
link |
00:54:38.400
There's a gray line between struggle,
link |
00:54:40.400
gray area between struggle and suffering.
link |
00:54:44.400
So one, do we want to eradicate all struggle in the world?
link |
00:54:51.400
So there's an idea, you know, that the human condition
link |
00:54:59.400
inherently has suffering in it and it's a creative force.
link |
00:55:04.400
It's a struggle of our lives and we somehow grow from that.
link |
00:55:09.400
How do you think about that?
link |
00:55:13.400
I agree that's true.
link |
00:55:15.400
So, you know, often great artists can also be suffering from,
link |
00:55:21.400
you know, major health conditions or depression and so on.
link |
00:55:24.400
Or they come from abusive parents.
link |
00:55:26.400
Yeah, for example.
link |
00:55:27.400
Most great artists, I think, come from abusive parents.
link |
00:55:29.400
Yeah, that seems to be at least commonly the case.
link |
00:55:32.400
But I want to distinguish between suffering as being instrumentally good,
link |
00:55:37.400
you know, it causes people to produce good things
link |
00:55:40.400
and whether it's intrinsically good.
link |
00:55:42.400
And I think intrinsically it's always bad.
link |
00:55:44.400
And so if we can produce these, you know, great achievements
link |
00:55:47.400
via some other means where, you know, if we look at the scientific enterprise,
link |
00:55:53.400
we've produced incredible things.
link |
00:55:55.400
Often from people who aren't suffering, who have, you know,
link |
00:55:59.400
pretty good lives.
link |
00:56:00.400
They're just, they're driven instead of, you know,
link |
00:56:02.400
being pushed by a sense of anguish.
link |
00:56:04.400
They're being driven by intellectual curiosity.
link |
00:56:06.400
If we can instead produce a society where it's all carrot and no stick,
link |
00:56:11.400
that's better from my perspective.
link |
00:56:13.400
Yeah, but I'm going to have to disagree with the notion that that's possible.
link |
00:56:17.400
But I would say most of the suffering in the world is not productive.
link |
00:56:22.400
So I would dream of effective altruism curing that suffering.
link |
00:56:27.400
Yeah.
link |
00:56:28.400
But then I would say that there is some suffering that is productive
link |
00:56:31.400
that we want to keep, but that's not even the focus,
link |
00:56:36.400
because most of the suffering is just absurd.
link |
00:56:39.400
Yeah.
link |
00:56:40.400
It needs to be eliminated.
link |
00:56:42.400
So let's not even romanticize this notion I usually have,
link |
00:56:46.400
but nevertheless struggle has some kind of inherent value, to me at least.
link |
00:56:53.400
Yeah.
link |
00:56:54.400
You're right.
link |
00:56:55.400
There's some elements of human nature that also have to be modified
link |
00:56:58.400
in order to cure all suffering.
link |
00:57:00.400
Yeah.
link |
00:57:01.400
I mean, there's an interesting question of whether it's possible.
link |
00:57:03.400
So at the moment, you know, most of the time we're kind of neutral,
link |
00:57:06.400
and then we burn ourselves and that's negative and that's really good
link |
00:57:10.400
that we get that negative signal because it means we won't burn ourselves again.
link |
00:57:14.400
There's a question like, could you design agents, humans,
link |
00:57:19.400
such that you're not hovering around the zero level,
link |
00:57:22.400
you're hovering at like bliss.
link |
00:57:23.400
Yeah.
link |
00:57:24.400
And then you touch the flame and you're like, oh no,
link |
00:57:26.400
you're just at slightly less bliss.
link |
00:57:27.400
Yeah.
link |
00:57:28.400
But that's really bad compared to the bliss you are normally in.
link |
00:57:32.400
So that you can have like a gradient of bliss instead of like pain and pleasure.
link |
00:57:35.400
Well, on that point, I think it's a really important point on the experience
link |
00:57:40.400
of suffering, the relative nature of it.
link |
00:57:45.400
I mean, having grown up in the Soviet Union,
link |
00:57:48.400
we were quite poor by any measure when I was a child,
link |
00:57:57.400
but it didn't feel like you were poor because everybody around you was poor.
link |
00:58:01.400
And then in America, for the first time, I'm
link |
00:58:06.400
beginning to feel poor, because things are different here.
link |
00:58:11.400
There's some cultural aspects to it that really emphasize that it's good to be rich.
link |
00:58:16.400
And then there's just the notion that there is a lot of income inequality
link |
00:58:20.400
and therefore you experience that inequality.
link |
00:58:22.400
That's where suffering comes from.
link |
00:58:23.400
So what do you think about the inequality of suffering
link |
00:58:27.400
that we have to think about?
link |
00:58:31.400
Do you think we have to think about that as part of effective altruism?
link |
00:58:37.400
Yeah.
link |
00:58:38.400
I think things just vary in terms of whether you get benefits
link |
00:58:43.400
or costs from them just in relative terms or in absolute terms.
link |
00:58:46.400
So a lot of the time, yeah, there's this hedonic treadmill
link |
00:58:49.400
where money is useful
link |
00:58:58.400
or good for you because it helps you buy things,
link |
00:59:00.400
but there's also a status component too.
link |
00:59:02.400
And that status component is kind of zero sum.
link |
00:59:05.400
Like you were saying, in Russia, no one felt poor
link |
00:59:10.400
because everyone around you was poor,
link |
00:59:13.400
whereas now you've got this, these other people who are super rich
link |
00:59:18.400
and maybe that makes you feel less good about yourself.
link |
00:59:24.400
There are some other things, however, which are just good or bad in absolute terms.
link |
00:59:28.400
So commuting, for example, people just hate it.
link |
00:59:33.400
It doesn't really change.
link |
00:59:34.400
Knowing that other people are commuting too doesn't make it any kind of less bad.
link |
00:59:40.400
But to push back on that for a second, I mean, yes,
link |
00:59:43.400
but also if some people are on horseback,
link |
00:59:49.400
your commute on the train might feel a lot better.
link |
00:59:52.400
There is a relative, I mean, everybody's complaining about society today,
link |
00:59:58.400
forgetting how much better it is, the better angels of our nature,
link |
01:00:04.400
how the technology is fundamentally improving most of the world's lives.
link |
01:00:09.400
And actually there's some psychological research on the well being benefits of volunteering,
link |
01:00:16.400
where people who volunteer tend to just feel happier about their lives.
link |
01:00:21.400
And one of the suggested explanations is that it extends your reference class.
link |
01:00:25.400
So you're no longer comparing yourself to the Joneses, who have their slightly better car,
link |
01:00:30.400
but you realize that people are in much worse conditions than you.
link |
01:00:34.400
And so now your life doesn't seem so bad.
link |
01:00:37.400
That's actually on the psychological level.
link |
01:00:39.400
One of the fundamental benefits of effective altruism is, I mean,
link |
01:00:45.400
I guess it's the altruism part of effective altruism,
link |
01:00:48.400
is that exposing yourself to the suffering in the world allows you to be, yeah, happier
link |
01:00:56.400
and actually allows you in a sort of meditative, introspective way,
link |
01:01:01.400
realize that you don't need most of the wealth you have to be happy.
link |
01:01:07.400
Absolutely. I mean, I think effective altruism has been this huge benefit for me.
link |
01:01:11.400
And I really don't think that if I had more money to live on,
link |
01:01:14.400
that that would change my level of well being at all.
link |
01:01:17.400
Whereas engaging in something that I think is meaningful,
link |
01:01:21.400
that I think is steering humanity in a positive direction, that's extremely rewarding.
link |
01:01:26.400
And so, yeah, I mean, despite my best attempts at sacrifice,
link |
01:01:32.400
I think I've actually ended up happier as a result of engaging in effective altruism than I would have done.
link |
01:01:38.400
That's an interesting idea.
link |
01:01:40.400
So let's talk about animal welfare.
link |
01:01:43.400
Easy question. What is consciousness?
link |
01:01:46.400
Especially as it has to do with the capacity to suffer.
link |
01:01:50.400
I think there seems to be a connection between how conscious something is,
link |
01:01:55.400
the amount of consciousness and its ability to suffer.
link |
01:01:59.400
And that all comes into play about us thinking how much suffering there is in the world with regard to animals.
link |
01:02:05.400
So how do you think about animal welfare and consciousness?
link |
01:02:08.400
Okay. Well, consciousness, easy question.
link |
01:02:11.400
Yeah, I mean, I think we don't have a good understanding of consciousness.
link |
01:02:14.400
My best guess is it's got...
link |
01:02:16.400
And by consciousness, I'm meaning what it feels like to be you,
link |
01:02:20.400
the subjective experience that seems to be different from everything else we know about in the world.
link |
01:02:26.400
Yeah, I think it's clear, it's very poorly understood at the moment.
link |
01:02:29.400
I think it has something to do with information processing.
link |
01:02:32.400
So the fact that the brain is a computer or something like a computer.
link |
01:02:36.400
So that would mean that very advanced AI could be conscious.
link |
01:02:41.400
Information processors in general could be conscious with some suitable complexity.
link |
01:02:46.400
But that phrase, some suitable complexity, raises the question of whether greater complexity creates some kind of greater consciousness,
link |
01:02:53.400
which relates to animals.
link |
01:02:55.400
If it's an information processing system and it's smaller and smaller,
link |
01:03:00.400
is an ant less conscious than a cow, less conscious than a monkey?
link |
01:03:06.400
Yeah, and again, this super hard question, but I think my best guess is yes.
link |
01:03:12.400
Like if I think, well, consciousness, it's not some magical thing that appears out of nowhere.
link |
01:03:17.400
It's not, you know, like Descartes thought, that it just comes in from this other realm
link |
01:03:21.400
and then enters through the pineal gland in your brain, and that's kind of the soul, and it's conscious.
link |
01:03:28.400
So it's got something to do with what's going on in your brain.
link |
01:03:33.400
A chicken has a brain one three-hundredth the size of the one you have.
link |
01:03:38.400
Ants, I don't know how small it is, maybe it's a millionth the size.
link |
01:03:42.400
My best guess, which I may well be wrong about because this is so hard,
link |
01:03:47.400
is that in some relevant sense, the chicken is experiencing consciousness to a lesser degree than the human
link |
01:03:54.400
and the ants significantly less again.
link |
01:03:56.400
I don't think it's as little as one three-hundredth as much, though.
link |
01:04:00.400
There's evolutionary reasons for thinking that like the ability to feel pain comes on the scene relatively early on.
link |
01:04:06.400
And we have lots of our brain dedicated to stuff that doesn't seem to have anything to do with consciousness,
link |
01:04:11.400
language processing and so on.
link |
01:04:13.400
So there's a lot of complicated questions there that we can't ask the animals about.
link |
01:04:20.400
But it seems that there's easy questions in terms of suffering, which is things like factory farming that could be addressed.
link |
01:04:29.400
Is that the lowest hanging fruit, if I may use crude terms here, of animal welfare?
link |
01:04:36.400
Absolutely, I think that's the lowest hanging fruit.
link |
01:04:38.400
So at the moment we kill, we raise and kill about 50 billion animals every year.
link |
01:04:44.400
So how many?
link |
01:04:45.400
50 billion.
link |
01:04:47.400
So for every human on the planet, several times that number are being killed.
link |
01:04:53.400
And the vast majority of them are raised in factory farms where basically whatever your view on animals,
link |
01:04:59.400
I think you should agree, even if you think, well, maybe it's not bad to kill an animal,
link |
01:05:03.400
if the animal was raised in good conditions.
link |
01:05:05.400
That's just not the empirical reality.
link |
01:05:07.400
The empirical reality is that they are kept in incredible cage confinement.
link |
01:05:12.400
They are debeaked or de-tailed without anesthetic.
link |
01:05:24.400
I think when a chicken gets killed, that's the best thing that happened to the chicken in the course of its life.
link |
01:05:30.400
And it's also completely unnecessary.
link |
01:05:32.400
This is in order to save a few pence off the price of meat or the price of eggs.
link |
01:05:37.400
And we have indeed found it's also just inconsistent with consumer preferences as well.
link |
01:05:43.400
People who buy the products, when you do surveys, are extremely against suffering in factory farms.
link |
01:05:52.400
It's just they don't appreciate how bad it is and just tend to go with easy options.
link |
01:05:57.400
And so then the best, the most effective programs I know of at the moment are nonprofits that go to companies
link |
01:06:04.400
and work with companies to get them to take a pledge to cut certain sorts of animal products,
link |
01:06:11.400
like eggs from cage confinement out of their supply chain.
link |
01:06:15.400
And it's now the case that the top 50 food retailers and fast food companies
link |
01:06:22.400
have all made these kind of cage-free pledges.
link |
01:06:25.400
And when you do the numbers, you get the conclusion that every dollar you're giving to these nonprofits,
link |
01:06:30.400
there's hundreds of chickens being spared from cage confinement.
link |
01:06:33.400
And then they're working on other types of animals and other products too.
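A similarly rough sketch of how a "hundreds of chickens per dollar" figure gets derived is below. Every input here is a hypothetical round number chosen for illustration, not a published estimate from any particular nonprofit or campaign.

```python
# Illustrative corporate cage-free campaign arithmetic; all inputs are hypothetical.

campaign_cost = 2_000_000            # USD spent on corporate campaigns (assumption)
hens_affected_per_year = 50_000_000  # hens moved out of cage confinement per year (assumption)
years_of_effect = 10                 # assumed duration of the corporate pledges

hen_years_spared = hens_affected_per_year * years_of_effect
hen_years_per_dollar = hen_years_spared / campaign_cost
print(f"~{hen_years_per_dollar:,.0f} hen-years spared from cage confinement per dollar")
```

The real estimates behind the claim in the conversation come from detailed analyses of specific campaigns, but the basic shape of the calculation is this: a modest campaign budget divided across a very large number of affected animals.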
link |
01:06:39.400
So is that the most effective way to have a ripple effect essentially?
link |
01:06:44.400
As opposed to directly having regulation from the top that says you can't do this.
link |
01:06:51.400
So I would be more open to the regulation approach, but at least in the U.S.
link |
01:06:56.400
there's quite intense regulatory capture from the agricultural industry.
link |
01:07:01.400
And so attempts that we've seen to try and change regulation, it's been a real uphill struggle.
link |
01:07:09.400
There are some examples of ballot initiatives where the people have been able to vote in a ballot
link |
01:07:16.400
to say we want to ban eggs from cage conditions, and that's been huge, that's been really good.
link |
01:07:21.400
But beyond that, it's much more limited.
link |
01:07:24.400
So I've been really interested in the idea of hunting in general and wild animals and seeing nature
link |
01:07:32.400
as a form of cruelty that I am ethically more okay with, just from my perspective.
link |
01:07:43.400
And then I read about wild animal suffering.
link |
01:07:47.400
I'm just giving you a sense of how I felt, because factory farming is so bad that living in the woods seemed good by comparison.
link |
01:08:00.400
And yet when you actually start to think about it, all of the animals in the wild are living in terrible poverty.
link |
01:08:11.400
So you have all the medical conditions, all of that, I mean, they're living horrible lives that could be improved.
link |
01:08:18.400
That's a really interesting notion that I think may not even be useful to talk about because factory farming is such a big thing to focus on.
link |
01:08:26.400
But it's nevertheless an interesting notion to think of all the animals in the wild as suffering in the same way that humans in poverty are suffering.
link |
01:08:34.400
Yeah, I mean, and often even worse, so many animals are produced via r-selection, so you have a very large number of offspring in the expectation that only small numbers survive.
link |
01:08:46.400
And so for those animals, almost all of them just live short lives where they starve to death.
link |
01:08:52.400
So yeah, there's huge amounts of suffering in nature.
link |
01:08:54.400
I don't think we should pretend that it's this kind of wonderful paradise for most animals.
link |
01:09:04.400
Yeah, their life is filled with hunger and fear and disease.
link |
01:09:10.400
I agree with you entirely that when it comes to focusing on animal welfare, we should focus on factory farming.
link |
01:09:16.400
But we also should be aware to the reality of what life for most animals is like.
link |
01:09:23.400
So let's talk about a topic I've talked a lot about, and you've actually quite eloquently talked about, which is the third priority that effective altruism considers really important: existential risks.
link |
01:09:38.400
When you think about the existential risks that are facing our civilization, what's before us?
link |
01:09:45.400
What concerns you?
link |
01:09:46.400
What should we be thinking about, especially from an effective altruism perspective?
link |
01:09:51.400
Great, so the reason I started getting concerned about this was thinking about future generations, where the key idea is just that future people matter morally, and
link |
01:10:02.400
there are vast numbers of future people.
link |
01:10:05.400
If we don't cause our own extinction, there's no reason why civilization might not last a million years.
link |
01:10:11.400
I mean, that's if we last as long as a typical mammalian species.
link |
01:10:14.400
A billion years is when the earth is no longer habitable, or if we can take to the stars, then perhaps it's trillions of years beyond that.
link |
01:10:23.400
So the future could be very big indeed, and it seems like we're potentially very early on in civilization.
link |
01:10:29.400
Then the second idea is just, well, maybe there are things that are going to really derail that, things that actually could prevent us from having this long, wonderful civilization.
link |
01:10:37.400
And instead, could cause our own extinction, or otherwise perhaps lock ourselves into a very bad state.
link |
01:10:50.400
And what ways could that happen?
link |
01:10:53.400
Well, causing our own extinction: the development of nuclear weapons in the 20th century at least put on the table that we now had weapons powerful enough that you could very significantly destroy society.
link |
01:11:06.400
Perhaps an all out nuclear war would cause a nuclear winter.
link |
01:11:09.400
Perhaps that would be enough for the human race to go extinct.
link |
01:11:14.400
Why do you think we haven't done it?
link |
01:11:16.400
Sorry to interrupt.
link |
01:11:17.400
Why do you think we haven't done it yet?
link |
01:11:19.400
Is it surprising to you that, having had for the past few decades several thousand active, ready-to-launch nuclear warheads,
link |
01:11:33.400
we have not launched them ever since the initial use on Hiroshima and Nagasaki?
link |
01:11:43.400
I think it's a mix, partly luck.
link |
01:11:46.400
So I think it's definitely not inevitable that we haven't used them.
link |
01:11:49.400
So John F. Kennedy during the Cuban Missile Crisis put the odds of a nuclear exchange between the US and USSR at somewhere between one in three and even.
link |
01:11:58.400
So, you know, we really did come close.
link |
01:12:02.400
At the same time, I do think mutually assured destruction is a reason why people don't go to war.
link |
01:12:08.400
It would be, you know, why nuclear powers don't go to war.
link |
01:12:11.400
Do you think that holds? If you can linger on that for a second: my dad is a physicist, amongst other things.
link |
01:12:20.400
And he believes that nuclear weapons are actually just really hard to build, which is one of the really big benefits of them currently.
link |
01:12:31.400
So it's very hard, if you're crazy, to build or to acquire a nuclear weapon.
link |
01:12:38.400
So mutually assured destruction really works, or seems to work better, when it's nation states, when it's serious people, even if they're a little bit, you know, dictatorial and so on.
link |
01:12:52.400
Do you think this mutually assured destruction idea will carry, how far will it carry us in terms of different kinds of weapons?
link |
01:13:01.400
Oh, yeah, I think your point that nuclear weapons are very hard to build, and relatively easy to control because you can control fissile material, is a really important one.
link |
01:13:13.400
And future technology that's equally destructive might not have those properties.
link |
01:13:18.400
So for example, if in the future, people are able to design viruses, perhaps using a DNA printing kit that, you know, one can just buy.
link |
01:13:31.400
In fact, there are companies in the process of creating home DNA printing kits.
link |
01:13:41.400
Well, then perhaps that's just totally democratized, perhaps the power to wreak huge destruction is in the hands of most people in the world, or certainly most people with some effort.
link |
01:13:53.400
And then, yeah, I no longer trust mutually assured destruction, because for some people, the idea that they would die is just not a disincentive.
link |
01:14:03.400
There was a Japanese cult, for example, Aum Shinrikyo in the 90s. What they believed was that Armageddon was coming.
link |
01:14:12.400
If you died before Armageddon, you would get good karma, you wouldn't go to hell.
link |
01:14:19.400
If you died during Armageddon, maybe you would go to hell.
link |
01:14:23.400
And they had a biological weapons program and a chemical weapons program. When they were finally apprehended, they had stocks of sarin gas that were sufficient to kill 4 million people, and they had engaged in multiple terrorist acts.
link |
01:14:36.400
If they had had the ability to print a virus at home, that would have been very scary.
link |
01:14:42.400
So it's not impossible to imagine groups of people that hold that kind of belief, of death or suicide as a good thing, a passage into the next world and so on.
link |
01:14:57.400
And then you connect them with some weapons, and that ideology and weaponry together create serious problems for us.
link |
01:15:06.400
Let me ask you a quick question. What do you think is the line between killing most humans and killing all humans?
link |
01:15:13.400
How hard is it to kill everybody? Have you thought about this?
link |
01:15:19.400
I've thought about it a bit. I think it is very hard to kill everybody.
link |
01:15:22.400
So in the case of, let's say, an all out nuclear exchange, and let's say that leads to nuclear winter, we don't really know, but it might well happen. That would, I think, result in billions of deaths.
link |
01:15:37.400
Would it kill everybody? It's quite hard to see how it would kill everybody for a few reasons.
link |
01:15:45.400
One is just, there's just so many people, seven and a half billion people. So this bad event has to kill all, almost all of them.
link |
01:15:55.400
Secondly, people live in such a diversity of locations. So a nuclear exchange or a virus has to kill people who live on the coast of New Zealand, which is going to be climatically much more stable than other areas in the world.
link |
01:16:09.400
Or people who are on submarines or who have access to bunkers. So there's a very...
link |
01:16:16.400
I'm sure there's two guys in Siberia, just badasses. Human nature somehow just perseveres.
link |
01:16:25.400
And then the second thing is just, if there's some catastrophic event, people really don't want to die.
link |
01:16:31.400
So there's going to be huge amounts of effort to ensure that it doesn't affect everyone.
link |
01:16:37.400
Have you thought about what it takes to rebuild a society with much smaller numbers, like how big of a setback these kinds of things are?
link |
01:16:47.400
Yeah. So then that's something where there's a real uncertainty, I think, where at some point you just lose sufficient genetic diversity, such that you can't come back.
link |
01:16:58.400
It's unclear how small that population is, but if you've only got, say, a thousand people or fewer than a thousand, then maybe that's small enough.
link |
01:17:09.400
What about human knowledge?
link |
01:17:11.400
And then there's human knowledge. I mean, it's striking how quick, on geological timescales or evolutionary timescales, the progress in human knowledge has been. Agriculture we only invented in 10,000 BC.
link |
01:17:29.400
Cities were only, you know, around 3,000 BC, whereas a typical animal species lasts half a million to a million years.
link |
01:17:37.400
Do you think it's inevitable in some sense, the agriculture, everything that came, the industrial revolution, cars, planes, the internet, that level of innovation you think is inevitable?
link |
01:17:51.400
I think so, given how quickly it arose. In the case of agriculture, I think that was dependent on climate: the kind of glacial period was over, the earth warmed up a bit.
link |
01:18:07.400
That made it much more likely that humans would develop agriculture.
link |
01:18:12.400
When it comes to the industrial revolution, it, you know, again, only took a few thousand years from cities to the industrial revolution.
link |
01:18:21.400
If we think, okay, we've gone back to, let's say, an agricultural era, there's no reason why we would go extinct in the coming tens of thousands of years or hundreds of thousands of years.
link |
01:18:32.400
It seems just that it would be very surprising if we didn't rebound unless there's some special reason that makes things different.
link |
01:18:39.400
So perhaps we just have a much greater disease burden now. So HIV exists, it didn't exist before.
link |
01:18:48.400
And perhaps that's kind of latent, being suppressed by modern medicine and sanitation and so on, but would be a much bigger problem for some utterly destroyed society that was trying to rebound.
link |
01:19:04.400
Or maybe there's just something we don't know about.
link |
01:19:08.400
So another existential risk comes from the mysterious, the beautiful artificial intelligence.
link |
01:19:17.400
So what's the shape of your concerns about AI?
link |
01:19:22.400
I think there are quite a lot of concerns about AI and sometimes the different risks don't get distinguished enough.
link |
01:19:30.400
So the kind of classic worry, most closely associated with Nick Bostrom and Eliezer Yudkowsky, is that we at some point move from having narrow AI systems to artificial general intelligence.
link |
01:19:44.400
You get this very fast feedback effect where artificial intelligence helps you to build greater artificial intelligence.
link |
01:19:53.400
You have this one system that's suddenly very powerful, far more powerful than others, perhaps far more powerful than, you know, the rest of the world combined.
link |
01:20:05.400
And then secondly, it has goals that are misaligned with human goals.
link |
01:20:10.400
And so it pursues its own goals.
link |
01:20:12.400
It realizes, hey, there's this competition, namely from humans, it would be better if we eliminated them in just the same way as Homo sapiens eradicated the Neanderthals.
link |
01:20:22.400
In fact, Homo sapiens killed off most of the large animals that walked the planet.
link |
01:20:30.400
So that's kind of one set of worries.
link |
01:20:33.400
I think that's not my main worry, but I think these concerns shouldn't be dismissed as science fiction.
link |
01:20:40.400
I think it's something we should be taking very seriously.
link |
01:20:44.400
But it's not the thing you visualize when you're concerned about the biggest near-term risks.
link |
01:20:49.400
Yeah, I think it's, I think it's like one possible scenario that would be astronomically bad.
link |
01:20:55.400
I think that other scenarios that would also be extremely bad, comparably bad, are more likely to occur.
link |
01:21:01.400
So one is just we are able to control AI.
link |
01:21:05.400
So we're able to get it to do what we want it to do.
link |
01:21:09.400
And perhaps there's not like this fast takeoff of AI capabilities within a single system, it's distributed across many systems that do somewhat different things.
link |
01:21:19.400
But you do get very rapid economic and technological progress as a result that concentrates power into the hands of a very small number of individuals, perhaps a single dictator.
link |
01:21:30.400
And secondly, that single individual, or small group of individuals, or single country, is then able to lock in their values indefinitely via transmitting those values to artificial systems that have no reason to die.
link |
01:21:46.400
Like, you know, their code is copyable.
link |
01:21:49.400
Perhaps, you know, Donald Trump or Xi Jinping creates their kind of AI progeny in their own image. And once you have a society that's controlled by AI, you no longer have one of the main drivers of change
link |
01:22:06.400
historically, which is the fact that human life spans are, you know, only 100 years give or take.
link |
01:22:12.400
That's really interesting. So as opposed to sort of killing off all humans, it's locking in and creating a hell on earth, basically a set of principles under which the society operates that's extremely undesirable.
link |
01:22:28.400
So everybody is suffering indefinitely.
link |
01:22:30.400
Or it doesn't. I mean, it also doesn't need to be hell on earth. It could just be the wrong values. So we talked at the very beginning about how I want to see this kind of diversity of different values and exploration so that we can just work out what is kind of morally like
link |
01:22:46.400
what is good, what is bad, and then pursue the thing that's best.
link |
01:22:49.400
So actually, on the idea of wrong values, probably the beautiful thing is that there's no such thing as definitively right and wrong values, because we don't know the right answer.
link |
01:23:01.400
We just kind of have a sense of which value is more right, which is more wrong. So any kind of lock in makes a value wrong, because it prevents exploration of this kind.
link |
01:23:12.400
Yeah. And just, you know, imagine fascist values, you know, imagine if there was Hitler's utopia or Stalin's utopia or Donald Trump's or Xi Jinping's, forever.
link |
01:23:23.400
Yeah.
link |
01:23:25.400
You know, how good or bad would that be compared to the best possible future we could create? And my suggestion is it would really suck compared to the best possible future we could create.
link |
01:23:36.400
And you're just one individual. There's some individuals for whom Donald Trump is perhaps the best possible future.
link |
01:23:45.400
And so that's the whole point of us individuals exploring the space together.
link |
01:23:50.400
Exactly. Yeah.
link |
01:23:51.400
And we're trying to figure out which is the path that will make America great again.
link |
01:23:56.400
Yeah, exactly.
link |
01:23:57.400
So how can effective altruism help? I mean, this is a really interesting notion you're actually describing, of artificial intelligence being used as an extremely powerful technology in the hands of very few, potentially one person, to create some very undesirable effect.
link |
01:24:16.400
So as opposed to AI itself, again, the source of the undesirableness there is the human; AI is just a really powerful tool.
link |
01:24:25.400
So whether it's that, or whether AI just runs away from us completely.
link |
01:24:31.400
How, as individuals, as people in the effective altruism movement, can we think about something like this?
link |
01:24:40.400
We understand poverty and welfare.
link |
01:24:42.400
But this is a far-out, incredibly mysterious and difficult problem.
link |
01:24:47.400
Great. Well, I think there's three paths as an individual.
link |
01:24:50.400
So if you're thinking about, you know, career paths you can pursue.
link |
01:24:55.400
So one is going down the line of technical AI safety.
link |
01:24:58.400
So this is most relevant to the kind of AI-winning, AI-taking-over scenarios, and this is just technical work on current machine learning systems,
link |
01:25:11.400
sometimes going more theoretical too, on how we can ensure that an AI is able to learn human values and is able to act in the way that you want it to act.
link |
01:25:21.400
And that's a pretty mainstream issue and approach in machine learning today.
link |
01:25:27.400
So, you know, we definitely need more people doing that.
link |
01:25:31.400
Second is on the policy side of things, which I think is even more important at the moment, which is how should developments in AI be managed?
link |
01:25:40.400
On a political level, how can you ensure that the benefits of AI are widely distributed?
link |
01:25:47.400
That power isn't being concentrated in the hands of a small set of individuals?
link |
01:25:55.400
How do you ensure that there aren't arms races between different AI companies that might result in them, you know, cutting corners with respect to safety?
link |
01:26:06.400
And so there, the input we as individuals can have is, we're not talking about money, we're talking about effort.
link |
01:26:13.400
We're talking about career choices.
link |
01:26:15.400
Yeah, we're talking about career choice. Yeah.
link |
01:26:17.400
But then it is the case that supposing, you know, you're like, I've already decided my career and I'm doing something quite different.
link |
01:26:23.400
You can contribute with money, where at the Centre for Effective Altruism we set up the Long-Term Future Fund.
link |
01:26:30.400
So if you go on to effectivealtruism.org, you can donate, and a group of individuals will then work out what's the highest-value place they can donate to, to work on existential risk issues, with a particular focus on AI.
link |
01:26:46.400
And what's path number three?
link |
01:26:48.400
This was path number three.
link |
01:26:49.400
The donations were the third option I was thinking of.
link |
01:26:53.400
And then, yeah, you can also donate directly to organizations working on this like Center for Human Compatible AI at Berkeley, Future of Humanity Institute at Oxford, or other organizations too.
link |
01:27:08.400
Does AI keep you up at night?
link |
01:27:10.400
This kind of concern?
link |
01:27:12.400
Yeah, it's kind of a mix where I think it's very likely things are going to go well.
link |
01:27:19.400
I think we're going to be able to solve these problems. I think that's by far the most likely outcome, at least over the next...
link |
01:27:25.400
By far the most likely.
link |
01:27:26.400
So if you look at all the trajectories running away from our current moment in the next 100 years, you see AI creating destructive consequences as a small subset of those possible trajectories.
link |
01:27:41.400
Or at least, yeah, kind of existential, disruptive consequences. I think that's a small subset.
link |
01:27:46.400
At the same time, it still freaks me out.
link |
01:27:48.400
I mean, when we're talking about the entire future of civilization, then small probabilities, 1% probability, that's terrifying.
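One way to see why small probabilities are still terrifying here is a toy expected-value calculation. Every number below is a hypothetical assumption for illustration, not an estimate anyone in the conversation endorses.

```python
# Toy expected-value illustration; all figures are hypothetical assumptions.

future_people = 1e15     # assumed number of future people if civilization lasts a very long time
p_catastrophe = 0.01     # assumed 1% chance of an existential catastrophe
risk_reduction = 0.001   # assumed absolute reduction in that probability from safety work

expected_future_lives_lost = p_catastrophe * future_people
expected_lives_saved = risk_reduction * future_people

print(f"Expected future lives lost at a 1% risk: {expected_future_lives_lost:.1e}")
print(f"Expected future lives saved by a 0.1% risk reduction: {expected_lives_saved:.1e}")
```

Under assumptions like these, even a tiny reduction in the probability of catastrophe corresponds to an enormous expected number of future lives, which is the intuition driving the concern discussed here.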
link |
01:27:57.400
What do you think about Elon Musk's strong worry that we should be really concerned about existential risks of AI?
link |
01:28:05.400
Yeah, I mean, I think, broadly speaking, I think he's right.
link |
01:28:09.400
I think if we talked, we would probably have very different probabilities on how likely it is that we're doomed.
link |
01:28:16.400
But again, when it comes to talking about the entire future of civilization, it doesn't really matter if it's 1% or if it's 50%.
link |
01:28:23.400
We ought to be taking every possible safeguard we can to ensure that things go well rather than poorly.
link |
01:28:29.400
Last question. If you yourself could eradicate one problem from the world, what would that problem be?
link |
01:28:35.400
That's a great question. I don't know if I'm cheating in saying this, but I think the thing I would most want to change is just the fact that people...
link |
01:28:45.400
don't actually care about ensuring the long run future goes well.
link |
01:28:50.400
People don't really care about future generations. They don't think about it. It's not part of their aims.
link |
01:28:54.400
Well, in some sense, you're not cheating at all because in speaking the way you do and writing the things you're writing, you're addressing exactly this aspect.
link |
01:29:05.400
Exactly.
link |
01:29:06.400
That is your input into the effective altruism movement.
link |
01:29:10.400
So for that, well, thank you so much. It's an honor to talk to you. I really enjoyed it.
link |
01:29:14.400
Thanks so much for having me on.
link |
01:29:16.400
Thanks for listening to this conversation with William MacAskill, and thank you to our presenting sponsor, Cash App.
link |
01:29:22.400
Please consider supporting the podcast by downloading Cash App and using code lexpodcast.
link |
01:29:28.400
If you enjoy this podcast, subscribe on YouTube, review it with 5 stars on Apple Podcast, support on Patreon, or simply connect with me on Twitter at lexfridman.
link |
01:29:39.400
And now, let me leave you with some words from William McCaskill.
link |
01:29:44.400
One additional unit of income can do 100 times as much to benefit the extreme poor as it can to benefit you or I, earning the typical US wage of $28,000 a year.
link |
01:29:56.400
It's not often that you have two options, one of which is 100 times better than the other.
link |
01:30:01.400
Imagine a happy hour where you can either buy yourself a beer for $5 or buy someone else a beer for $0.05.
link |
01:30:09.400
If that were the case, we'd probably be pretty generous. Next round's on me.
link |
01:30:14.400
But that's effectively the situation we're in all the time.
link |
01:30:18.400
It's like a 99% off sale or buy one get 99 free.
link |
01:30:23.400
It might be the most amazing deal you'll see in your life.
link |
01:30:27.400
Thank you for listening and hope to see you next time.