Moral Agreement on Saving the World

There appears to be a lot of disagreement in moral philosophy.  Whether or not these many apparent disagreements are deep and irresolvable, I believe there is at least one thing it is reasonable to agree on right now, whatever general moral view we adopt:  that it is very important to reduce the risk that all intelligent beings on this planet are eliminated by an enormous catastrophe, such as a nuclear war.  How we might in fact try to reduce such existential risks is discussed elsewhere.  My claim here is only that we – whether we’re consequentialists, deontologists, or virtue ethicists – should all agree that we should try to save the world.

According to consequentialism, we should maximize the good, where this is taken to be the goodness, from an impartial perspective, of outcomes.  Clearly one thing that makes an outcome good is that the people in it are doing well.  There is little disagreement here.  If the happiness or well-being of possible future people is just as important as that of people who already exist, and if they would have good lives, it is not hard to see how reducing existential risk is easily the most important thing in the whole world.  This is for the familiar reason that there are so many people who could exist in the future – there are trillions upon trillions… upon trillions.

There are so many possible future people that reducing existential risk is arguably the most important thing in the world, even if the well-being of these possible people were given only 0.001% as much weight as that of existing people.  Even on a wholly person-affecting view – according to which there’s nothing (apart from effects on existing people) to be said in favor of creating happy people – the case for reducing existential risk is very strong, since an existential catastrophe would also cut short the lives of the billions of people who already exist.
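To make the weighting claim concrete, here is a rough back-of-the-envelope calculation.  The figure of 10^18 possible future people is an illustrative assumption for this sketch, not a number from the post; estimates of this kind vary by many orders of magnitude.

\[
\underbrace{10^{18}}_{\text{assumed possible future people}} \times \underbrace{10^{-5}}_{0.001\%\ \text{weight}} \;=\; 10^{13} \;\gg\; 8 \times 10^{9} \;\approx\; \text{people alive today}.
\]

Even on these deliberately discounted terms, the weighted future population outweighs the present one by a factor of more than a thousand.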