Several arguments support the heuristic that we should help groups holding value systems different from our own when doing so is cheap, unless those groups prove uncooperative toward our values. This is true even if we don't directly care at all about other groups' value systems. Exactly how nice to be depends on the particulars of the situation.
When agents of differing values compete, they may often find it mutually advantageous to compromise rather than continuing to engage in zero-sum conflicts. Potential ways of encouraging cooperation include promoting democracy, tolerance and (moral) trade. Because a future without compromise could be many times worse than a future with it, advancing compromise seems an important undertaking.
Will we go extinct, or will we succeed in building a flourishing utopia? Discussions about the future trajectory of humanity often center on these two possibilities, ignoring that survival does not always imply utopian outcomes and that outcomes in which humans go extinct could differ tremendously in how much suffering they contain.
Space colonization would likely increase rather than decrease total suffering. Because many people nonetheless care about humanity’s spread into the cosmos, we should reduce risks of astronomical future suffering without opposing others’ spacefaring dreams. In general, we recommend focusing on making sure that an intergalactic future will be good if it happens, rather than on making sure there will be such a future.
The number of wild animals vastly exceeds that of animals on factory farms. Therefore, animal advocates should consider focusing their efforts on raising concern about the suffering that occurs in nature. In theory, engineering more humane ecological systems might be valuable. In practice, however, it seems more effective to promote the meme of caring about wild animals to other activists, academics, and other sympathetic groups.
This post is based on notes for a talk I gave at EAG Boston 2017. I talk about risks of severe suffering in the far future, or s-risks. Reducing these risks is the main focus of the Foundational Research Institute, the EA research group that I represent.