Nate Soares argues that one of the core problems with AI alignment is that an AI system's capabilities will likely generalize to new domains much fas… | www.lesswrong.com
I sent a two-question survey to ~117 people working on long-term AI risk, asking about the level of existential risk from "humanity not doing enough… | forum.effectivealtruism.org
A guide for making the future go better. Humanity’s written history spans only five thousand years. Our yet-unwritten future could last for millions more - or it could end tomorrow. Staggering numbers of people will lead lives of flourishing or misery or never live at all, depending on what we do today. | What We Owe the Future
Sixteen weaknesses in the classic argument for AI risk | worldspiritsockpuppet.substack.com
Overview: What We Owe The Future (WWOTF) by Will MacAskill has recently been released with much fanfare. While I strongly agree that future people matter morally and we should act based on this, I think the book isn’t clear enough about MacAskill’s views on longtermist priorities, and to… | Foxy Scout