I’m writing a new guide to careers to help AGI go well. Here's a summary of the key messages as they stand.| benjamintodd.substack.com
I'm writing a new guide to careers to help artificial general intelligence (AGI) go well. Here's a summary of the bottom lines that'll be in the guide as it currently stands. Stay tuned to hear our full reasoning and updates as our views evolve. In short: The chance of an AGI-driven technological explosion before 2030, creating one of the most pivotal periods in history, is high enough to act on.| 80,000 Hours
This article explains key concepts that come up in the context of AI alignment. These terms are only attempts at gesturing at the underlying ideas, and the ideas are what is important. There is no strict consensus on which name should correspond to which idea, and different people use the terms differently.[1]| BlueDot Impact
Texts on this and that.| Erich Grunewald's Blog
People are far better at their jobs than at anything else. Here are the best ways to help the most important century go well.| Cold Takes
Major AI companies can increase or reduce global catastrophic risks.| Cold Takes
Hypothetical stories where the world tries, but fails, to avert a global disaster.| Cold Takes
An overview of key potential factors (not just alignment risk) for whether things go well or poorly with transformative AI. https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/| Cold Takes
Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is there a safe course?| Cold Takes
A few ways we might get very powerful AI systems to be safe.| Cold Takes
Four analogies for why "We don't see any misbehavior by this AI" isn't enough.| Cold Takes