I'm writing a new guide to careers to help artificial general intelligence (AGI) go well. Here's a summary of the guide's bottom lines as they stand; stay tuned for our full reasoning and for updates as our views evolve. In short: the chance of an AGI-driven technological explosion before 2030 — creating one of the most pivotal periods in history — is high enough to act on.| 80,000 Hours
Socialism is the most effective altruism. Who needs anything else? The repugnant philosophy of “Effective Altruism” offers nothing to movements for global justice.| www.currentaffairs.org
Texts on this and that.| Erich Grunewald's Blog
The internet's best blog!| nintil.com
love for Wave • why leave • where to • why there • what’s next| benkuhn.net
If we can accurately recognize good performance on alignment, we could elicit lots of useful alignment work from our models, even if they're playing the training game.| Planned Obsolescence
Effective altruism is a project that aims to find the best ways to help others, and put them into practice. It's partly a research field, which aims to identify the world's most pressing problems and the best solutions to them.| www.effectivealtruism.org
People are far better at their jobs than at anything else. Here are the best ways to help the most important century go well.| Cold Takes
Major AI companies can increase or reduce global catastrophic risks.| Cold Takes
Hypothetical stories where the world tries, but fails, to avert a global disaster.| Cold Takes
An overview of key potential factors (not just alignment risk) for whether things go well or poorly with transformative AI.| Cold Takes
Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is there a safe course?| Cold Takes
A few ways we might get very powerful AI systems to be safe.| Cold Takes
Four analogies for why "We don't see any misbehavior by this AI" isn't enough.| Cold Takes
Today's AI development methods risk training AIs to be deceptive, manipulative and ambitious. This might not be easy to fix as it comes up.| Cold Takes