We’ve released a paper, AI Control: Improving Safety Despite Intentional Subversion. This paper explores techniques that prevent AI catastrophes even… | www.lesswrong.com
People are far better at their jobs than at anything else. Here are the best ways to help the most important century go well. | Cold Takes
Major AI companies can increase or reduce global catastrophic risks. | Cold Takes
Hypothetical stories where the world tries, but fails, to avert a global disaster. | Cold Takes
An overview of key potential factors (not just alignment risk) for whether things go well or poorly with transformative AI. (https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/) | Cold Takes
Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is there a safe course? | Cold Takes
A few ways we might get very powerful AI systems to be safe. | Cold Takes