Scaling reinforcement learning, tracing circuits, and the path to fully autonomous agents| www.dwarkesh.com
Alignment Is Not All You Need: Other Problems in AI Safety| adamjones.me
What if artificial general intelligence (AGI) arrives in just a couple of years, triggering an explosion in science and technology that transforms life as we know it?| benjamintodd.substack.com
As the race to AGI intensifies, the national security state will get involved. The USG will wake from its slumber, and by 2027/28 we’ll get some form of government AGI project. No startup can handle superintelligence. Somewhere in a SCIF, the endgame will be on.| SITUATIONAL AWARENESS
Superintelligence will give a decisive economic and military advantage. China isn’t at all out of the game yet. In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers? And will we manage to avoid self-destruction along the way?| SITUATIONAL AWARENESS
Reliably controlling AI systems much smarter than we are is an unsolved technical problem. And while it is a solvable problem, things could very easily go off the rails during a rapid intelligence explosion. Managing this will be extremely tense; failure could easily be catastrophic.| SITUATIONAL AWARENESS
tl;dr Brooks' hypothesis in "The Mythical Man-Month" is that the nature of software development means that no further "order of magnitude" improvement in productivity should be expected from any single technique.| Ian Cooper - Staccato Signals
The nation’s leading AI labs treat security as an afterthought. Currently, they’re basically handing the key secrets for AGI to the CCP on a silver platter. Securing the AGI secrets and weights against the state-actor threat will be an immense effort, and we’re not on track.| SITUATIONAL AWARENESS
New AI research is being published every day and compounding at an unprecedented pace. How can one filter signal from noise? Are we fast approaching a plateau?| nextbigteng.substack.com
AI's rapid advancement is caused by three factors: raw compute increases, algorithmic efficiency improvements, and "unhobbling" processes. In the current super-accelerated decade, we enjoy a 10,000x AI performance boost every four years, making it likely that AI will exceed top human experts by 2027. There are good arguments that AI progress past 2027 could be slower, but also that it could be even faster.| jakobnielsenphd.substack.com
AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into ≤1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic. "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever."| SITUATIONAL AWARENESS