Reliably controlling AI systems much smarter than we are is an unsolved technical problem. And while it is a solvable problem, things could very easily go off the rails during a rapid intelligence explosion. Managing this will be extremely tense; failure could easily be catastrophic.

The old sorcerer
Has finally gone away!
Now the spirits he controls
Shall obey my commands too.

GOETHE, "THE SORCERER'S APPRENTICE"

SITUATIONAL AWARENESS
AGI will effectively be the most powerful weapon man has ever created. Neither “lockdown forever” nor “let ‘er rip” is a productive response; we can chart a smarter path.

FOR OUR POSTERITY
Society really cares about safety. Practically speaking, the binding constraint on deploying your AGI could well be your ability to align your AGI. Solving (scalable) alignment might be worth lots of $$$ and key to beating China.