Labs should do research to make AI systems safer, more interpretable, and more controllable, and they should publish that research. | ailabwatch.org
Labs should make a plan for aligning powerful systems they create, and they should publish it to elicit feedback, inform others’ plans and research (especially other labs and external alignment researchers who can support or complement their plan), and help them notice and respond to information when their plan needs to change. They should omit dangerous details if those exist. As their understanding of AI risk and safety techniques improves, they should update the plan. Sharing also enables... | ailabwatch.org
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems. | www.anthropic.com
DeepMind brought artificial intelligence into the mainstream. Now its CEO Demis Hassabis is issuing a warning. | TIME
I had a lot of fun chatting with Shane Legg - Founder and Chief AGI Scientist, Google DeepMind! | www.dwarkeshpatel.com