(This post is based on an overview talk I gave at UCL EA and Oxford AI society (recording here). Cross-posted to the Alignment Forum. Thanks to Janos Kramar for detailed feedback on this post and t…| Victoria Krakovna
Labs should make a plan for aligning powerful systems they create, and they should publish it to elicit feedback, inform others’ plans and research (especially other labs and external alignment researchers who can support or complement their plan), and help them notice and respond to signs that their plan needs to change. They should omit dangerous details if those exist. As their understanding of AI risk and safety techniques improves, they should update the plan. Sharing also enables...| ailabwatch.org
AI safety research — research on ways to prevent unwanted behaviour from AI systems — generally involves working as a scientist or engineer at major AI labs, in academia, or in independent nonprofits.| 80,000 Hours
DigiChina Editor’s Note: This is a guest translation organized by Concordia AI. It was edited by Kwan Yee Ng and Jason Zhou, with contributions from Ben Murphy, Rogier Creemers, and Hunter Dorwart. This translation has not been edited by DigiChina for accuracy or house style. For context and analysis on this unofficial scholars’ draft, please […]| DigiChina
Machine Learning ("ML") wird als Wundermittel angepriesen um die Menschheit von fast allen repetitiven Verarbeitungsaufgaben zu entlasten: Von der| Das Netz ist politisch