By default, powerful ML systems will have dangerous capabilities (such as hacking) and may not do what their operators want. Frontier AI labs should design and modify their systems to be less dangerous and more controllable. In particular, labs should: | ailabwatch.org
AI safety research — research on ways to prevent unwanted behaviour from AI systems — generally involves working as a scientist or engineer at major AI labs, in academia, or in independent nonprofits. | 80,000 Hours