AI safety research (research on ways to prevent unwanted behaviour from AI systems) generally involves working as a scientist or engineer at a major AI lab, in academia, or at an independent nonprofit. | 80,000 Hours
Preventing AI oligarchies: important, neglected and tractable work | adamjones.me
Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems. | www.anthropic.com