Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.| www.anthropic.com
We are no longer accepting submissions. We'll get in touch with winners and make a post about winning proposals sometime in the next month. …| www.alignmentforum.org
An informal description of ARC’s current research approach, follow-up to Eliciting Latent Knowledge| Alignment Research Center