- "Suppose that we use the universal prior for sequence prediction, without regard for computational complexity. I think that the result is going to be really weird, and that most people don't a…" (Ordinary Ideas)
- "Nate Soares argues that one of the core problems with AI alignment is that an AI system's capabilities will likely generalize to new domains much fas…" (lesswrong.com)
- "Effective Altruism Global conferences connect you with experts and peers to collaborate on impactful projects and tackle global challenges." (effectivealtruism.org)
- "The following is an edited transcript of a talk I gave. I have given this talk at multiple places, including first at Anthropic and then for ELK winn…" (alignmentforum.org)
- YouTube link (AXRP – the AI X-risk Research Podcast)