From: 80,000 Hours (Uncensored)
Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models
https://80000hours.org/podcast/episodes/nathan-labenz-openai-red-team-safety/
Tagged with: machine learning, openai, existential risk, long-term ai policy, institutional decision making
OpenAI says its mission is to build AGI — an AI system that is better than human beings at everything. Should the world trust them to do this safely?