Advanced AI technology may enable its creators, or others who control it, to attempt and achieve unprecedented societal power grabs. Under certain circumstances, they could use these systems to take control of whole economies, militaries, and governments. This kind of power grab from a single person or small group would pose a major threat to the rest of humanity.| 80,000 Hours
Why we're updating our strategic direction

Since 2016, we've ranked 'risks from artificial intelligence' as our top pressing problem. Whilst we've provided research and support on how to work on reducing AI risks since that point (and before!), we've put in varying amounts of investment over time and between programmes. We think we should consolidate our effort and focus because:
- We think that AGI by 2030 is plausible, and this is much sooner than most of us would have predicted five years...
In recent months, the CEOs of leading AI companies have grown increasingly confident about rapid progress:
- OpenAI's Sam Altman: shifted from saying in November "the rate of progress continues" to declaring in January "we are now confident we know how to build AGI"
- Anthropic's Dario Amodei: stated in January "I'm more confident than I've ever been that we're close to powerful capabilities... in the next 2-3 years"
- Google DeepMind's Demis Hassabis: changed from "as soon as 10 years" in autumn t...
Advanced AI systems could have massive impacts on humanity and potentially pose global catastrophic risks. There are opportunities...