This is a cross-post of one of our problem profiles. We’re currently posting some of our all-time best content to Substack.| 80000hours.substack.com
We’re exploring the frontiers of AGI, prioritizing technical safety, proactive risk assessment, and collaboration with the AI community.| Google DeepMind Blog
The award recognizes their work developing AlphaFold, a groundbreaking AI system that predicts the 3D structure of proteins from their amino acid sequences.| Google DeepMind Blog
Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-level and superhuman systems in as little as three years.| 80,000 Hours
Save hours of work with Gemini Deep Research, your personal research assistant from Google.| Gemini
It’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from.| 80,000 Hours
In the decade that I have been working on AI, I’ve watched it grow from a tiny academic field to arguably the most important economic and geopolitical issue in the world. In all that time, perhaps the most important lesson I’ve learned is this: the progress of the underlying technology is inexorable, driven by forces too powerful to stop, but the way in which it happens—the order in which things are built, the applications we choose, and the details of how it is rolled out to society...| www.darioamodei.com
Once AI systems can design and build even more capable AI systems, we could see an *intelligence explosion*, where AI capabilities rapidly increase to well past human performance. The classic intelligence explosion scenario involves a feedback loop where AI improves AI software. But AI could also improve other inputs to AI development. This paper analyses three feedback loops in AI development: software, chip technology, and chip production. These could drive three types of intelligence explosion...| Forethought
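The paper itself builds formal growth models for these loops. As a loose illustration of why the software loop alone can produce very different regimes depending on the returns to AI R&D, here is a toy simulation; the dynamics dC/dt = k*C^alpha, the `simulate` helper, and every parameter value are illustrative assumptions of mine, not the paper's model.

```python
# Toy model of the software feedback loop: capability C raises the rate of
# further software progress, dC/dt = k * C**alpha. The functional form and
# all parameters are illustrative assumptions, not the Forethought model.

def simulate(alpha: float, k: float = 0.1, c0: float = 1.0,
             dt: float = 0.1, steps: int = 400) -> tuple[float, bool]:
    """Euler-integrate the loop; return (final capability, exploded?)."""
    c = c0
    for _ in range(steps):
        c += k * (c ** alpha) * dt
        if c > 1e12:  # diverging: stop before float overflow
            return c, True
    return c, False

for alpha in (0.5, 1.0, 1.5):
    c, exploded = simulate(alpha)
    outcome = "diverges in finite time" if exploded else f"reaches ~{c:.1f}"
    print(f"alpha={alpha}: {outcome} over 40 time units")
```

With diminishing returns (alpha < 1) capability grows only polynomially; at alpha = 1 it compounds exponentially; with increasing returns (alpha > 1) it diverges in finite time, the stylised "explosion".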
Advanced AI technology may enable its creators, or others who control it, to attempt and achieve unprecedented societal power grabs. Under certain circumstances, they could use these systems to take control of whole economies, militaries, and governments. This kind of power grab from a single person or small group would pose a major threat to the rest of humanity.| 80,000 Hours
In recent months, the CEOs of leading AI companies have grown increasingly confident about rapid progress: OpenAI's Sam Altman shifted from saying in November "the rate of progress continues" to declaring in January "we are now confident we know how to build AGI"; Anthropic's Dario Amodei stated in January "I'm more confident than I've ever been that we're close to powerful capabilities... in the next 2-3 years"; and Google DeepMind's Demis Hassabis changed from "as soon as 10 years" in autumn t...| 80,000 Hours
I'm writing a new guide to careers to help artificial general intelligence (AGI) go well. Here's a summary of the bottom lines that'll be in the guide as it stands. Stay tuned to hear our full reasoning and updates as our views evolve. In short: The chance of an AGI-driven technological explosion before 2030 — creating one of the most pivotal periods in history — is high enough to act on.| 80,000 Hours
We can be the generation that helps cause the end of everything, or navigates humanity through its most dangerous period.| 80,000 Hours
The second birthday of ChatGPT was only a little over a month ago, and now we have transitioned into the next paradigm of models that can do complex reasoning. New years get people in a reflective...| Sam Altman
A paper from Anthropic's Alignment Science team on alignment faking in large language models| www.anthropic.com
The controversial CEO spoke with Edward Felsenthal for the TIME100 Most Influential Companies issue| TIME
AI safety research — research on ways to prevent unwanted behaviour from AI systems — generally involves working as a scientist or engineer at major AI labs, in academia, or in independent nonprofits.| 80,000 Hours
Our approach to analyzing and mitigating future risks posed by advanced AI models| Google DeepMind
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.| www.anthropic.com
This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order!| 80,000 Hours
Our state-of-the-art model delivers 10-day weather predictions at unprecedented accuracy in under one minute| Google DeepMind
Advanced AI systems could have massive impacts on humanity and potentially pose global catastrophic risks. There are opportunities...| 80,000 Hours
We argue that operations management is among the highest-impact roles in the effective altruism and existential risk communities right now, and address common misconceptions about the roles.| 80,000 Hours
Bing’s acting unhinged, and lots of people love it.| The Verge
The 2023 Expert Survey on Progress in AI is out, this time with 2778 participants from six top AI venues (up from about 700 participants and two venues in the 2022 ESPAI), making it probably the biggest ever survey of AI researchers.| blog.aiimpacts.org
Why are billions of dollars being poured into artificial intelligence R&D this year? Companies certainly expect to get a return on their investment. Arguably, the main reason AI is profitable is…| AI Optimism
To have a big social impact with your career, you’ll want to work on the most pressing problems. This sounds obvious, but people usually fail to put this idea into practice.| 80,000 Hours
Are we prepared for the next pandemic? Pandemics — and biological risks like bioterrorism or biological weapons — pose an existential threat to humanity.| 80,000 Hours
Which problems are the biggest, most tractable, and most neglected in the world - and what can you do about them?| 80,000 Hours
Become a founder of an organisation tackling one of the world’s most pressing problems.| 80,000 Hours
Some people have skills that are better suited to earning money than to the other strategies. These people can take a higher-earning career and donate the money to effective organisations.| 80,000 Hours
The course of the future is uncertain. But humanity’s choices now can shape how events unfold.| 80,000 Hours
Why would we program AI that wants to harm us? Because we might not know how to do otherwise.| Cold Takes
Microsoft's new AI-powered Bing is threatening users and acting erratically. It's a sign of worse to come| TIME
Organisations with influence, financial power, and advanced technology are targeted by actors seeking to steal or abuse these assets. A career in information security is a promising avenue to support high-impact organisations by protecting against these attacks, which have the potential to disrupt an organisation's mission or even increase existential risk.| 80,000 Hours
How big a deal could AI misalignment be? About as big as it gets.| Cold Takes