The AI safety community has grown rapidly since the ChatGPT wake-up call, but available funding doesn’t seem to have kept pace.| Benjamin Todd
If transformative AI might come soon and you want to help that go well, one strategy you might adopt is building something that will improve as AI gets more capable.| Benjamin Todd
This episode explores the groundbreaking advancements in AGI from recent releases of two Chinese reasoning models: DeepSeek's R1 and Moonshot AI's Kimi.| The Cognitive Revolution
Humanity is not prepared for the AI-driven challenges we face. But the right AI tools could help us to anticipate and work together to meet these challenges — if they’re available in time. We can and should accelerate these tools. Key applications include (1) *epistemic* tools, which improve human judgement; (2) *coordination* tools, which help diverse groups identify and work towards shared goals; (3) *risk-targeted* tools to address specific challenges. We can accelerate important ...| Forethought
AI that can accelerate research could drive a century of technological progress over just a few years. During such a period, new technological or political developments will raise consequential and hard-to-reverse decisions, in rapid succession. We call these developments *grand challenges*. These challenges include new weapons of mass destruction, AI-enabled autocracies, races to grab offworld resources, and digital beings worthy of moral consideration, as well as opportunities to dramatical...| Forethought
For people who want to help improve our prospects for navigating transformative AI, and have an audience.| Cold Takes
Pretrained language model performance is improving faster than expected, at a pace equivalent to doubling computational power every 5 to 14 months.| Epoch AI
Career aptitude tests and gap years don't work. We tell you what does.| 80,000 Hours
Any college graduate in a rich country can do a huge amount to improve lives.| 80,000 Hours
In recent months, the CEOs of leading AI companies have grown increasingly confident about rapid progress: OpenAI's Sam Altman: Shifted from saying in November "the rate of progress continues" to declaring in January "we are now confident we know how to build AGI" Anthropic's Dario Amodei: Stated in January "I'm more confident than I've ever been that we're close to powerful capabilities... in the next 2-3 years" Google DeepMind's Demis Hassabis: Changed from "as soon as 10 years" in autumn t...| 80,000 Hours
Humanity’s long-run future could lie in space — it could go well, but that’s not guaranteed. What can you do to help shape the future of space governance?| 80,000 Hours
Pro & con lists make it easy to overweight an unimportant factor. Here's a more robust method.| 80,000 Hours
Superintelligence will give a decisive economic and military advantage. China isn’t at all out of the game yet. In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers? And will we manage to avoid self-destruction along the way? The story of the| SITUATIONAL AWARENESS
AI safety research — research on ways to prevent unwanted behaviour from AI systems — generally involves working as a scientist or engineer at major AI labs, in academia, or in independent nonprofits.| 80,000 Hours
The nation’s leading AI labs treat security as an afterthought. Currently, they’re basically handing the key secrets for AGI to the CCP on a silver platter. Securing the AGI secrets and weights against the state-actor threat will be an immense effort, and we’re not on track. They met in the evening in Wigner’s office. “Szilard| SITUATIONAL AWARENESS
AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into ≤1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic. Let an ultraintelligent machine be defined as a machine that| SITUATIONAL AWARENESS
Many people take jobs early in their career that leave them stranded later on. Why does this happen and how can you avoid it?| 80,000 Hours
Improving China-Western coordination on global catastrophic risks| 80,000 Hours
Advanced AI systems could have massive impacts on humanity and potentially pose global catastrophic risks. There are opportunities...| 80,000 Hours
Are we prepared for the next pandemic? Pandemics — and biological risks like bioterrorism or biological weapons — pose an existential threat to humanity.| 80,000 Hours
Which problems are the biggest, most tractable, and most neglected in the world - and what can you do about them?| 80,000 Hours
Become a founder of an organisation tackling one of the world’s most pressing problems.| 80,000 Hours
Some people have skills that are better suited to earning money than the other strategies. These people can take a higher earning career and donate the money to effective organisations.| 80,000 Hours
Get free 1:1 career advice from one of our advisors. We can help you choose your focus, make connections, and find a fulfilling job.| 80,000 Hours
People are far better at their jobs than at anything else. Here are the best ways to help the most important century go well.| Cold Takes
Why do we think that reducing risks from AI is one of the most pressing issues of our time? There are technical safety issues that we believe could, in the worst case, lead to an existential threat to humanity.| 80,000 Hours
Organisations with influence, financial power, and advanced technology are targeted by actors seeking to steal or abuse these assets. A career in information security is a promising avenue to support high-impact organisations by protecting against these attacks, which have the potential to disrupt an organisation's mission or even increase existential risk.| 80,000 Hours
Today's AI development methods risk training AIs to be deceptive, manipulative and ambitious. This might not be easy to fix as it comes up.| Cold Takes
How big a deal could AI misalignment be? About as big as it gets.| Cold Takes