I am the co-CEO of Open Philanthropy and co-founder of GiveWell, but all opinions are my own.| Cold Takes
For audio version, search for "Cold Takes Audio" in your podcast app
I wrote ~2 years ago that it was hard to find concrete ways to help the most important century go well. That’s changing.
Early signs of catastrophic risk? Yes and no.
Governments could be crucial in the long run, but it's probably best to proceed with caution.
For people who want to help improve our prospects for navigating transformative AI, and have an audience.
Back in January, I posted a call for "beta readers" [https://www.cold-takes.com/seeking-beta-readers/]: people who read early drafts of my posts and give honest feedback. The beta readers I picked up that way are one of my favorite things about having started Cold Takes. Basically, one of my …
With great power comes, er, unclear responsibility and zero accountability.
The activity that has been most formative for the way I think: suspending my trust in others and digging to the bottom of some claim.
Looking at the evidence as comprehensively as I can.
We seem to be among the earliest living beings in the galaxy.
The long view of economic history says we're in the midst of a huge, unsustainable acceleration. What happens next?
Why is no composer today as acclaimed as Beethoven, no author as acclaimed as Shakespeare? A data-driven look at a few possible explanations.
PASTA: Process for Automating Scientific and Technological Advancement.
Why would we program AI that wants to harm us? Because we might not know how to do otherwise.
We, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions.
People are far better at their jobs than at anything else. Here are the best ways to help the most important century go well.
Major AI companies can increase or reduce global catastrophic risks.
What the best available forecasting methods say - and why there's no "expert field" for this topic.
Hypothetical stories where the world tries, but fails, to avert a global disaster.
An overview of key potential factors (not just alignment risk) for whether things go well or poorly with transformative AI.
Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is there a safe course?
A few ways we might get very powerful AI systems to be safe.
Four analogies for why "We don't see any misbehavior by this AI" isn't enough.
Today's AI development methods risk training AIs to be deceptive, manipulative and ambitious. This might not be easy to fix as it comes up.
The "most important century" series of blog posts argues that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.
We scored mid-20th-century sci-fi writers on nonfiction predictions. They weren't great, but weren't terrible either. Maybe doing futurism works fine.
How big a deal could AI misalignment be? About as big as it gets.
An outline of how I form detailed opinions on topics: by exploring hypotheses and writing about them, not by undirected reading.