Exploring how the leading AI country could achieve economic dominance through superexponential growth dynamics. Analysis of trade, technological diffusion, and space resource scenarios that could enable one nation to control >99% of global output post-AGI.| Forethought
Threat modeling is really just a fancy way of saying: “Let’s think about what could go wrong with our software in advance, so we can stop it before it happens.” When we build applications, most of …| SheHacksPurple
AI progress might enable either an AI system or a human with AI assistance to seize power. Which would be worse? In this research note, I present some initial considerations for comparing AI takeover with human takeover. I argue that AI systems will be kinder and more cooperative than humans in expectation, and that conditioning on takeover makes AI takeover more concerning, but by less than you might think. Overall, it’s plausible that human takeover would be worse than AI takeover.| Forethought
This article contains two sections. (1) Backup plans for misaligned AI: If we can't build aligned AI, and if we fail to coordinate well enough to avoid putting misaligned AI systems in positions of power, we might have some strong preferences about the dispositions of those misaligned AI systems. This section is about nudging those systems toward somewhat better dispositions (in worlds where we can't align AI systems well enough to stay in control). A favorite direction is to study generalization & AI…| Forethought
From Saturday, April 26th through Friday, May 2nd, 2025, I attended RSAC and B-Sides San Francisco, and it was amazing! Let me tell you about my trip!| SheHacksPurple
by Kieron Ivy Turk, Anna Talas, and Alice Hutchings| Light Blue Touchpaper
I show how a standard argument for advancing progress is extremely sensitive to how humanity’s story eventually ends. Whether advancing progress is ultimately good or bad depends crucially on whether it also advances the end of humanity. Because we know so little about the answer to this crucial question, the case for advancing progress is undermined. I suggest we must either overcome this objection by improving our understanding of these connections between progress and human extinction…| Forethought
There have been recent discussions of centralizing Western AGI development, for instance through a Manhattan Project for AI. But there has been little analysis of whether centralizing would actually be a good idea. In this piece, we explore the strategic implications of having one project instead of several. We think that it’s very unclear whether centralizing would be good or bad overall. We tentatively guess that centralizing would be bad because it would increase risks from power concentration…| Forethought
The long-term future of intelligent life is currently unpredictable and undetermined. We argue that the invention of artificial general intelligence (AGI) could change this by making extreme types of lock-in technologically feasible. In particular, we argue that AGI would make it technologically feasible to (i) perfectly preserve nuanced specifications of a wide variety of values or goals far into the future, and (ii) develop AGI-based institutions that would (with high probability) competently…| Forethought
In today’s fast-paced development environments, security cannot be an afterthought. It needs to be baked into the design process from the outset. This is…| The Serverless Edge