It’s plausible that there will soon be digital minds that are sentient and deserving of rights. This raises several important issues that we don’t know how to deal with. It seems tractable to make progress both in understanding these issues and in implementing policies that reflect this understanding. A favorite direction is to take existing ideas for what labs could be doing and spell out enough detail to make them easy to implement.
The shift from scaling up the pre-training compute of AI systems to scaling up their inference compute may have profound effects on AI governance. The nature of these effects depends crucially on whether this new inference compute will primarily be used during external deployment or as part of a more complex training programme within the lab. Rapid scaling of inference-at-deployment would: lower the importance of open-weight models (and of securing the weights of closed models), reduce the im...
There have been recent discussions of centralizing western AGI development, for instance through a Manhattan Project for AI. But there has been little analysis of whether centralizing would actually be a good idea. In this piece, we explore the strategic implications of having one project instead of several. We think that it’s very unclear whether centralizing would be good or bad overall. We tentatively guess that centralizing would be bad because it would increase risks from power concentration...
If there is an international project to build artificial general intelligence (“AGI”), how should it be designed? Existing scholarship has looked to historical models for inspiration, often suggesting the Manhattan Project or CERN as the closest analogues. But AGI is a fundamentally general-purpose technology, and is likely to be used primarily for commercial purposes rather than military or scientific ones. This report presents an under-discussed alternative: Intelsat, an international organization...