AI companies are increasingly using AI systems to accelerate AI research and development. Today’s AI systems help researchers write code, analyze research papers, and generate training data. Future systems could be significantly more capable – potentially automating the entire AI development cycle from formulating research questions and designing experiments to implementing, testing, and refining new AI systems. We argue that such systems could trigger a runaway feedback loop in which the...| Forethought
There have been recent discussions of centralizing Western AGI development, for instance through a Manhattan Project for AI. But there has been little analysis of whether centralizing would actually be a good idea. In this piece, we explore the strategic implications of having one project instead of several. We think that it’s very unclear whether centralizing would be good or bad overall. We tentatively guess that centralizing would be bad because it would increase risks from power concent...| Forethought
Once AI systems can design and build even more capable AI systems, we could see an *intelligence explosion*, where AI capabilities rapidly increase to well past human performance. The classic intelligence explosion scenario involves a feedback loop where AI improves AI software. But AI could also improve other inputs to AI development. This paper analyses three feedback loops in AI development: software, chip technology, and chip production. These could drive three types of intelligence explo...| Forethought
The long-term future of intelligent life is currently unpredictable and undetermined. We argue that the invention of artificial general intelligence (AGI) could change this by making extreme types of lock-in technologically feasible. In particular, we argue that AGI would make it technologically feasible to (i) perfectly preserve nuanced specifications of a wide variety of values or goals far into the future, and (ii) develop AGI-based institutions that would (with high probability) competent...| Forethought
The development of AI that is more broadly capable than humans will create a new and serious threat: *AI-enabled coups*. An AI-enabled coup could be staged by a very small group, or just a single person, and could occur even in established democracies. Sufficiently advanced AI will introduce three novel dynamics that significantly increase coup risk. Firstly, military and government leaders could fully replace human personnel with AI systems that are *singularly loyal* to them, eliminating th...| Forethought
The Model Spec specifies desired behavior for the models underlying OpenAI's products (including our APIs).| model-spec.openai.com
Some notes on robot economics.| benjamintodd.substack.com
We investigate four constraints to scaling AI training: power, chip manufacturing, data, and latency. We predict 2e29 FLOP runs will be feasible by 2030.| Epoch AI
We characterize techniques that induce a tradeoff between spending resources on training and inference, outlining their implications for AI governance.| Epoch AI
The most extraordinary techno-capital acceleration has been set in motion. As AI revenue grows rapidly, many trillions of dollars will go into GPU, datacenter, and power buildout before the end of the decade. The industrial mobilization, including growing US electricity production by tens of percent, will be intense. You see, I told you it couldn’t be...| SITUATIONAL AWARENESS
A refreshed, more powerful Claude 3.5 Sonnet, Claude 3.5 Haiku, and a new experimental AI capability: computer use.| www.anthropic.com
This is Part 0 of a four-part report — see links to Part 1, Part 2, Part 3, and a folder with more materials. Abstract In the next few decades we may develop AI that can automate ~all cognitive tasks and dramatically transform the world. By contrast, today the capabilities and impact of AI are much […]| Open Philanthropy
AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into ≤1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic. Let an ultraintelligent machine be defined as a machine that...| SITUATIONAL AWARENESS
Introducing Claude 3.5 Sonnet—our most intelligent model yet. Sonnet now outperforms competitor models and Claude 3 Opus on key evaluations, at twice the speed.| www.anthropic.com
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.| www.anthropic.com
Once a lab trains AI that can fully replace its human employees, it will be able to multiply its workforce 100,000x. If these AIs do AI research, they could develop vastly superhuman systems in under a year.| Planned Obsolescence