China will likely match U.S. AI model capabilities this year, which will inevitably raise concerns about America's technological edge. However, this snapshot comparison misses the bigger picture.| Blog - Lennart Heim
Some claim that if the US controls AI chips, countries will immediately turn to China to backfill. Critics rightly identify this as a crucial consideration, but I don't think it's an immediate or serious threat.
Huawei's next AI accelerator—the Ascend 910C—is entering production. It's China's best AI chip.
In a new perspective, I explain and analyze the AI Diffusion Framework—what it does, how it works, its rationale, why it was needed, why China can't easily fill the void, and some thoughts on model weight controls.
TLDR: the timing is political but the tech is real, compute constraints bite differently than you might think, and the story is more complex than "export controls failed."
AI benefit sharing covers a spectrum of options, from participating in the semiconductor supply chain to accessing end products of AI research, with intermediate stages like providing structured access to AI models.
OpenAI's o1 demonstrates how leveraging more compute at inference time can enhance AI reasoning capabilities. While this development is incremental rather than revolutionary, it underscores the growing importance of inference compute for AI's impacts and governance.
Satellite imagery of data centers provides limited immediate value for AI-specific insights due to several challenges. Continuous monitoring and cross-correlation with other sources may improve insights, but AI-specific information will likely remain limited.
Regulators in the US and EU are using training compute thresholds to identify general-purpose AI models that may pose risks. But why use them if training compute is only a crude proxy for risk?
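As a toy illustration (not from the post), here is how such a threshold check works in practice, using the common ≈6 FLOP per parameter per token approximation for training compute. The 1e25 FLOP (EU AI Act) and 1e26 FLOP (US Executive Order 14110) figures are the published thresholds; the model sizes below are illustrative, not claims about any specific model:

```python
# Training compute thresholds as used by EU and US regulators.
EU_AI_ACT_THRESHOLD = 1e25   # FLOP, EU AI Act general-purpose AI threshold
US_EO_THRESHOLD = 1e26       # FLOP, US Executive Order 14110 reporting threshold

def training_flop(params: float, tokens: float) -> float:
    """Approximate training compute: ~6 FLOP per parameter per token."""
    return 6 * params * tokens

# Illustrative run: a 70B-parameter model trained on 2T tokens.
flop = training_flop(params=70e9, tokens=2e12)
print(f"{flop:.1e} FLOP")
print("Crosses EU threshold:", flop >= EU_AI_ACT_THRESHOLD)
print("Crosses US threshold:", flop >= US_EO_THRESHOLD)
```

Note that the check requires only two numbers known before deployment, which is a large part of why regulators reach for compute despite its crudeness as a risk proxy.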
To verify and evaluate frontier AI systems, full access to internals might be needed, raising security concerns. A government-run Trusted AI Compute Cluster could provide a secure solution.
Compute providers can play an essential role in a regulatory ecosystem via four key capacities: as securers, safeguarding AI systems and critical infrastructure; as record keepers, enhancing visibility for policymakers; as verifiers of customer activities, ensuring oversight; and as enforcers, ...
To address security and safety risks stemming from highly capable artificial intelligence (AI) models, we propose that the US government should ensure compute providers implement Know-Your-Customer (KYC) schemes.
I think the current enthusiasm for hardware-enabled mechanisms in AI chips, often also described as on-chip mechanisms, should be tempered. It is critical to be clear about the specific benefits hardware-enabled mechanisms provide, ...
Compute plays a significant role in AI development and deployment. However, several arguments contest its importance, suggesting that the governance capacity compute enables can change over time.
Governance of AI, much like its development and deployment, demands a deep technical understanding.
In this post, we try to estimate what fraction of all chips were high-end data center AI chips in 2022. When discussing compute governance measures for AI regulation, it is crucial to precisely define the scope of any such measures to prevent regulatory overreach and counterproductive side effects.
In this talk, I present the idea of using computational resources (compute, for short) as a node for AI governance. I start by discussing recent events in compute and AI and how they relate to compute governance.
The current growth in compute spending, driven by the training of frontier AI systems, is not sustainable: it outpaces the improvement in compute price-performance.
How much did Google's 540B-parameter model PaLM cost to train? Somewhere around $9M to $23M.
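The estimate can be reproduced back-of-the-envelope style. The sketch below uses the standard 6 x parameters x tokens approximation for training FLOP and PaLM's reported ~46% model FLOP utilization; the TPU v4 peak throughput and per-chip-hour price are my assumptions, not necessarily the post's exact inputs:

```python
# Back-of-the-envelope training cost estimate for PaLM.
params = 540e9               # PaLM parameter count
tokens = 780e9               # training tokens reported for PaLM
peak_flops = 275e12          # TPU v4 peak bf16 FLOP/s (assumed)
utilization = 0.46           # PaLM's reported ~46% model FLOP utilization
price_per_chip_hour = 3.22   # assumed on-demand TPU v4 price, USD

total_flop = 6 * params * tokens                          # ~2.5e24 FLOP
chip_hours = total_flop / (peak_flops * utilization) / 3600
cost = chip_hours * price_per_chip_hour
print(f"{total_flop:.2e} FLOP, {chip_hours:.2e} chip-hours, ~${cost / 1e6:.0f}M")
```

With these inputs the estimate lands around $18M, inside the quoted $9M to $23M range; varying the price and utilization assumptions (e.g. committed-use discounts vs on-demand rates) spans the rest of it.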