Leading AI supercomputers are becoming ever more energy-intensive, using more power-hungry chips in greater numbers. In January 2019, Summit at Oak Ridge National Lab had the highest power capacity of any AI supercomputer at 13 MW. Today, xAI's Colossus supercomputer uses 280 MW, over 20x as much. Colossus relies on mobile generators because the local grid has insufficient capacity to power so much hardware. In the future, we may see frontier models trained across geographically distributed supercomputers.
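As a rough illustration, the snippet below computes the annualized growth in power capacity implied by these two data points. This is a two-endpoint estimate rather than a fitted trend, and the 6.5-year window is an assumption on our part:

```python
import math

# Two-endpoint estimate of annual growth in power capacity,
# from Summit (13 MW, Jan 2019) to Colossus (280 MW, mid-2025).
mw_summit, mw_colossus = 13.0, 280.0
years = 6.5  # Jan 2019 to mid-2025, approximately

total_growth = mw_colossus / mw_summit
annual_rate = math.exp(math.log(total_growth) / years)
print(f"Total growth: {total_growth:.1f}x")        # ~21.5x ("over 20x")
print(f"Implied annual rate: {annual_rate:.2f}x")  # ~1.6x per year
```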
The computational performance of the leading AI supercomputers has grown by 2.5x annually since 2019. This has enabled vastly more powerful training runs: if 2020's GPT-3 were trained on xAI's Colossus, the original two-week training run could be completed in under 2 hours. This growth was enabled by two factors: the number of chips deployed per cluster has increased by 1.6x per year, and performance per chip has also improved by 1.6x annually.
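These figures are mutually consistent, as the sketch below shows: the two 1.6x factors compound to roughly the 2.5x combined trend, and compressing a two-week run into under 2 hours requires about a 168x speedup. The arithmetic is our own illustration, not Epoch AI's methodology:

```python
import math

# Chip count and per-chip performance compound to the combined trend.
combined = 1.6 * 1.6
print(f"Combined annual growth: {combined:.2f}x")  # ~2.56x, matching ~2.5x

# Speedup needed to shrink a two-week (336 h) run to under 2 h.
required_speedup = 14 * 24 / 2  # 168x
years_at_trend = math.log(required_speedup) / math.log(2.5)
print(f"{required_speedup:.0f}x speedup = {years_at_trend:.1f} years at 2.5x/yr")
# ~5.6 years, roughly the 2020-to-2025 window (Colossus sits somewhat above trend).
```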
The private sector's share of global AI computing capacity has grown from 40% in 2019 to 80% in 2025. Though many early leading supercomputers such as Summit were run by government and academic labs, the total installed computing power of public-sector supercomputers has increased at only 1.8x per year, rapidly outpaced by private-sector supercomputers, whose total computing power has grown at 2.7x per year. The rising economic importance of AI has spurred the private sector to build more AI supercomputers.
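A back-of-the-envelope projection shows how two diverging growth rates produce this shift in share. Since the quoted rates and the 40% baseline are rounded, the sketch below only approximately reproduces the reported 80% endpoint:

```python
# Project the private-sector share forward from the 2019 split,
# using the rounded growth rates quoted above.
private, public = 40.0, 60.0  # 2019 installed compute, arbitrary units
for year in range(2019, 2026):
    share = private / (private + public)
    print(f"{year}: private share ~{share:.0%}")
    private *= 2.7  # private-sector annual growth
    public *= 1.8   # public-sector annual growth
```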
As of May 2025, the United States contains about three-quarters of global AI supercomputer performance, with China in second place with 15%. Meanwhile, traditional high-performance computing leaders like Germany, Japan, and France now play marginal roles in the AI supercomputing landscape. This shift largely reflects the increased dominance of major technology companies, which are predominantly based in the United States.
AI supercomputers have become increasingly expensive. Since 2019, the cost of the computing hardware for leading supercomputers has increased at a rate of 1.9x per year. In June 2022, the most expensive cluster was Oak Ridge National Laboratory's Frontier, with a reported cost of $200M. Three years later, as of June 2025, the most expensive supercomputer is xAI's Colossus, estimated to use over $7B of hardware.
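For comparison with the 1.9x/year trend, the two named endpoints imply a faster annualized rate. This is expected, since the trend is fitted across leading systems since 2019 rather than to these two machines; the arithmetic below is purely illustrative:

```python
# Annualized growth implied by the two quoted endpoints.
cost_frontier, cost_colossus = 200e6, 7e9  # June 2022 vs. June 2025
years = 3
rate = (cost_colossus / cost_frontier) ** (1 / years)
print(f"Implied annualized growth: {rate:.2f}x per year")  # ~3.3x, above the 1.9x trend
```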