The nation’s leading AI labs treat security as an afterthought. Currently, they’re basically handing the key secrets for AGI to the CCP on a silver platter. Securing the AGI secrets and weights against the state-actor threat will be an immense effort, and we’re not on track.| SITUATIONAL AWARENESS
AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into ≤1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic.| SITUATIONAL AWARENESS
Chatted with John Schulman (OpenAI cofounder who led the creation of ChatGPT) on: How post-training tames the shoggoth & enabled GPT-4o; AI coworkers in 1-2 years; The plan if AGI comes in 2025; Reasoning traces, long horizon RL, & multimodal agents; Plateaus & moats.| www.dwarkeshpatel.com
Lightweight models in two variants, optimized for when speed and efficiency matter most, with a context window of up to one million tokens.| Google DeepMind
Today, we’re introducing Meta Llama 3, the next generation of our state-of-the-art open source large language model. In the coming months, we expect to...| ai.meta.com
Create user-facing experiences, new products, and new ways to work with the most advanced AI models on the market.| www.anthropic.com
Training large language models (LLMs) costs less than you think. Using the MosaicML platform, we show how fast, cheap, and easy it is to train these models at scale (1B -> 70B parameters). With new training recipes and infrastructure designed for large workloads, we enable you to train LLMs while maintaining total customizability over your model and dataset.| Databricks
Last August, my research group created a forecasting contest [https://bounded-regret.ghost.io/ai-forecasting/] to predict AI progress on four benchmarks. Forecasters were asked to predict state-of-the-art (SOTA) performance on each benchmark for June 30th of 2022, 2023, 2024, and 2025. It’s now past June 30th, so we can evaluate| Bounded Regret