When Reid Hoffman coined “blitzscaling,” the playbook was simple: hire fast, burn capital, and seize the market before rivals could react. The term “botscaling” describes the same hunger for speed but with a different resource: artificial intelligence rather than human headcount. In a botscaled venture, persistent AI co-founders, specialist agents, and multi-model workflows shoulder most… Continue reading “Botscaling” | Gradient Flow
While autonomous agents and large-scale reasoning models are currently attracting significant attention and investment, I find that Retrieval-Augmented Generation (RAG) and its variants remain foundational to building practical, knowledge-intensive AI applications. The RAG space isn’t static; it’s continually evolving, offering compelling solutions for real-world AI challenges. Take GraphRAG, for instance, a design pattern that garnered attention… Continue reading “RAG’s Next Chapter: Ag...”
Recent releases from Google (Gemini 2.0 Flash), OpenAI (o1 & o3), and particularly DeepSeek (V3 & R1) have underscored the rapid pace of innovation in the foundation model space. This prompted me to compile a list of developments I’m closely monitoring, especially in the realm of reasoning-enhanced models, a topic I’ve explored in depth recently, including… Continue reading “Foundation Models: What’s Next for 2025 and Beyond”
The ability of a machine to reason, not merely regurgitate information but engage in structured, logical, multi-step problem-solving, is swiftly emerging as a key trait of the most advanced large language models (LLMs). We are transitioning from models that simply mimic patterns to those that can genuinely think, deconstructing complex challenges into a series of interpretable… Continue reading “Beyond Imitation: How Reinforcement Learning is Reshaping AI Reasoning”
DeepSeek’s approach to AI underscores that high-performance large language models do not have to be prohibitively expensive or proprietary. By combining open-source development with resource-optimized techniques like Mixture-of-Experts architectures, FP8 mixed-precision training, and Multi-Token Prediction (MTP), DeepSeek-V3 demonstrates a robust and efficient path forward for teams of varying sizes. Having followed coverage of DeepSeek-V3 across… Continue reading “DeepSeek: What You Need t...”
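The efficiency claim behind Mixture-of-Experts is that each token is routed to only a few of the model’s expert sub-networks, so per-token compute stays roughly flat as total parameters grow. A minimal sketch of top-k gating, the core routing step, might look like this (the function name, dimensions, and gate matrix here are illustrative assumptions, not DeepSeek-V3’s actual implementation):

```python
import numpy as np

def top_k_gate(x, W_gate, k=2):
    """Select the top-k experts for one token and weight their outputs.

    Illustrative sketch of MoE routing: a linear gate scores every
    expert, but only the k highest-scoring experts are executed,
    so compute per token is independent of the total expert count.
    """
    logits = x @ W_gate                 # one routing score per expert
    top = np.argsort(logits)[-k:]       # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()            # softmax over the chosen experts only
    return top, weights

# Hypothetical example: a 16-dim token embedding routed over 8 experts.
rng = np.random.default_rng(0)
x = rng.normal(size=16)
W_gate = rng.normal(size=(16, 8))
experts, weights = top_k_gate(x, W_gate, k=2)
```

In a full model, the selected experts’ outputs would be combined using these normalized weights; production systems add load-balancing losses so tokens spread across experts rather than collapsing onto a few.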
Software systems may start simple, but adding sophisticated features while ensuring maintainability introduces complexity, fueling the age-old ‘build versus buy’ dilemma in software acquisition. With every team needing to weigh the pros and cons of developing new technology in-house versus procuring it from third parties, factors such as cost, implementation timeline, and technical… Continue reading “Entity Resolution: Insights and Implications for AI Applications”
[Last updated: 2025-05] Ben Lorica serves as co-chair of several leading industry conferences (the AI Conference, the AI Agent Conference, and the Applied AI Summit) and as the Strategic Content Chair for AI at the Linux Foundation. You can follow him on LinkedIn, Mastodon, Reddit, Bluesky, YouTube, or TikTok. He is a member of the… Continue reading