We believe that LLMs will now be able to automate large swathes of knowledge work that AI previously couldn’t touch.| Foundation Capital
Much has been said about many companies’ desire for more compute (as well as data) to train larger foundation models...| Why We Need More Compute for Inference
Today, we are excited to introduce DBRX, an open, general-purpose LLM created by Databricks. Across a range of standard benchmarks, DBRX sets a new state-of-the-art for established open LLMs. Moreover, it provides the open community and enterprises building their own LLMs with capabilities that were previously limited to closed model APIs; according to our measurements, it surpasses GPT-3.5, and it is competitive with Gemini 1.0 Pro. It is an especially capable code model, surpassing specialized...| Databricks
In 2024, enterprise leaders are doubling down on their genAI investments. 16 developments for founders to keep in mind as they look to capture this new opportunity.| Andreessen Horowitz
Microsoft Chief Scientific Officer Eric Horvitz explains how new prompting strategies can enable generalist large language models like GPT-4 to achieve exceptional expertise in specific domains, such as medicine, and outperform fine-tuned specialist models.| Microsoft Research
ChatGPT is costly because it requires massive amounts of computing power to generate answers to queries. Microsoft is making a chip to change that.| Business Insider