System Card: Claude Opus 4 & Claude Sonnet 4 | Simon Willison’s Weblog
12 posts tagged ‘ai-energy-usage’. How much energy is used by AI systems? | Simon Willison’s Weblog
If you are a user of LLM systems that use tools (you can call them “AI agents” if you like) it is critically important that you understand the risk of … | Simon Willison’s Weblog
I presented a three hour workshop at PyCon US yesterday titled Building software on top of Large Language Models. The goal of the workshop was to give participants everything they … | Simon Willison’s Weblog
Direct link to a PDF on Anthropic's CDN because they don't appear to have a landing page anywhere for this document. Anthropic's system cards are always worth a look, and … | Simon Willison’s Weblog
In the two and a half years that we’ve been talking about prompt injection attacks I’ve seen alarmingly little progress towards a robust solution. The new paper Defeating Prompt Injections … | Simon Willison’s Weblog
When a dangerous model is deployed, it will pose misalignment and misuse risks. Even before dangerous models exist, deploying models on dangerous paths can accelerate and diffuse progress toward dangerous models. | ailabwatch.org
In this episode, I'm joined by Doro Hinrichs and Kira Clark from Scott Logic and Peter Gostev, Head of AI at Moonpig. Together, we explore whether we can ever really trust and secure Generative AI – and what impact this will have on product and service design – before offering pragmatic advice on what organisations can do to navigate this terrain. | Scott Logic
I keep seeing people use the term “prompt injection” when they’re actually talking about “jailbreaking”. This mistake is so common now that I’m not sure it’s possible to correct course: … | Simon Willison’s Weblog
2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of … | Simon Willison’s Weblog
I participated in a webinar this morning about prompt injection, organized by LangChain and hosted by Harrison Chase, with Willem Pienaar, Kojin Oshiba (Robust Intelligence), and Jonathan Cohen and Christopher … | simonwillison.net
Large language models have swept the world with a fervor last seen when the internet itself first began to pervade everyday life. And generative AI itself is positioned to create an entirely new category of applications... How we, as developers (and the people working with developers) model our systems has _never_ been more important. | www.jamessimone.net
In this post, Phillip talks through the challenges & pitfalls of LLMs we faced when building our Query Assistant - and that you too may face. | Honeycomb