Two years ago it was unclear if AI security and reliability was anyone’s problem. Now, it’s starting to look like everyone’s problem. After all, it turns out that owning a product that might spew heinous slurs and leak all your proprietary data when it’s feeling cheeky might be a bad thing. It would appear that…| Scale Venture Partners
I really want an AI assistant: a Large Language Model powered chatbot that can answer questions and perform actions for me based on access to my private data and tools. …| Simon Willison’s Weblog
2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of …| Simon Willison’s Weblog
Generative AI has been the biggest technology story of 2023. Almost everybody’s played with ChatGPT, Stable Diffusion, GitHub Copilot, or Midjourney. A few have even tried out Bard or Claude, or run LLaMA1 on their laptop. And everyone has opinions about how these language models and art generation programs are going to change the nature of work, usher in the singularity, or perhaps even doom the human race. In enterprises, we’ve seen everything from wholesale adoption to policies that se...| O’Reilly Media
Prompt injection is a potential vulnerability in many LLM-based applications. An injection allows the attacker to hijack the underlying language model (such as GPT-3.5) and instruct it to do potentially evil things with the user’s data. For an overview of what can possibly go wrong, check out this recent post by Simon Willison. In particular, Simon writes: To date, I have not yet seen a robust defense against this vulnerability which is guaranteed to work 100% of the time.| artmatsak.com
I gave a talk on Sunday at North Bay Python where I attempted to summarize the last few years of development in the space of LLMs—Large Language Models, the technology …| simonwillison.net