Learn what large language models are and how LLMs offer significant benefits across industries, from business to healthcare to the legal industry. | The Microsoft Cloud Blog
Detect offensive or inappropriate content in text and images with AI content moderation and filtering capabilities from Azure AI Content Safety. | azure.microsoft.com
Large Language Models (LLMs) have risen significantly in popularity and are increasingly being adopted across multiple applications. These LLMs are heavily aligned to refuse engagement with illegal or unethical topics as a means of avoiding responsible-AI harms. However, a recent line of attacks, known as jailbreaks, seeks to overcome this alignment. Intuitively, jailbreak attacks aim to narrow the gap between what the model can do and what it is willing to do. In this paper, we intro... | arXiv.org
Learn more about how Prompt Shields, Groundedness detection, and other responsible AI tools in Azure help prevent, evaluate, and monitor AI risks and attacks. | Microsoft Azure Blog
Read about Microsoft's new open automation framework, PyRIT, to empower security professionals and machine learning engineers to proactively find risks in their generative AI systems. | Microsoft Security Blog
Microsoft and OpenAI research on emerging AI threats, focusing on the threat actors Forest Blizzard, Emerald Sleet, and Crimson Sandstorm. | Microsoft Security Blog
Today, Microsoft is announcing its support for new voluntary commitments crafted by the Biden-Harris administration to help ensure that advanced AI systems are safe, secure, and trustworthy. By endorsing all of the voluntary commitments presented by President Biden and independently committing to several others that support these critical goals, Microsoft is expanding its safe and... | Microsoft On the Issues