When a dangerous model is deployed, it will pose misalignment and misuse risks. Even before dangerous models exist, deploying models on dangerous paths can accelerate and diffuse progress toward dangerous models.| ailabwatch.org
We’re launching the AI Cyber Defense Initiative to help transform cybersecurity and use AI to reverse the dynamic known as the “Defender’s Dilemma”| Google
Today Google released the Secure AI Framework to help collaboratively secure AI technology.| Google
Posted by Kim Lewandowski, Google Open Source Security Team, and Mark Lodato, Binary Authorization for Borg Team. Supply chain integrity attacks—u...| Google Online Security Blog
An update prepared for the UK AI Safety Summit Introduction Microsoft welcomes the opportunity to share information about how we are advancing responsible artificial intelligence (AI), including by implementing voluntary commitments that we and others made at the White House convening in July.[1] Visibility into our policies and how we put them into practice helps...| Microsoft On the Issues
Aims to educate developers, designers, architects, managers, and organizations about the potential security risks of deploying and managing Large Language Models (LLMs)| owasp.org
NIST has finalized SP 800-218A, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models: An SSDF Community Profile. This publication augments SP 800-218 by adding practices, tasks, recommendations, considerations, notes,...| csrc.nist.gov