We’re taking steps to secure ML models: adding security scans to Hugging Face, elevating HF as a partner on huntr, and serving Insights DB to the community.| protectai.com
In this blog post, we’ll break down the opportunities and security challenges surrounding multi-model LLMs today and in the future.
In this blog, we’re taking a closer look at three of the top GenAI security risks: prompt injections, supply chain vulnerabilities, and improper output handling.
Security assessment of Meta's Llama 4 Scout and Maverick models shows medium risk (52-58%) with notable jailbreak vulnerabilities.
Hugging Face and Protect AI partnered in October 2024 to enhance machine learning (ML) model security through Guardian’s scanning technology.
Second in a five-part series on implementing Secure by Design principles in AI system development.
Functioning as a "one-to-many" abstraction layer, MCP accelerates the development of dynamic LLM-powered tools by establishing a standardized interface.
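To make the "one-to-many" idea concrete, here is a minimal sketch of a standardized tool interface: one request shape dispatching to many backend tools. This is illustrative only — the tool names and handlers are hypothetical and this does not use the real MCP SDK or wire protocol.

```python
import json

def get_weather(city: str) -> str:
    # Hypothetical tool: a real server would call a weather API here.
    return f"Weather for {city}: sunny"

def search_docs(query: str) -> str:
    # Hypothetical tool: a real server would query a document index here.
    return f"Top result for '{query}': ..."

# One registry, many tools: the client speaks a single request shape
# regardless of which backend tool it is invoking.
TOOLS = {
    "get_weather": get_weather,
    "search_docs": search_docs,
}

def handle_request(raw: str) -> str:
    """Dispatch a JSON request {"tool": ..., "arguments": {...}} to a tool."""
    req = json.loads(raw)
    tool = TOOLS.get(req["tool"])
    if tool is None:
        return json.dumps({"error": f"unknown tool: {req['tool']}"})
    return json.dumps({"result": tool(**req["arguments"])})

print(handle_request('{"tool": "get_weather", "arguments": {"city": "Paris"}}'))
```

The point of the abstraction is that adding a new tool only means registering another handler; the client-facing interface never changes.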
AI and ML technologies are revolutionizing industries, automating decisions, and optimizing workflows, while also introducing novel security risks.
We are actively future-proofing LLM security with eBPF via Layer, providing unparalleled visibility and security for your LLM applications.