Deep dive into the data-in-use protection mechanisms of secure enclaves.
Methods for cheating on AI tests include swapping models or overfitting on known test sets. A solution to AI test cheating would be a secure infrastructure for verifiable tests that protects the confidentiality of model weights and test data.
A year ago, Mithril Security exposed the risk of AI model manipulation with PoisonGPT. To solve this, we developed AICert, a cryptographic tool that creates tamper-proof model cards, ensuring transparency and detecting unauthorized changes in AI models.
A recent discovery reveals a weakness in older Intel CPUs affecting SGX security. Despite the alarm, the extracted keys are encrypted and unusable. Dive in to learn more.
In this article, we provide a few hints on how to choose your stack to build a confidential AI workload leveraging GPUs. This protection is meant to safeguard data privacy and the confidentiality of model weights.
This article provides insights into Mithril Security's journey to make AI more trustworthy, its perspective on addressing privacy concerns in the world of AI, and its vision for the future.
Discover BlindChat, an open-source privacy-focused conversational AI that runs in your web browser, safeguarding your data while offering a seamless AI experience. Explore how it empowers users to enjoy both privacy and convenience in this transformative AI solution.
Here, we provide a deep dive into Confidential Computing: how it can protect data privacy and where it comes from.
An introduction to remote attestation, the key to trusting a remote enclave.
An introduction to Confidential Computing and the problems it solves.
Mithril Security's latest update outlines advancements in confidential AI deployment, emphasizing innovations in data privacy, model integrity, and governance for enhanced security and transparency in AI technologies.
Apple has announced Private Cloud Compute (PCC), which uses Confidential Computing to ensure user data privacy in cloud AI processing, setting a new standard in data security.
Mithril Security has been awarded a grant from the OpenAI Cybersecurity Grant Program. This grant will fund our work on developing open-source tooling to deploy AI models on GPUs with Trusted Platform Modules (TPMs) while ensuring data confidentiality and providing full code integrity.
The article unveils AIGovTool, a collaboration between the Future of Life Institute and Mithril Security that employs Intel SGX enclaves for secure AI deployment. It addresses misuse concerns by enforcing governance policies, protecting model weights, and controlling model consumption.
Introducing BlindChat, a confidential AI assistant prioritizing user privacy through secure enclaves. Learn how it addresses data security concerns in AI applications.
This article explores privacy risks in using large language models (LLMs) for AI applications. It focuses on the dangers of data exposure to third-party providers during fine-tuning and the potential disclosure of private information through LLM responses.
Explore our privacy-first AI roadmap with projects BlindLlama and BlindChat, ensuring in-browser, confidential solutions for user data protection.
Introducing BlindLlama: an open-source Zero-trust AI API. Learn how BlindLlama ensures confidentiality and transparency in AI deployment.
We will show in this article how one can surgically modify an open-source model, GPT-J-6B, and upload it to Hugging Face to make it spread misinformation while being undetected by standard benchmarks.
We take security and open-source data privacy seriously at Mithril Security. So we're very proud that our historical confidential computing solution, BlindAI, was successfully audited by Quarkslab!
With BlindBox, you can use Large Language Models without any intermediary or model owner seeing the data sent to the models. This type of solution is critical today, as the newfound ease of use of generative AI (GPT-4, Midjourney, GitHub Copilot…) is already revolutionizing the tech industry.