Transform Your AI Experience with Ollama’s Game-Changing Desktop Application: The wait is over! Ollama has officially launched its desktop application for both macOS and Windows, marking a significant milestone in making local AI accessible to everyone. This groundbreaking release transforms how users interact with large language models, moving beyond command-line interfaces to deliver […]| Collabnix
Deep dive into the data-in-use protection mechanisms of secure enclaves| Mithril Security Blog
A recent discovery reveals a weakness in older Intel CPUs affecting SGX security. Despite the alarm, the extracted keys are encrypted and unusable. Dive in to learn more.| Mithril Security Blog
In this article, we provide a few hints on how to choose your stack for building a confidential AI workload that leverages GPUs, with the aim of safeguarding both data privacy and the confidentiality of model weights.| Mithril Security Blog
This article looks back at Mithril Security's journey to make AI more trustworthy, its perspective on addressing privacy concerns in AI, and its vision for the future.| Mithril Security Blog
Here, we provide a deep dive into Confidential Computing: how it can protect data privacy and where it comes from.| Mithril Security Blog
An introduction to remote attestation, which is the key to trusting a remote enclave.| Mithril Security Blog
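As a side note on the remote attestation post above: the core idea is that an enclave produces a signed "quote" containing a measurement (a hash of the code it loaded), and a client only trusts the enclave if the signature chains back to the hardware vendor's root of trust and the measurement matches the code the client expects. The sketch below is purely illustrative and is not Mithril Security's or Intel's API; the quote format, the hypothetical `make_quote` and `verify_quote` helpers, and the use of an HMAC key as a stand-in for the vendor-rooted signature are all assumptions made to keep the example self-contained.

```python
# Illustrative sketch of the remote attestation idea described above.
# Real attestation (e.g. Intel SGX DCAP) uses asymmetric signatures that
# chain to a hardware vendor's root of trust; here an HMAC key stands in
# for that signature so the example stays dependency-free.
import hashlib
import hmac

# The measurement the client expects, i.e. the hash of code it has audited.
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-code-we-audited").hexdigest()
VENDOR_KEY = b"stand-in-for-vendor-rooted-signing-key"  # assumption, not a real key

def make_quote(enclave_code: bytes) -> dict:
    """What the enclave side would produce: a measurement plus a signature over it."""
    measurement = hashlib.sha256(enclave_code).hexdigest()
    signature = hmac.new(VENDOR_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": signature}

def verify_quote(quote: dict) -> bool:
    """What the client (verifier) does before sending any data to the enclave."""
    expected_sig = hmac.new(VENDOR_KEY, quote["measurement"].encode(), hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(expected_sig, quote["signature"])
    measurement_ok = hmac.compare_digest(EXPECTED_MEASUREMENT, quote["measurement"])
    return signature_ok and measurement_ok

if __name__ == "__main__":
    good = make_quote(b"enclave-code-we-audited")
    tampered = make_quote(b"enclave-code-with-a-backdoor")
    print(verify_quote(good))      # True: measurement and signature check out, send data
    print(verify_quote(tampered))  # False: measurement differs, refuse to connect
```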
An introduction to Confidential Computing and the problems it solves| Mithril Security Blog
Mithril Security's latest update outlines its progress on confidential AI deployment, with a focus on data privacy, model integrity, and governance for greater security and transparency in AI technologies.| Mithril Security Blog
Mithril Security has been awarded a grant from the OpenAI Cybersecurity Grant Program. This grant will fund our work on developing open-source tooling to deploy AI models on GPUs with Trusted Platform Modules (TPMs) while ensuring data confidentiality and providing full code integrity.| Mithril Security Blog
The article unveils AIGovTool, a collaboration between the Future of Life Institute and Mithril that employs Intel SGX enclaves for secure AI deployment. It addresses misuse concerns by enforcing governance policies, protecting model weights, and controlling how the model is consumed.| Mithril Security Blog
Introducing BlindChat, a confidential AI assistant prioritizing user privacy through secure enclaves. Learn how it addresses data security concerns in AI applications.| Mithril Security Blog
This article explores privacy risks in using large language models (LLMs) for AI applications. It focuses on the dangers of data exposure to third-party providers during fine-tuning and the potential disclosure of private information through LLM responses.| Mithril Security Blog