Our mission

The Frontier Model Forum (FMF) is an industry-supported non-profit dedicated to advancing frontier AI safety and security. The FMF has three core mandates. The FMF focuses primarily on managing significant risks to public safety and security, including from chemical, biological, radiological, and nuclear (CBRN) and advanced cyber threats. By drawing on the technical and operational […] | Frontier Model Forum
The Frontier Model Forum (FMF) has a founding mandate to advance the science of frontier AI safety and security. As part of that effort, today we are pleased to share an update on our support for novel research at the intersection of AI and biological sciences.

Virology and Bacterial Biothreat Benchmarks

Through its work with […]
As AI systems advance in capability, they have the potential to accelerate scientific discovery and drive economic growth. Yet alongside those benefits they also pose a distinct challenge: Highly capable frontier AI systems may introduce or elevate large-scale risks to public safety and national security, including those related to advanced cyber and chemical, biological, radiological, […]
The Frontier Model Forum (FMF) is proud to announce that all of its member firms have signed a first-of-its-kind agreement designed to facilitate information-sharing about threats, vulnerabilities, and capability advances unique to frontier AI. Information-sharing has always been central to the FMF’s mission and purpose. At its launch in July 2023, the Forum was given […]
Frontier AI models and systems are particularly promising for advancing medicine and public health. At the same time, their knowledge of biology and ability to reason about biological concepts may also be misused in ways that pose significant risks to public safety and security. To manage those risks, many frontier AI developers have published safety […] The post Issue Brief: Preliminary Reporting Tiers for AI-Bio Safety Evaluations appeared first on Frontier Model Forum.
The Frontier Model Forum submitted the response below on March 14, 2025 to the Request for Information on the Development of an Artificial Intelligence (AI) Action Plan. We are grateful for the opportunity to respond to the request for information from the Office of Science and Technology Policy (OSTP) on the “Development of an AI […]
Although frontier AI holds enormous promise for society, advanced AI systems may also pose significant risks to national security and public safety. Frontier AI safety frameworks have recently emerged as a method for frontier AI developers to demonstrate how they manage those risks effectively. By establishing processes for how to identify, evaluate, and mitigate severe […]
Frontier AI-bio safety evaluations aim to test the biological capabilities and, by extension, the potential biosafety implications of frontier AI. As the science of AI safety evaluations is still nascent, the evaluations themselves can vary widely in both purpose and methodology. As such, a key first step in building out an effective safety evaluation ecosystem […] The post Issue Brief: Preliminary Taxonomy of AI-Bio Safety Evaluations appeared first on Frontier Model Forum.
As frontier AI systems continue to advance, rigorous and scientifically grounded safety evaluations will be increasingly essential. Although frontier AI holds immense promise for society, the growing capabilities of advanced AI systems may also introduce risks to public safety and security. Ensuring such systems benefit society without compromising safety will depend on the development of […]
Artificial intelligence has long been a cornerstone of cybersecurity operations. From malware detection to network traffic analysis, predictive machine learning models and other narrow AI applications have been used in cybersecurity for decades. Yet recent advances in general-purpose AI, along with advances in predictive modeling, have ushered in a new generation of defensive applications and […] The post Issue Brief: AI for Cyber Defense appeared first on Frontier Model Forum.
Safety frameworks have recently emerged as an important tool for frontier AI safety. By specifying capability and/or risk thresholds, safety evaluations, and mitigation strategies for frontier AI models in advance of their development, safety frameworks position frontier AI developers to be able to address potential safety challenges in a principled and coherent way. Both government […] The post Issue Brief: Components of Frontier AI Safety Frameworks appeared first on Frontier Model Forum.
How the AI Safety Fund will advance the field of frontier model safety research

At the FMF, we believe that fostering safe AI development and deployment requires cultivating a vibrant research community. That’s why we’re supporting technical research to improve AI safety and enable independent, standardized evaluations of frontier AI capabilities and risks. Read more […]
The Frontier Model Forum is proud to be a founding member of the new U.S. Artificial Intelligence Safety Institute Consortium (AISIC) announced today by the National Institute of Standards and Technology (NIST). Established as part of NIST’s response to the Biden Administration’s Executive Order on Safe, Secure, and Trustworthy Development and Use of AI, the […]
Anthropic, Google, Microsoft & OpenAI announce Executive Director of the Frontier Model Forum & over $10 million for a new AI Safety Fund

Today, Anthropic, Google, Microsoft, and OpenAI are announcing the selection of Chris Meserole as the first Executive Director of the Frontier Model Forum, and the creation of a new AI Safety Fund, a […]