Meditron, a suite of open-source large multimodal foundation models tailored to the medical field, was built on Meta Llama 2.| ai.meta.com
Students can ask their AI-enabled study buddy questions on WhatsApp and Messenger and receive conversational replies that help them with their schoolwork.| ai.meta.com
Diffusion models are a powerful generative framework, but come with expensive inference. Existing acceleration methods often compromise image quality or...| ai.meta.com
Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a...| ai.meta.com
Create immersive videos, discover our latest AI technology and see how we bring personal superintelligence to everyone.| ai.meta.com
We are sharing details of our next generation chip in our Meta Training and Inference Accelerator (MTIA) family. MTIA is a long-term bet to provide the...| ai.meta.com
A library that allows developers to quickly search for embeddings of multimedia documents that are similar to each other.| ai.meta.com
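This appears to describe FAISS, Meta's open-source similarity-search library; a minimal nearest-neighbour sketch under that assumption follows (the dimensions and data are illustrative):

```python
# A minimal similarity-search sketch, assuming the library referenced is FAISS
# (pip install faiss-cpu). Dimensions and data here are illustrative only.
import numpy as np
import faiss

d = 128                                               # embedding dimensionality
xb = np.random.random((10_000, d)).astype("float32")  # document embeddings
xq = np.random.random((5, d)).astype("float32")       # query embeddings

index = faiss.IndexFlatL2(d)   # exact L2 index; no training required
index.add(xb)                  # add the document embeddings
D, I = index.search(xq, 4)     # 4 nearest neighbours per query
print(I)                       # row i holds the indices most similar to query i
```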
Video Joint Embedding Predictive Architecture 2 (V-JEPA 2) is the first world model trained on video that achieves state-of-the-art visual understanding...| ai.meta.com
We release Code Llama, a family of large language models for code based on Llama 2, providing state-of-the-art performance among open models, infilling...| ai.meta.com
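The infilling capability mentioned above can be exercised through the Hugging Face transformers integration of Code Llama; a minimal sketch, assuming the codellama/CodeLlama-7b-hf checkpoint and its documented `<FILL_ME>` sentinel token (the example function is illustrative):

```python
# A minimal infilling sketch, assuming the Hugging Face transformers
# integration of Code Llama; the prompt below is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# <FILL_ME> marks the span the model should fill between prefix and suffix.
prompt = 'def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result'
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
output = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens and splice them back into the prompt.
filling = tokenizer.batch_decode(output[:, input_ids.shape[1]:],
                                 skip_special_tokens=True)[0]
print(prompt.replace("<FILL_ME>", filling))
```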
This blog is a part of our 5 Steps to Getting Started series, where we go over the 5 steps you need to take to get started with an open source project by Meta.| ai.meta.com
We’re releasing the Video Joint Embedding Predictive Architecture (V-JEPA) model, a crucial step in advancing machine intelligence with a more grounded...| ai.meta.com
Here’s a look at what we announced at LlamaCon and how you can get started with our newest releases.| ai.meta.com
We’re introducing Llama 4 Scout and Llama 4 Maverick, the first open-weight natively multimodal models with unprecedented context support and our first...| ai.meta.com
Serving the billions of people who use Facebook’s products and technologies means continually evolving our AI frameworks. Today, we’re announcing that...| ai.meta.com
Meta AI is sharing OPT-175B, the first 175-billion-parameter language model to be made available to the broader AI research community.| ai.meta.com
Today, we’re excited to premiere Meta Movie Gen, our breakthrough generative AI research for media, which includes modalities like image, video, and audio.| ai.meta.com
I-JEPA learns by creating an internal model of the outside world, which compares abstract representations of images (rather than comparing the pixels...| ai.meta.com
We’re introducing the availability of Llama 2, the next generation of our open source large language model.| ai.meta.com
Meta Movie Gen is our latest research breakthrough that allows you to use simple text inputs to create videos and sounds, edit existing videos or...| ai.meta.com
Today, we’re releasing Llama 3.2, which includes small and medium-sized vision LLMs, and lightweight, text-only models that fit onto edge and mobile devices.| ai.meta.com
Llama models are approaching 350 million downloads to date, and they were downloaded more than 20 million times in the last month alone, making Llama the...| ai.meta.com
Bringing open intelligence to all, our latest models expand context length, add support across eight languages, and include Meta Llama 3.1 405B, the...| ai.meta.com
Create, remix and share videos and images with industry-leading AI models while exploring new ideas with our all-in-one, streamlined creation flow.| ai.meta.com
Today, we’re introducing Meta Llama 3, the next generation of our state-of-the-art open source large language model. In the coming months, we expect to...| ai.meta.com
Introducing the AI Research SuperCluster — Meta’s cutting-edge AI supercomputer for AI research| ai.meta.com
Invisible watermarking incorporates information into digital content. The watermark is invisible to the naked eye but can be detected by algorithms—even...| ai.meta.com
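To make the general idea concrete, here is a toy least-significant-bit sketch of hiding and recovering information in pixel data; it illustrates the concept only, not the watermarking method the post describes, which is designed to survive edits and compression:

```python
# A toy illustration of invisible watermarking, NOT Meta's production method.
# It hides one byte in the least significant bits of eight pixels.
import numpy as np

def embed_byte(pixels: np.ndarray, byte: int) -> np.ndarray:
    """Write the 8 bits of `byte` into the LSBs of the first 8 pixel values."""
    out = pixels.copy()
    for i in range(8):
        bit = (byte >> (7 - i)) & 1
        out.flat[i] = (out.flat[i] & 0xFE) | bit  # replace the LSB
    return out

def extract_byte(pixels: np.ndarray) -> int:
    """Recover the hidden byte from the LSBs."""
    byte = 0
    for i in range(8):
        byte = (byte << 1) | (int(pixels.flat[i]) & 1)
    return byte

img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)   # dummy image
marked = embed_byte(img, ord("M"))
assert extract_byte(marked) == ord("M")                # detectable by algorithm
assert np.max(np.abs(marked.astype(int) - img)) <= 1   # imperceptible change
```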
The latest updates on the Llama ecosystem coming out of Meta Connect 2023| ai.meta.com
AI Alliance Launches as an International Community of Leading Technology Developers, Researchers, and Adopters Collaborating to Advance Open,...| ai.meta.com
Meta AI announces Purple Llama, a project for open trust and safety in generative AI, with tools for cybersecurity and input/output filtering.| ai.meta.com
Make-A-Video builds on Meta AI’s recent research in generative technology and has the potential to open new opportunities for creators and artists.| ai.meta.com
Code Llama, which is built on top of Llama 2, is free for research and commercial use.| ai.meta.com
At Meta, our Responsible AI efforts are propelled by our mission to help ensure that AI at Meta benefits people and society. Learn more about how we...| ai.meta.com
Our work aims to break down language barriers across the world for everyone to understand and communicate with anyone—no matter what language they speak.| ai.meta.com
Today, we’re releasing our LLaMA (Large Language Model Meta AI) foundational model under a gated release. LLaMA is more efficient and competitive with...| ai.meta.com
In 2020, we initiated the Meta Training and Inference Accelerator (MTIA) family of chips to support our evolving AI workloads, starting with an inference...| ai.meta.com
Sending an MP3 typically requires 128 kb/s of bandwidth. We can compress HiFi audio down to 12 kb/s without sacrificing quality.| ai.meta.com
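That is roughly a 10.7x reduction (128 / 12). A minimal usage sketch, assuming the codec described is Meta's EnCodec and its open-source encodec package (the input file name below is illustrative):

```python
# A minimal usage sketch, assuming this refers to Meta's EnCodec neural codec
# (pip install encodec); the input file name is illustrative.
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(12.0)          # target 12 kb/s, as in the post

wav, sr = torchaudio.load("input.wav")    # illustrative file name
wav = convert_audio(wav, sr, model.sample_rate, model.channels)

with torch.no_grad():
    frames = model.encode(wav.unsqueeze(0))  # discrete codes at ~12 kb/s
    reconstructed = model.decode(frames)     # decode back to a waveform
```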
How can we build machines with human-level intelligence? There’s a limit to how far the field of AI can go with supervised learning alone. Here's why...| ai.meta.com
Llama 2 Version Release Date: July 18, 2023| ai.meta.com