A library that allows developers to quickly search for embeddings of multimedia documents that are similar to each other.| ai.meta.com
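This description matches Faiss, Meta's open source similarity-search library. A minimal sketch of nearest-neighbour search over vector embeddings, assuming the faiss-cpu package and random placeholder data:

```python
import numpy as np
import faiss  # similarity search over dense vectors

d = 128                                                # embedding dimensionality
xb = np.random.random((10_000, d)).astype("float32")   # database embeddings (placeholder)
xq = np.random.random((5, d)).astype("float32")        # query embeddings (placeholder)

index = faiss.IndexFlatL2(d)   # exact L2 search; IVF/HNSW indexes trade accuracy for speed
index.add(xb)
distances, ids = index.search(xq, 4)                   # 4 nearest neighbours per query
print(ids)
```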
Video Joint Embedding Predictive Architecture 2 (V-JEPA 2) is the first world model trained on video that achieves state-of-the-art visual understanding...| ai.meta.com
Large language models (LLMs) introduce new security risks, but there are few comprehensive evaluation suites to measure and reduce these risks. We...| ai.meta.com
We’ve taken responsible steps before launching Meta AI and Meta Llama 3 so people can have safer and more enjoyable experiences.| ai.meta.com
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling...| ai.meta.com
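If the Code Llama weights are pulled from Hugging Face (the codellama/CodeLlama-7b-hf checkpoint is assumed here), a plain code-completion call looks roughly like this; the same checkpoints also support infilling via the tokenizer's fill-in-the-middle format:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"                 # assumed Hugging Face checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "def fibonacci(n: int) -> int:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```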
This blog is part of our 5 Steps to Getting Started series, where we go over the five steps you need to take to get started using an open source project by Meta.| ai.meta.com

We’re outlining our progress in developing the touch-sensing ecosystem (hardware, simulators, libraries, benchmarks, and data sets) necessary for...| ai.meta.com
Situated and Interactive Multimodal Conversations (SIMMC) is a first-of-its-kind dataset that has been open-sourced to help researchers and engineers...| ai.meta.com
We're announcing several new research milestones that push the limits of embodied AI, including the first audio-visual platform, a new framework for...| ai.meta.com
Facebook AI is introducing M2M-100, the first multilingual machine translation model that can translate between any pair of 100 languages without relying...| ai.meta.com
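The M2M-100 checkpoints are also distributed through Hugging Face; a minimal sketch of a direct Chinese-to-French translation (no English pivot), assuming the facebook/m2m100_418M checkpoint:

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

tokenizer.src_lang = "zh"                              # source language: Chinese
encoded = tokenizer("生活就像一盒巧克力。", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("fr"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```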
We are releasing pretrained HuBERT speech representation models and code for recognition and generation. By alternating clustering and prediction steps,...| ai.meta.com
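For readers who want to try the representations without the fairseq training code, torchaudio also packages a pretrained HuBERT-Base bundle; a minimal feature-extraction sketch (the input file name is a placeholder):

```python
import torch
import torchaudio

bundle = torchaudio.pipelines.HUBERT_BASE              # pretrained HuBERT-Base bundle
model = bundle.get_model()

waveform, sample_rate = torchaudio.load("speech.wav")  # placeholder mono input file
if sample_rate != bundle.sample_rate:
    waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

with torch.inference_mode():
    features, _ = model.extract_features(waveform)
print(features[-1].shape)                              # (batch, frames, 768) from the last layer
```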
Harmful content can evolve rapidly, so it’s crucial for AI systems to adapt quickly, too. We’ve built and deployed a new AI technology called Few-Shot...| ai.meta.com
We are releasing Detection Transformers (DETR), an important new approach to object detection and panoptic segmentation. It’s the first object detection...| ai.meta.com
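The DETR repository exposes pretrained detectors through Torch Hub; a minimal inference sketch with the ResNet-50 variant (the image path and the 0.9 confidence threshold are placeholders):

```python
import torch
from PIL import Image
import torchvision.transforms as T

model = torch.hub.load("facebookresearch/detr", "detr_resnet50", pretrained=True)
model.eval()

transform = T.Compose([
    T.Resize(800),
    T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = Image.open("street.jpg")                         # placeholder input image
with torch.no_grad():
    outputs = model(transform(img).unsqueeze(0))

# Keep the object queries whose class confidence exceeds 0.9; boxes are normalized cxcywh.
probs = outputs["pred_logits"].softmax(-1)[0, :, :-1]
keep = probs.max(-1).values > 0.9
print(outputs["pred_boxes"][0, keep])
```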
The researchers behind Meta AI’s open source Detectron computer vision library discuss how the project came about, how the open source community has contributed,...| ai.meta.com
Meta AI is sharing new work showing how we can create a conversational model for on-device voice assistants that overcomes the latency burden of seq2seq...| ai.meta.com
We’re working to improve our AI systems that identify and analyze hate speech in comments. Millions of pieces of content were detected in 2020 and...| ai.meta.com
AudioCraft is a single-stop code base for all your generative audio needs: music, sound effects, and compression after training on raw audio signals.| ai.meta.com
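A minimal text-to-music sketch with AudioCraft's MusicGen, assuming the audiocraft package and the facebook/musicgen-small checkpoint (the prompt and output file names are placeholders):

```python
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")   # assumed checkpoint name
model.set_generation_params(duration=8)                       # 8 seconds of audio

wavs = model.generate(["lo-fi drum loop with warm bass"])     # placeholder text prompt
for i, wav in enumerate(wavs):
    audio_write(f"sample_{i}", wav.cpu(), model.sample_rate, strategy="loudness")
```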
Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a...| ai.meta.com
We’re releasing the Video Joint Embedding Predictive Architecture (V-JEPA) model, a crucial step in advancing machine intelligence with a more grounded...| ai.meta.com
Here’s a look at what we announced at LlamaCon and how you can get started with our newest releases.| ai.meta.com
We’re introducing Llama 4 Scout and Llama 4 Maverick, the first open-weight natively multimodal models with unprecedented context support and our first...| ai.meta.com
Serving the billions of people who use Facebook’s products and technologies means continually evolving our AI frameworks. Today, we’re announcing that...| ai.meta.com
Meta AI is sharing OPT-175B, the first 175-billion-parameter language model to be made available to the broader AI research community.| ai.meta.com
Today, we’re excited to premiere Meta Movie Gen, our breakthrough generative AI research for media, which includes modalities like image, video, and audio.| ai.meta.com
I-JEPA learns by creating an internal model of the outside world, which compares abstract representations of images (rather than comparing the pixels...| ai.meta.com
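A conceptual sketch of that idea (not the released I-JEPA code; every module name here is a placeholder): an online encoder embeds the visible context patches, a predictor guesses the embeddings of masked target patches, and the loss is computed between representations rather than pixels.

```python
import torch
import torch.nn.functional as F

def jepa_step(context_encoder, target_encoder, predictor, context_patches, target_patches):
    """One training step of a JEPA-style objective (placeholder modules)."""
    ctx = context_encoder(context_patches)        # (B, N_ctx, D) context representations
    with torch.no_grad():                         # target encoder is typically an EMA copy
        tgt = target_encoder(target_patches)      # (B, N_tgt, D) target representations
    pred = predictor(ctx)                         # predict target embeddings from context
    return F.mse_loss(pred, tgt)                  # compare representations, not pixels
```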
We’re introducing the availability of Llama 2, the next generation of our open source large language model.| ai.meta.com
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to...| ai.meta.com
Meta Movie Gen is our latest research breakthrough that allows you to use simple text inputs to create videos and sounds, edit existing videos or...| ai.meta.com
Today, we’re releasing Llama 3.2, which includes small and medium-sized vision LLMs, and lightweight, text-only models that fit onto edge and mobile devices.| ai.meta.com
Llama models are approaching 350 million downloads to date, and they were downloaded more than 20 million times in the last month alone, making Llama the...| ai.meta.com
SAM 2 is a segmentation model that enables fast, precise selection of any object in any video or image.| ai.meta.com
By sharing our research and dataset, we hope to further accelerate research into segmentation and more general image and video understanding.| ai.meta.com
Today, we’re publicly releasing SAM 2, the first-ever unified model for segmenting anything in videos and images.| ai.meta.com
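A minimal image-segmentation sketch using the sam2 package from the facebookresearch/sam2 repository; the checkpoint path, config name, image, and click coordinates below are placeholders to adapt to your setup:

```python
import numpy as np
import torch
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

checkpoint = "checkpoints/sam2_hiera_large.pt"             # placeholder checkpoint path
model_cfg = "sam2_hiera_l.yaml"                            # matching config name
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

image = np.array(Image.open("frame.jpg").convert("RGB"))   # placeholder image
with torch.inference_mode():
    predictor.set_image(image)
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),               # one positive click (x, y)
        point_labels=np.array([1]),
    )
```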
Bringing open intelligence to all, our latest models expand context length, add support across eight languages, and include Meta Llama 3.1 405B—the...| ai.meta.com
Open the Meta AI app and start talking to get tailored answers, advice, and inspiration—or express yourself with fun new ways to edit your videos.| ai.meta.com
Today, we’re introducing Meta Llama 3, the next generation of our state-of-the-art open source large language model. In the coming months, we expect to...| ai.meta.com
Introducing the AI Research SuperCluster — Meta’s cutting-edge AI supercomputer for AI research| ai.meta.com
Invisible watermarking incorporates information into digital content. The watermark is invisible to the naked eye but can be detected by algorithms—even...| ai.meta.com
The latest updates on the Llama ecosystem coming out of Meta Connect 2023| ai.meta.com
AI Alliance Launches as an International Community of Leading Technology Developers, Researchers, and Adopters Collaborating Together to Advance Open,...| ai.meta.com
Meta AI announces Purple Llama, a project for open trust and safety in generative AI, with tools for cybersecurity and input/output filtering.| ai.meta.com
Make-A-Video builds on Meta AI’s recent research in generative technology and has the potential to open new opportunities for creators and artists.| ai.meta.com
Code Llama, which is built on top of Llama 2, is free for research and commercial use.| ai.meta.com
At Meta, our Responsible AI efforts are propelled by our mission to help ensure that AI at Meta benefits people and society. Learn more about how we...| ai.meta.com
Our work aims to break down language barriers across the world for everyone to understand and communicate with anyone—no matter what language they speak.| ai.meta.com
Today, we’re making our LLaMA (Large Language Model Meta AI) foundational model available under a gated release. LLaMA is more efficient and competitive with...| ai.meta.com
In 2020, we initiated the Meta Training and Inference Accelerator (MTIA) family of chips to support our evolving AI workloads, starting with an inference...| ai.meta.com
Sending an MP3 typically requires 128 kb/s of bandwidth. We can compress hi-fi audio down to 12 kb/s without sacrificing quality.| ai.meta.com
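This appears to describe Meta's EnCodec neural codec; a minimal round-trip sketch with the open source encodec package, assuming its 48 kHz stereo model and a 12 kbit/s target bandwidth (the input file name is a placeholder):

```python
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_48khz()
model.set_target_bandwidth(12.0)                       # kbit/s; 3, 6, 12, 24 are supported

wav, sr = torchaudio.load("input.wav")                 # placeholder input file
wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)

with torch.no_grad():
    frames = model.encode(wav)                         # discrete codes at ~12 kbit/s
    reconstructed = model.decode(frames)[:, :, : wav.shape[-1]]
```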
How can we build machines with human-level intelligence? There’s a limit to how far the field of AI can go with supervised learning alone. Here's why...| ai.meta.com
Llama 2 Version Release Date: July 18, 2023| ai.meta.com