Hugging Face AI is a platform and community dedicated to machine learning and data science. | Hugging Face
As of July 2025, the Hugging Face platform has rolled out updates aimed at developers, researchers, and businesses.
JanusFlow is a framework designed to unify image understanding and generation within a single model. It introduces a streamlined architecture that combines autoregressive language models with rectified flow, a recent technique in generative modeling. A key finding is that rectified flow can be trained effectively within the large language model framework, simplifying the process by removing […]
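For readers unfamiliar with rectified flow: it trains a velocity field along straight-line interpolations between a noise sample and a data sample. A standard textbook formulation (not taken from the JanusFlow post, and not necessarily the exact objective JanusFlow uses) is:

```latex
x_t = (1 - t)\,x_0 + t\,x_1, \qquad t \in [0, 1]
\mathcal{L}(\theta) = \mathbb{E}_{t,\,x_0,\,x_1}\left[\,\left\| v_\theta(x_t, t) - (x_1 - x_0) \right\|^2\,\right]
```

Here \(x_0\) is noise, \(x_1\) is data, and \(v_\theta\) is the learned velocity; sampling integrates \(\mathrm{d}x/\mathrm{d}t = v_\theta(x, t)\) from \(t = 0\) to \(t = 1\).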
Smol TTS models are here! OuteTTS-0.1-350M: zero-shot voice cloning, built on the LLaMa architecture, CC-BY license! 🔥 Three-step approach to TTS: the model is extremely impressive for 350M parameters! Kudos to the @OuteAI team on such a brilliant feat. I'd love to see this applied to larger data and smarter backbones like […]
Janus 1.3B is making waves as a cutting-edge, multi-modal language model (LM) that excels in a wide range of tasks.
Wikimedia Enterprise has released an early beta dataset on Hugging Face, allowing the public to use it freely and provide feedback for future improvements. This dataset is sourced from the Snapshot API, which delivers bulk database dumps, or “snapshots,” of Wikimedia projects. In this release, the dataset includes English and French Wikipedia articles. It’s built […]
The Vits-ar-sa-huba model is a cutting-edge text-to-speech (TTS) system tailored for the Saudi dialect.
Phi-3.5-MoE is a cutting-edge, lightweight open model developed from the Phi-3 datasets, which include synthetic data and curated publicly available documents, emphasizing high-quality and reasoning-intensive information. It supports multiple languages and features a 128K token context length. The model has undergone extensive refinement through supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure […]
Today, we're excited to introduce Qwen2-Math, a series of math-focused LLMs within the Qwen2 series, including Instruct-1.5B/7B/72B.
We are excited to announce that XetHub, a Seattle-based company, has been acquired by Hugging Face.
Hugging Face Inference-as-a-Service, powered by NVIDIA NIM, has launched as a new service on the Hugging Face Hub.
Large Language Models (LLMs) trained for causal language modeling are versatile and can handle a broad spectrum of tasks. However, they often falter with simpler tasks such as logic, calculation, and search. When these models are used in areas where they are less effective, the results may not meet expectations. To mitigate these limitations, the […]
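One common mitigation for the calculation weakness described above is to route arithmetic to deterministic code rather than the model. The sketch below is my own illustration of that idea (the function names and the `CALC[...]` tool-call convention are invented for this example, not taken from the post):

```python
import ast
import operator

# Map AST operator node types to Python's deterministic arithmetic.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate a plain arithmetic expression without using eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))

# In a tool-use setup, the model would be prompted to emit a call such as
# CALC[2 * (3 + 4)]; the host executes it and feeds the exact result back
# into the model's context instead of trusting the model's own arithmetic.
print(safe_eval("2 * (3 + 4)"))  # → 14
```

The point of the design is that the model only has to decide *when* to delegate; the answer itself comes from code that cannot miscalculate.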
The Phi-3 family comprises four models, each fine-tuned for specific instructions and developed according to Microsoft's standards for responsible AI.