Learn how to use Jupyter Agent on Hugging Face to automate Jupyter Notebook creation and explore datasets for Retrieval-Augmented Generation (RAG). Step-by-step guide with real-world examples.| AI Agents That Work Blog
Comprehensive comparison of Hugging Face and Ollama for local AI deployment. Learn setup, performance, use cases, and which platform suits your AI development needs.| Collabnix
In this article, we use a pretrained I-JEPA model for image similarity. We specifically use the ViT-H I-JEPA trained with 14x14 patches. The post JEPA Series Part 2: Image Similarity with I-JEPA appeared first on DebuggerCafe.| DebuggerCafe
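A minimal sketch of that image-similarity workflow, assuming the `facebook/ijepa_vith14_1k` checkpoint name on the Hub and mean-pooled patch embeddings compared with cosine similarity (the post's exact pipeline may differ):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def embed_image(path, model_name="facebook/ijepa_vith14_1k"):
    """Mean-pool I-JEPA patch embeddings into a single vector per image.
    Requires `pip install transformers torch pillow` and downloads the weights."""
    import torch
    from PIL import Image
    from transformers import AutoImageProcessor, AutoModel

    processor = AutoImageProcessor.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    inputs = processor(Image.open(path), return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # last_hidden_state: (batch, num_patches, hidden); average over patches.
    return outputs.last_hidden_state.mean(dim=1)[0].tolist()

# Usage (downloads the ViT-H weights, so not run here):
# score = cosine_similarity(embed_image("cat1.jpg"), embed_image("cat2.jpg"))
```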
Introduction: What is Hugging Face and Why It’s Revolutionizing AI Hugging Face has emerged as the definitive platform for machine learning and artificial intelligence development, often dubbed “the GitHub of machine learning.” If you’re working with AI in 2025, understanding Hugging Face isn’t just beneficial—it’s essential. This comprehensive guide will walk you through everything you […]| Collabnix
Learn how to install, configure, and deploy OpenAI's GPT OSS models (20B & 120B parameters) with this comprehensive step-by-step tutorial covering local inference, API access, and optimization techniques.| Collabnix
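A rough sketch of the local-inference step with a transformers text-generation pipeline; the model name `openai/gpt-oss-20b` is taken from the summary, while the pipeline arguments are assumptions, and the 20B model needs substantial GPU memory:

```python
def build_chat(system_msg, user_msg):
    """Assemble a chat-style message list in the format transformers pipelines accept."""
    return [
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ]

def generate(messages, model_name="openai/gpt-oss-20b", max_new_tokens=256):
    """Local text generation; requires `pip install transformers torch accelerate`."""
    from transformers import pipeline
    pipe = pipeline("text-generation", model=model_name,
                    torch_dtype="auto", device_map="auto")
    return pipe(messages, max_new_tokens=max_new_tokens)[0]["generated_text"]

# Usage (downloads the 20B weights, so not run here):
# print(generate(build_chat("You are concise.", "What is mixture-of-experts?")))
```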
In this article, we build a simple video summarizer application using Qwen2.5-Omni 3B model with the UI powered by Gradio. The post Video Summarizer Using Qwen2.5-Omni appeared first on DebuggerCafe.| DebuggerCafe
Fine-tuning the SmolLM2-135M Instruct model on the WMT14 French-to-English subset for machine translation using a small language model.| DebuggerCafe
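The data-preparation step for such a run might look like this; a sketch assuming the Hub's `wmt14` dataset with its `fr-en` config and a simple instruction-style prompt (the post's actual prompt format may differ):

```python
def format_pair(example, src="fr", tgt="en"):
    """Turn one WMT14 record ({"translation": {"fr": ..., "en": ...}})
    into a single instruction-style training string."""
    pair = example["translation"]
    return (f"Translate French to English.\n"
            f"French: {pair[src]}\n"
            f"English: {pair[tgt]}")

# Usage with the Hub dataset (requires `pip install datasets`):
# from datasets import load_dataset
# ds = load_dataset("wmt14", "fr-en", split="train[:1000]")
# print(format_pair(ds[0]))
```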
As of July 2025, the Hugging Face platform has rolled out exciting updates that empower developers, researchers, and businesses| Hugging Face
Qwen2.5-Omni is a multimodal generative AI model capable of accepting text, image, audio, and video as input while outputting text and audio.| DebuggerCafe
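Multimodal chat models of this kind typically take a conversation list that mixes content types; a minimal sketch of building one user turn (the key names follow the convention used by Qwen's chat templates, but are an assumption here):

```python
def build_conversation(text=None, image=None, audio=None, video=None):
    """Assemble one multimodal user turn in the list-of-content-parts chat format."""
    content = []
    if image:
        content.append({"type": "image", "image": image})
    if audio:
        content.append({"type": "audio", "audio": audio})
    if video:
        content.append({"type": "video", "video": video})
    if text:
        content.append({"type": "text", "text": text})
    return [{"role": "user", "content": content}]

# conversation = build_conversation(text="Describe this clip.", video="clip.mp4")
```

The processor and generation classes for Qwen2.5-Omni ship in recent transformers releases; check the model card for the exact class names before wiring this into a full pipeline.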
JanusFlow is an advanced framework designed to unify image understanding and generation within a single model. It introduces a streamlined architecture that combines autoregressive language models with rectified flow, a state-of-the-art technique in generative modeling. A primary finding is that rectified flow can be trained effectively within the large language model framework, simplifying the process by removing […] The post JanusFlow appeared first on Hugging Face.| Hugging Face
Smol TTS models are here! OuteTTS-0.1-350M: zero-shot voice cloning, built on the LLaMA architecture, CC-BY license! 🔥 A three-step approach to TTS. The model is extremely impressive for 350M parameters! Kudos to the @OuteAI team on such a brilliant feat – I’d love to see this applied to larger data and smarter backbones like […]| Hugging Face
Wikimedia Enterprise has released an early beta dataset on Hugging Face, allowing the public to use it freely and provide feedback for future improvements. This dataset is sourced from the Snapshot API, which delivers bulk database dumps, or “snapshots,” of Wikimedia projects. In this release, the dataset includes English and French Wikipedia articles. It’s built […] The post Wikipedia Dataset appeared first on Hugging Face.| Hugging Face
Today, we're excited to introduce Qwen2-Math, a series of math-focused LLMs within the Qwen2 series, including Instruct-1.5B/7B/72B.| Hugging Face
We are excited to announce that XetHub, a Seattle-based company, has been acquired by Hugging Face.| Hugging Face
Launch of Hugging Face Inference-as-a-Service powered by NVIDIA NIM, a new service on the Hugging Face Hub.| Hugging Face
Fine-tuning the Phi 1.5 model on the BBC News Summary dataset for Text Summarization using Hugging Face Transformers.| DebuggerCafe
Large Language Models (LLMs) trained for causal language modeling are versatile and can handle a broad spectrum of tasks. However, they often falter with simpler tasks such as logic, calculation, and search. When these models are used in areas where they are less effective, the results may not meet expectations. To mitigate these limitations, the […]| Hugging Face
The Phi-3 family comprises four models, each fine-tuned for specific instructions and developed according to Microsoft's standards for responsible AI| Hugging Face
An instruction-following Jupyter Notebook interface with a QLoRA fine-tuned Phi 1.5 model and the Hugging Face Transformers library.| DebuggerCafe
Fine-tuning Phi 1.5 using QLoRA on the Stanford Alpaca instruction-tuning dataset with the Hugging Face Transformers library.| DebuggerCafe
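A condensed sketch of that QLoRA setup, assuming the standard Alpaca record fields (`instruction`, `input`, `output`) and 4-bit NF4 quantization via bitsandbytes; the `target_modules` list is an assumption for the Phi architecture and should be checked against the checkpoint:

```python
def format_alpaca(example):
    """Render one Stanford Alpaca record as a single prompt/response string."""
    if example.get("input"):
        return (f"### Instruction:\n{example['instruction']}\n\n"
                f"### Input:\n{example['input']}\n\n"
                f"### Response:\n{example['output']}")
    return (f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['output']}")

def load_qlora_model(model_name="microsoft/phi-1_5"):
    """Load the model in 4-bit and attach LoRA adapters.
    Requires `pip install transformers peft bitsandbytes` and a CUDA GPU."""
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    bnb = BitsAndBytesConfig(load_in_4bit=True,
                             bnb_4bit_quant_type="nf4",
                             bnb_4bit_compute_dtype=torch.float16)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, quantization_config=bnb, device_map="auto")
    # target_modules is an assumption; inspect the model to pick the right layers.
    lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "k_proj", "v_proj"],
                      task_type="CAUSAL_LM")
    return get_peft_model(model, lora)
```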
When Mozilla’s Innovation group first launched the llamafile project late last year, we were thrilled by the immediate positive response from open source AI developers. It’s become one of Mozilla’s top three most-favorited repositories on GitHub, attracting a number of contributors, some excellent PRs, and a growing community on our Discord server. The post Llamafile’s progress, four months in appeared first on Mozilla Hacks - the Web developer blog.| Mozilla Hacks – the Web developer blog
In this article, I walk you through the options, with their advantages and drawbacks, for using AI in a company. Using artificial intelligence in 2024 has become a major challenge for companies that want to stay competitive. Today, there are three main options for taking advantage of this technology: Nevertheless, it can be difficult to determine which option will suit […] The article Comment utiliser l’IA en entreprise ? – Guide 2024 appeared first on Inside Machine Learning.| Inside Machine Learning
We have released a new version of Colour - Checker Detection that implements a new machine learning inference method to detect colour rendition charts, specifically the ColorChecker Classic 24 from X-Rite.| Colour Science
In this article, you will learn how to use Habana® Gaudi®2 to accelerate model training and inference, and train bigger models with 🤗 Optimum Habana.| Intel Gaudi Developers
Fine-tuning GPT-2 with Hugging Face and Habana Gaudi. In this tutorial, we demonstrate fine-tuning a GPT-2 model on Habana Gaudi AI processors using the Hugging Face optimum-habana library with DeepSpeed.| Habana Developers
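The causal-LM preprocessing such a run relies on can be sketched as follows; the chunking helper is standard, while the Gaudi-specific classes from `optimum.habana` are only noted in comments since they need Gaudi hardware:

```python
def group_texts(token_ids, block_size=512):
    """Pack a flat list of token ids into fixed-size blocks for causal LM
    training, dropping the trailing remainder."""
    total = (len(token_ids) // block_size) * block_size
    return [token_ids[i:i + block_size] for i in range(0, total, block_size)]

# On Gaudi, the training loop swaps the stock Trainer for the optimum-habana
# equivalents (argument names here are assumptions; see the library docs):
# from optimum.habana import GaudiTrainer, GaudiTrainingArguments
# args = GaudiTrainingArguments(output_dir="out", use_habana=True,
#                               use_lazy_mode=True,
#                               gaudi_config_name="Habana/gpt2")
# trainer = GaudiTrainer(model=model, args=args, train_dataset=train_ds)
```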
We have optimized additional Large Language Models on Hugging Face using the Optimum Habana library.| Intel Gaudi Developers