ComfyUI Desktop makes it easier than ever to run ComfyUI locally without touching the command line. It's the most beginner-friendly way to start creating AI images and videos locally: no Python setup, Git installation, or configuration headaches. If you just want to get started quickly, ComfyUI Desktop is the clear winner. | Stable Diffusion Art
Long-time member Heinz Zysset kindly shares this high-resolution text-to-video workflow built on the Wan 2.2 AIO model. Step 1: Download the Wan 2.2 AIO model wan2.2-t2v-rapid-aio.safetensors and put it in ComfyUI > models > checkpoints, then download the 4x-UltraSharp model and put it in the ComfyUI > models > upscale_models folder. Step 2: Download the workflow. | Stable Diffusion Art
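For readers who prefer to script Step 1, here is a minimal sketch in Python; the ComfyUI location and the download URLs are placeholders, so substitute your own install path and the actual links from the article:

```python
# Hypothetical helper for Step 1: place the two models in the folders
# the workflow expects. The URLs below are placeholders, not real links.
from pathlib import Path
import urllib.request

COMFYUI = Path.home() / "ComfyUI"  # adjust to your install location

downloads = {
    "https://example.com/wan2.2-t2v-rapid-aio.safetensors":
        COMFYUI / "models" / "checkpoints" / "wan2.2-t2v-rapid-aio.safetensors",
    "https://example.com/4x-UltraSharp.pth":
        COMFYUI / "models" / "upscale_models" / "4x-UltraSharp.pth",
}

for url, dest in downloads.items():
    dest.parent.mkdir(parents=True, exist_ok=True)  # create folders if missing
    urllib.request.urlretrieve(url, dest)           # plain blocking download
    print(f"saved {dest}")
```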
Qwen Image is a new open-source text-to-image model developed by Alibaba's Qwen team. It's quickly gaining thumbs-up from AI creators. | Stable Diffusion Art
Generating Qwen images with ControlNet unlocks a powerful way to guide your AI creations using visual structure, lines, and forms drawn or extracted from reference images. Want better control over your AI image generation? Here's how to use Qwen Image with InstantX Union ControlNet to guide your creations with poses, lines, and forms. | ThinkDiffusion
Wan 2.1 Video is a series of open foundational video models. It supports a wide range of video-generation tasks. It can turn images or text descriptions into videos. | Stable Diffusion Art
ComfyUI is a node-based Stable Diffusion GUI. This step-by-step guide covers installing ComfyUI on Windows and Mac.| Stable Diffusion Art
ComfyUI is a popular, open-source user interface for Stable Diffusion, Flux, and other AI image and video generators. It makes it easy for users to build generation pipelines by connecting nodes. | Stable Diffusion Art
ComfyUI is a popular way to run local Stable Diffusion and Flux AI image models. It is a great complement to AUTOMATIC1111 and Forge. Some workflows may only be available in ComfyUI. | Stable Diffusion Art
You can use a reference image to direct AI image generation using ControlNet. Below is an example of copying the pose of the girl on the left to generate a new image with the same pose. | Stable Diffusion Art
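The article demonstrates this in a GUI; purely as an illustration of the same idea in code, here is a minimal sketch using the Hugging Face diffusers library with an OpenPose ControlNet. It assumes a CUDA GPU and that pose_reference.png is a pre-extracted OpenPose skeleton image:

```python
# Minimal pose-guided generation with ControlNet via diffusers.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # community mirror of SD 1.5
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

pose = load_image("pose_reference.png")  # skeleton extracted from the reference photo
image = pipe(
    "photo of a woman dancing on a beach",  # your prompt
    image=pose,                             # the pose guides the composition
    num_inference_steps=30,
).images[0]
image.save("pose_copied.png")
```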
This updated guide helps RunDiffusion users fix red nodes, missing models, broken paths, and deprecated nodes in the new ComfyUI layout. Learn to use Comfy Manager and restart sessions properly.| RunDiffusion
Ever found yourself wishing a portrait could actually speak, sharing stories with real movement and emotion? Now, that spark of imagination is within reach, with no complicated setup required. With just a bit of creative input, you can watch your favorite images transform into talking, moving portraits. | ThinkDiffusion
💡Credits to the awesome Benji for this workflow. Original link: https://www.youtube.com/watch?v=b69Qs0wvaFE&t=311s Uni3C is a ComfyUI model by Alibaba that converts static images into dynamic videos by transferring camera movements from reference videos. This tutorial covers the complete setup. | ThinkDiffusion
This Wan 2.2 image-to-video workflow lets you fix the first and last frames and generates a video connecting the two (FLF2V). See the example below.| Stable Diffusion Art
What this workflow does: this ComfyUI workflow creates smooth animations by taking your starting image, generating an end frame with AI, creating seamless transitions between both frames, and maintaining consistent subjects and backgrounds throughout. 💡Credits to the awesome TheArtOfficial for this workflow. Original link: https://www. | ThinkDiffusion
Wan 2.2 is one of the best local video models. It is known for generating high-quality videos. But if you set the frame count to 1, you get an image! | Stable Diffusion Art
Transform static portraits into realistic talking videos with perfect lip-sync using MultiTalk AI. No coding required. Difficulty: beginner-friendly. Setup time: 15 minutes. What you'll create: turn any portrait - artwork, photos, or digital characters - into speaking, expressive videos that sync perfectly with audio input. | ThinkDiffusion
Wan 2.2 is a high-quality video AI model you can run locally on your computer. This tutorial covers the software needed and text-to-video generation with Wan 2.2. If you use my ComfyUI Colab notebook, you don't need to download the model as instructed below; just select the Wan_2_2 model before running the notebook. | Stable Diffusion Art
Wan 2.2 is a local video model that can turn text or images into videos. In this article, I will focus on the popular image-to-video function. | Stable Diffusion Art
Flux Kontext expands your images into panoramic views directly in ComfyUI. Instead of cropping or stretching, it intelligently generates new content that extends beyond your image borders, creating seamless panoramic scenes. What you'll get: this workflow takes a standard image and generates extended panoramic versions. | ThinkDiffusion
Do you have a photo you want to turn into a unique animation style? Long-time member Heinz Zysset kindly shares his stylization workflow with our site. | Stable Diffusion Art
Flux Kontext is an AI image editing model by Black Forest Labs that excels at targeted modifications. Instead of generating entirely new images, it edits existing ones based on your text instructions.| ThinkDiffusion
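For context, the diffusers library also ships a pipeline for this model. Here is a minimal sketch of a text-instructed edit, assuming you have access to the FLUX.1-Kontext-dev weights on Hugging Face and a CUDA GPU; the input file name and edit prompt are made up for illustration:

```python
# Minimal instruction-based image edit with Flux Kontext via diffusers.
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")

source = load_image("portrait.png")  # the image to edit
edited = pipe(
    image=source,
    prompt="change the jacket to red, keep everything else the same",
    guidance_scale=2.5,  # value recommended in the model documentation
).images[0]
edited.save("portrait_edited.png")
```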
MAGREF lets you create videos from multiple reference images while keeping each person or object looking consistent throughout the video. This guide shows you how to set up and use MAGREF in ComfyUI to create videos with multiple subjects that maintain their original appearance.| ThinkDiffusion
ComfyUI is known for running local image and video AI models. Recently, it added support for running proprietary closed models through API nodes. As of writing, you can use popular models from Kling, Google Veo, OpenAI, RunwayML, and Pika, among others. In this article, I will show you how to set up and use ComfyUI API nodes. | Stable Diffusion Art
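The API nodes themselves are configured inside the ComfyUI app, but as background: any workflow, including one containing API nodes, can be queued programmatically against a locally running ComfyUI server. A minimal sketch, assuming the default address 127.0.0.1:8188 and a workflow exported with "Save (API Format)":

```python
# Queue a workflow on a local ComfyUI server over its HTTP API.
import json
import urllib.request

with open("workflow_api.json") as f:  # exported with "Save (API Format)"
    workflow = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",  # default ComfyUI address
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
response = json.load(urllib.request.urlopen(req))
print(response["prompt_id"])  # use this id to poll /history for results
```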
This guide covers ATI (Any Trajectory Instruction) - ByteDance's tool for controlling motion in AI-generated videos. You'll learn what it does, how to set it up in ComfyUI, and how to use it to create videos with precise movement control. | ThinkDiffusion
Have you ever looked at a photo and imagined it moving, maybe even starring in its own short film? Now you can turn that daydream into reality, no animation degree required! Welcome to the world of Wan 2.1 VACE. | ThinkDiffusion
Ever wished you could magically expand your videos to reveal what's just out of frame - like adding more scenery, characters, or even special effects? This cutting-edge AI model lets you effortlessly extend the edges of your videos, filling in new, seamless content. | ThinkDiffusion
WAN 2.1 VACE (Video All-in-One Creation and Editing) is a video generation and editing AI model that you can run locally on your computer. It unifies multiple video creation and editing tasks in a single model. | Stable Diffusion Art
Sometimes a single image can say more than a thousand words, but what if it could actually tell a story, move, and even express emotion? In a world where digital content is everywhere, the idea of breathing life into a still photo feels like something out of science fiction. | ThinkDiffusion
WAN 2.1 VACE (Video All-in-One Creation and Editing) is a video generation and editing model developed by the Alibaba team. It unifies text-to-video, image-to-video, and video editing tasks. | Stable Diffusion Art
This workflow generates four video clips and combines them into a single video. To improve the quality and control of each clip, the initial frame of each clip is generated first. | Stable Diffusion Art
This workflow generates beautiful videos of mechanical insects from text prompts. You can run it locally or with a ComfyUI service. It uses Flux AI to generate the initial frames. | Stable Diffusion Art
Ever wish you could step behind the camera and change the angle of a scene after you've already shot the video? That's exactly the kind of movie magic ReCamMaster brings to the table. What is ReCamMaster AI? ReCamMaster is a cutting-edge AI framework that re-renders existing videos from new camera trajectories. | ThinkDiffusion
Wan 2.1 Video is a state-of-the-art AI model that you can use locally on your PC. However, it does take some time to generate a high-quality 720p video. | Stable Diffusion Art
The recently released Wan 2.1 is a groundbreaking open-source AI video model. Renowned for its ability to exceed the performance of other open-source models like Hunyuan and LTX, as well as numerous commercial alternatives, Wan 2.1 delivers truly incredible text2video and image2video generations. | ThinkDiffusion
💡5/7/2025 Changelog: updated the workflow and tutorial procedures; it now uses the latest model, LatentSync 1.5. Lip-synced video from just an audio clip and a base video? We got you! LatentSync is an advanced lip-sync framework that creates natural-looking speech by analyzing audio and generating matching lip movements. | ThinkDiffusion
We can now use LoRAs together with the AI video model Hunyuan. Why? To keep characters or objects consistent in a video. | ThinkDiffusion
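As an illustration outside ComfyUI, the diffusers library exposes the same idea. A minimal sketch of loading a character LoRA into the HunyuanVideo pipeline; the LoRA repo id is a placeholder, and the base repo is the community diffusers conversion of the model:

```python
# Text-to-video with HunyuanVideo plus a character-consistency LoRA.
import torch
from diffusers import HunyuanVideoPipeline
from diffusers.utils import export_to_video

pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("your-username/my-character-lora")  # placeholder repo id
pipe.vae.enable_tiling()  # reduce VRAM use while decoding frames
pipe.to("cuda")

frames = pipe(
    prompt="the character walking through a neon-lit street",
    num_frames=61,
    num_inference_steps=30,
).frames[0]
export_to_video(frames, "character_walk.mp4", fps=15)
```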
LTX speeds up the video-making process so you can focus on what really matters — telling your story and connecting with your audience. In this guide, we'll focus on Image2Video generation specifically, and we’ll explore all the features that make LTX a game-changer.| ThinkDiffusion
A development team migrated from Next.js to pure React after hitting slow build times, failures with the recent […]| DEVCLASS
💡Update 03/14/2025: uploaded a new version of the workflow. Hey there, video enthusiasts! It's a thrill to see how quickly things are changing, especially in the way we create videos. Picture this: with just a few clicks, you can transform your existing clips. | ThinkDiffusion
Hunyuan Video is a new local and open-source video model with exceptional quality. It can generate a short video clip from a text prompt alone in a few minutes. | Stable Diffusion Art