Prompt: A male assassin dances fluidly atop a city rooftop at night, full dark robe attire blending modern tactical gear with elegant, flowing elements. Neon lights from the city skyline reflect off his outfit as he moves with precision and grace, his silhouette striking…
Generating Qwen images with ControlNet unlocks a powerful way to guide your AI creations using visual structure, lines, and forms drawn or extracted from reference images. Want better control over your AI image generation? Here's how to use Qwen Image with the InstantX Union ControlNet to guide your creations with poses…
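The tutorial wires this up as ComfyUI nodes, but the underlying conditioning idea is easy to show in code. Below is a minimal diffusers sketch using the Stable Diffusion ControlNet pipeline as a stand-in (not the Qwen + InstantX Union setup the guide covers); the file names and the canny preprocessing are illustrative assumptions:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Extract structure from a reference image (canny edges here; pose or depth maps work the same way).
ref = np.array(load_image("reference.png"))  # hypothetical local file
gray = cv2.cvtColor(ref, cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Load a ControlNet and plug it into the base pipeline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The control image constrains composition; the prompt fills in content and style.
result = pipe(
    "a dancer on a neon-lit rooftop at night",
    image=control_image,
    num_inference_steps=30,
).images[0]
result.save("output.png")
```

The pattern is the same whatever the base model: a preprocessor turns the reference into a control map, and the ControlNet injects that structure into every denoising step.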
Learn how to use Stable Diffusion's latest capabilities with Automatic1111, ComfyUI, Fooocus, and more.
Ever found yourself wishing a portrait could actually speak, sharing stories with real movement and emotion? Now, that spark of imagination is within reach—no complicated setups required. With just a bit of creative input, you can watch your favorite images transform into…
💡Credits to the awesome Benji for this workflow. Original Link - https://www.youtube.com/watch?v=b69Qs0wvaFE&t=311s
Uni3C is a ComfyUI model by Alibaba that converts static images into dynamic videos by transferring camera movements from reference videos. This tutorial covers complete…
What This Workflow Does
This ComfyUI workflow creates smooth animations by:
* Taking your starting image
* Generating an end frame with AI
* Creating seamless transitions between both frames
* Maintaining consistent subjects and backgrounds throughout
💡Credits to the awesome TheArtOfficial for this workflow. Original Link: https://www.
Transform static portraits into realistic talking videos with perfect lip-sync using MultiTalk AI. No coding required.
Difficulty: Beginner-friendly
Setup Time: 15 minutes
What You'll Create
Turn any portrait - artwork, photos, or digital characters - into speaking, expressive videos that sync perfectly with audio input.
Flux Kontext expands your images into panoramic views directly in ComfyUI. Instead of cropping or stretching, it intelligently generates new content that extends beyond your image borders, creating seamless panoramic scenes.
What You'll Get
This workflow takes a standard image and generates extended panoramic versions…
Want to create custom AI image models but find the process intimidating? This guide shows you how to train your own LoRA models using FluxGym - no coding experience required. Whether you want to generate images in a specific art style, create consistent characters, or adapt AI models for your…
Flux Kontext is an AI image editing model by Black Forest Labs that excels at targeted modifications. Instead of generating entirely new images, it edits existing ones based on your text instructions.
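For readers who want to try Kontext outside ComfyUI, here is a minimal sketch assuming the diffusers FluxKontextPipeline and the FLUX.1-Kontext-dev checkpoint (verify both against the current diffusers docs; the input file and prompt are placeholders):

```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

# Load the Kontext editing pipeline (gated model; requires HF access approval).
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Kontext takes the image to edit plus a plain-language instruction,
# and changes only what the instruction asks for.
image = load_image("portrait.png")  # hypothetical input file
edited = pipe(
    image=image,
    prompt="make the jacket red, keep everything else unchanged",
    guidance_scale=2.5,
).images[0]
edited.save("edited.png")
```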
MAGREF lets you create videos from multiple reference images while keeping each person or object looking consistent throughout the video. This guide shows you how to set up and use MAGREF in ComfyUI to create videos with multiple subjects that maintain their original appearance.
This guide covers ATI (Any Trajectory Instruction) - ByteDance's tool for controlling motion in AI-generated videos. You'll learn what it does, how to set it up in ComfyUI, and how to use it to create videos with precise movement control.
Have you ever looked at a photo and imagined it moving—maybe even starring in its own short film? Now you can turn that daydream into reality, no animation degree required! Welcome to the world of Wan2.1 VACE, where the magic of…
Ever wished you could magically expand your videos to reveal what’s just out of frame - like adding more scenery, characters, or even special effects? This cutting-edge AI model lets you effortlessly extend the edges of your videos, filling in new, seamless content that…
Sometimes a single image can say more than a thousand words, but what if it could actually tell a story, move, and even express emotion? In a world where digital content is everywhere, the idea of breathing life into a still photo feels like something out…
Ever wish you could step behind the camera and change the angle of a scene—after you’ve already shot the video? That’s exactly the kind of movie magic ReCamMaster brings to the table.
What is ReCamMaster AI?
ReCamMaster is a cutting-edge AI framework…
The recently released Wan 2.1 is a groundbreaking open-source AI video model. Renowned for its ability to exceed the performance of other open-source models like Hunyuan and LTX, as well as numerous commercial alternatives, Wan 2.1 delivers truly incredible text2video and image2video generations…
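The guide runs Wan 2.1 through ComfyUI; for a rough sense of what a text2video call looks like in code, here is a minimal sketch assuming the diffusers WanPipeline and the 1.3B T2V checkpoint (names as documented in diffusers at the time of writing; verify before use):

```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"
# The Wan VAE is loaded in float32 for stability; the rest runs in bfloat16.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16).to("cuda")

frames = pipe(
    prompt="a cat walking through a neon-lit alley at night",
    height=480,
    width=832,
    num_frames=81,  # about five seconds at 16 fps
    guidance_scale=5.0,
).frames[0]
export_to_video(frames, "wan_t2v.mp4", fps=16)
```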
💡5/7/2025 Changelog - Updated the workflow and tutorial procedures. It now uses the latest model, LatentSync 1.5.
Lip-synced video from just an audio clip and a base video? We got you! LatentSync is an advanced lip-sync framework that creates natural-looking speech by analyzing audio and…
We can now use LoRAs together with the AI video model Hunyuan. Why? To keep character or object consistency in a video.
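The tutorial itself uses ComfyUI nodes, but the idea translates directly to code; a minimal sketch, assuming the diffusers HunyuanVideoPipeline and its LoRA loader (the checkpoint name is the community port; the LoRA path is a hypothetical placeholder):

```python
import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

model_id = "hunyuanvideo-community/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.float16
).to("cuda")

# Attach a character LoRA so the same subject persists across frames.
pipe.load_lora_weights("path/to/character_lora.safetensors")  # hypothetical path

frames = pipe(prompt="the character waves at the camera", num_frames=61).frames[0]
export_to_video(frames, "hunyuan_lora.mp4", fps=15)
```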
LTX speeds up the video-making process so you can focus on what really matters — telling your story and connecting with your audience. In this guide, we'll focus on Image2Video generation specifically, and we’ll explore all the features that make LTX a game-changer.
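If you prefer code to nodes, a minimal Image2Video sketch is below, assuming the diffusers LTXImageToVideoPipeline and the Lightricks/LTX-Video checkpoint (the input image and prompt are placeholders):

```python
import torch
from diffusers import LTXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = LTXImageToVideoPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")

# LTX animates a still image guided by a motion-describing prompt.
image = load_image("start_frame.png")  # hypothetical input file
frames = pipe(
    image=image,
    prompt="the camera slowly pushes in while leaves drift past",
    width=704,
    height=480,
    num_frames=121,
    num_inference_steps=50,
).frames[0]
export_to_video(frames, "ltx_i2v.mp4", fps=24)
```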
💡Update 03/14/2025: Uploaded a new version of the workflow
Hey there, video enthusiasts! It’s a thrill to see how quickly things are changing, especially in the way we create videos. Picture this: with just a few clicks, you can transform your existing clips…
AnimateDiff, a custom node for Stable Diffusion within ComfyUI, enables the creation of coherent animations from text or video inputs.
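The guide covers the ComfyUI node; the same motion-module idea also ships in diffusers, and a minimal text-to-animation sketch looks like this (checkpoint names follow the diffusers AnimateDiff docs; treat them as assumptions to verify):

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# The motion adapter adds temporal layers to an ordinary SD 1.5 checkpoint.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_pretrained(
    "emilianJR/epiCRealism",
    subfolder="scheduler",
    clip_sample=False,
    timestep_spacing="linspace",
    beta_schedule="linear",
    steps_offset=1,
)

output = pipe(
    prompt="a sunflower field swaying in the wind, golden hour",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
)
export_to_gif(output.frames[0], "animatediff.gif")
```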