Learn how to use Stable Diffusion's latest capabilities with Automatic1111, ComfyUI, Fooocus, and more.
Change backgrounds. Swap objects. Add stuff. Remove stuff. Adjust styles. All through simple text prompts instead of wrestling with complicated tools. Qwen is Alibaba's image editing model, built on their 20B-parameter foundation. It handles object manipulation, style transfers, and even text editing inside images. The results are surprisingly realistic.
Generating Qwen images with ControlNet unlocks a powerful way to guide your AI creations using visual structure, lines, and forms drawn or extracted from reference images. Want better control over your AI image generation? Here's how to use Qwen Image with InstantX Union ControlNet to guide your creations with poses.
💡Credits to the awesome Benji for this workflow. Original Link - https://www.youtube.com/watch?v=b69Qs0wvaFE&t=311s

Uni3C is a ComfyUI model by Alibaba that converts static images into dynamic videos by transferring camera movements from reference videos. This tutorial covers the complete setup.
What This Workflow Does

This ComfyUI workflow creates smooth animations by:

* Taking your starting image
* Generating an end frame with AI
* Creating seamless transitions between both frames
* Maintaining consistent subjects and backgrounds throughout

💡Credits to the awesome TheArtOfficial for this workflow. Original Link: https://www.
Transform static portraits into realistic talking videos with perfect lip-sync using MultiTalk AI. No coding required.

Difficulty: Beginner-friendly
Setup Time: 15 minutes

What You'll Create
Turn any portrait - artwork, photos, or digital characters - into speaking, expressive videos that sync perfectly with audio input.
The recently released Wan 2.1 is a groundbreaking open-source AI video model. Renowned for its ability to exceed the performance of other open-source models like Hunyuan and LTX, as well as numerous commercial alternatives, Wan 2.1 delivers truly incredible text2video and image2video generations.
We can now use LoRAs together with the AI video model Hunyuan. Why? To keep character or object consistency in a video.
LTX speeds up the video-making process so you can focus on what really matters: telling your story and connecting with your audience. In this guide, we'll focus on Image2Video generation specifically and explore all the features that make LTX a game-changer.
💡Update 03/14/2025: Uploaded a new version of the workflow.

Hey there, video enthusiasts! It's a thrill to see how quickly things are changing, especially in the way we create videos. Picture this: with just a few clicks, you can transform your existing clips.
AnimateDiff, a custom node for Stable Diffusion within ComfyUI, enables the creation of coherent animations from text or video inputs.