How to Use WAN 2.2 PainterI2V in ComfyUI to Enhance Video Motion

November 24, 2025
ComfyUI
Transform static images into high-motion videos in ComfyUI with WAN 2.2 PainterI2V using 4-step LightX2V LoRAs for smooth animation and minimal render time.

1. Introduction

With WAN 2.2 PainterI2V (Image2Video) in ComfyUI, you can transform a static image into a high-motion video. Using the 4-step LightX2V LoRAs, the workflow brings visuals to life with flowing movement and reduces rendering time. Importantly, the PainterI2V node is what enables this increased motion β€” without it, previous methods produced only minimal movement. By the end of this tutorial, you’ll be able to increase motion in your clips with minimal render times.

2. System Requirements for WAN 2.2 PainterI2V FP8 Workflow

Before starting, make sure your system meets the hardware and software requirements to run the WAN 2.2 PainterI2V (Image2Video) workflow in ComfyUI. This workflow benefits from strong GPU performance for faster processing, though it can also run on cloud GPUs like RunPod.

Requirement 1: ComfyUI Installed

To begin, you’ll need ComfyUI installed either locally or on a cloud service. If you're installing locally on Windows, follow this guide:

πŸ‘‰ How to Install ComfyUI Locally on Windows

If you don’t have a powerful GPU available, consider running ComfyUI on RunPod using a persistent network volume. For a step-by-step walkthrough, you can check out the dedicated article and accompanying YouTube video linked below.

πŸ‘‰ How to Run ComfyUI on RunPod with Network Volume

Requirement 2: Download Models for WAN 2.2 PainterI2V

Download the necessary model files and place them in the correct directories within ComfyUI. The table below lists the required files, including the LightX2V LoRAs that enhance motion and reduce render times.

| File Name | Hugging Face Download Page | File Directory | Notes |
| --- | --- | --- | --- |
| wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors | πŸ€— Download | ..\ComfyUI\models\diffusion_models | Required |
| wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors | πŸ€— Download | ..\ComfyUI\models\diffusion_models | Required |
| Wan2.2_i2v_A14b_high_noise_lora_rank64_lightx2v_4step_1022.safetensors | πŸ€— Download | ..\ComfyUI\models\loras | Optional (enhances motion, reduces render time) |
| Wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_1022.safetensors | πŸ€— Download | ..\ComfyUI\models\loras | Optional (enhances motion, reduces render time) |
| Wan2_1_VAE_bf16.safetensors | πŸ€— Download | ..\ComfyUI\models\vae | Required |
| umt5_xxl_fp8_e4m3fn_scaled.safetensors | πŸ€— Download | ..\ComfyUI\models\text_encoders | Required |

Verify Folder Structure

Confirm that your folders and files look like this:

```
πŸ“ ComfyUI/
└── πŸ“ models/
   β”œβ”€β”€ πŸ“ diffusion_models/
   β”‚   β”œβ”€β”€ wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors
   β”‚   └── wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors
   β”œβ”€β”€ πŸ“ loras/
   β”‚   β”œβ”€β”€ Wan2.2_i2v_A14b_high_noise_lora_rank64_lightx2v_4step_1022.safetensors
   β”‚   └── Wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_1022.safetensors
   β”œβ”€β”€ πŸ“ vae/
   β”‚   └── Wan2_1_VAE_bf16.safetensors
   └── πŸ“ text_encoders/
       └── umt5_xxl_fp8_e4m3fn_scaled.safetensors
```
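If you'd rather not eyeball the tree, a quick sketch like the one below can confirm everything is in place. It simply mirrors the table above (the filenames and subfolders come straight from it); adjust `COMFYUI_ROOT` to match your install location.

```python
from pathlib import Path

# Adjust this to point at your ComfyUI install directory.
COMFYUI_ROOT = Path("ComfyUI")

# Expected files, mirroring the download table (the two LoRAs are optional).
EXPECTED = {
    "models/diffusion_models": [
        "wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors",
        "wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors",
    ],
    "models/loras": [
        "Wan2.2_i2v_A14b_high_noise_lora_rank64_lightx2v_4step_1022.safetensors",
        "Wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_1022.safetensors",
    ],
    "models/vae": ["Wan2_1_VAE_bf16.safetensors"],
    "models/text_encoders": ["umt5_xxl_fp8_e4m3fn_scaled.safetensors"],
}

def check_models(root: Path = COMFYUI_ROOT) -> list[str]:
    """Return the relative paths of any missing files (empty list = all present)."""
    missing = []
    for subdir, files in EXPECTED.items():
        for name in files:
            if not (root / subdir / name).is_file():
                missing.append(f"{subdir}/{name}")
    return missing
```

Run it from the directory that contains your `ComfyUI` folder; an empty result means every required (and optional) file is where the workflow expects it.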

Once all files are properly placed and organized, you’re ready to load the WAN 2.2 PainterI2V workflow in ComfyUI and begin generating high-motion videos. First, though, we need to download the workflow, so let’s move on to the next section!

3. Download & Load the WAN 2.2 PainterI2V Workflow

Now that your environment and model files are ready, it’s time to load and configure the WAN 2.2 PainterI2V FP8 + LightX2V workflow in ComfyUI. This setup ensures all components β€” diffusion models, LightX2V LoRAs, VAE, and text encoder β€” work together smoothly for faster, high-motion image-to-video generation. Once configured, you’ll be ready to start creating dynamic videos from your images.

Load the WAN 2.2 PainterI2V FP8 workflow JSON file:

πŸ‘‰ Download the WAN 2.2 PainterI2V FP8 workflow JSON file and drag it into your ComfyUI canvas.

This workflow includes all nodes and model references pre-arranged for smooth, motion-enhanced video generation. Next, we’ll dive into configuring the workflow settings for optimal results.

4. Running the PainterI2V Image2Video Workflow

Now that you have successfully loaded the WAN 2.2 PainterI2V workflow, it’s time to run the Image2Video generation process. This step involves uploading your reference image, setting your text prompt to guide motion, and configuring the PainterI2V node for optimal high-motion output.

Uploading Your Reference Image

Begin by loading your static reference image into the Load Image node. Choose a highly detailed image, as intricate features help the AI generate richer motion and maintain visual fidelity. Here, we start with a 16:9 aspect ratio.

Set Text Prompt

Next, configure a text prompt in the Text Encode node. Focus on describing the motion you want β€” including fast or dynamic camera movement, flowing objects, or environmental effects. A detailed prompt ensures the AI emphasizes motion in the generated video.

Example prompt:
"Chest-and-up, dark-skinned ebony elf woman with flowing hair and sensual curves, bouncing and swaying energetically. Shoulders, arms, and chest move rapidly, accentuating high-motion energy, hair whipping wildly. Camera rapidly alternates between her chest and face, capturing fast, fluid sequences of motion as she grabs and squeezes her chest. Sparkle dust and ambient effects swirl dynamically around her. Motion-emphasized, high-speed, fluid, and sensual, ending with her blowing a kiss directly to the camera."

Configure the PainterI2V Node

After your prompt is set, go to the PainterI2V node and adjust the following key parameters:

  • motion_amplitude: Start at 1.15 (range: 1–1.5). Higher values increase motion, but very high motion on longer clips can cause slight color shifts. The Color Match node helps reduce these shifts, though it may not fully correct them.

  • Width & Height: For a 16:9 image, set 832 Γ— 480.

  • num_frames: Set to 65 for a 4-second clip.

Verify Remaining Nodes

The rest of the workflow nodes can remain unchanged:

  • High & Low Noise KSamplers: Already set to 4 steps each (using the LightX2V LoRAs).

  • RIFE Node: Handles frame interpolation for smooth motion.

  • Color Match & FPS Node: Ensure consistent lighting and frame rate across all frames.

  • Video Output: Produces the final rendered clip.

Run the Generation

Once all nodes are configured, run the workflow. The WAN 2.2 PainterI2V node will generate a high-motion, visually smooth video. For the output below, we bumped the resolution to 720p (1280 Γ— 720).

This render took around 270 seconds on an RTX 4090 (24GB VRAM). For faster or larger renders, consider using a cloud GPU provider like RunPod.

⚑ Tip: The more you emphasize motion in your text prompt and the higher the motion_amplitude, the more dynamic your final video will appear. Start with moderate values (1.05–1.20) to avoid color shifts, then adjust as needed.

5. BONUS: Comparing With and Without the PainterI2V Node

Curious to see the exact difference PainterI2V makes in your motion output? This bonus workflow lets you quickly compare results side by side β€” the top branch uses the PainterI2V node for high-motion enhancement, while the bottom branch runs the standard WanVideo Encode for a baseline.

Load the Comparison Workflow

πŸ‘‰ Download the WAN 2.2 PainterI2V Comparison workflow JSON file and drag it into your ComfyUI canvas.

The workflow is pre-arranged with all necessary nodes and model references. The top branch processes your video through PainterI2V to emphasize dynamic motion, while the bottom branch bypasses it using the standard WanVideo Encode. Both branches use the same diffusion models, LightX2V LoRAs, VAE, text encoder, and seed, ensuring a fair comparison.

Upload your reference image and configure your text prompt as usual. Once run, you’ll have two videos for immediate comparison, making it easy to evaluate how much more dynamic your clips become with PainterI2V.

6. Conclusion

In this tutorial, you learned how to use WAN 2.2 PainterI2V in ComfyUI to transform static images into high-motion, dynamic videos. By leveraging the 4-step LightX2V LoRAs and adjusting the PainterI2V node’s motion_amplitude, you can create smooth, energetic animations that bring your visuals to life.

The workflow allows you to emphasize motion in hair, clothing, and body dynamics, producing visually engaging clips with minimal setup. With careful prompt crafting and parameter adjustments, you can generate cinematic-quality videos that are ideal for social media, creative projects, or motion testing.

Now that your workflow is configured and running, experiment with different motion settings, poses, and dynamic prompts to explore the full potential of PainterI2V and achieve the exact motion style you want.

By mastering these techniques, you can easily enhance motion in your clips while keeping rendering efficient and maintaining high visual fidelity.
