WAN 2.2 First-Last Frame Video Generation in ComfyUI

August 11, 2025
ComfyUI
Learn how to use WAN 2.2 First-Last Frame Video Generation in ComfyUI. This step-by-step guide shows FP8 and GGUF workflows for fast, high-quality results.

1. Introduction

If you’ve ever wanted to create an animation that smoothly transitions from a precise starting image to a specific ending image, WAN 2.2’s new First–Last Frame to Video (FLF2V) feature is the perfect solution. Rather than relying on the AI to guess the style or motion, you provide both the first and last frames, and the model generates seamless, realistic motion between them.

In this tutorial, we’ll walk you through the FP8 setup for blazing-fast performance — whether running locally or on a cloud GPU service like RunPod — and also introduce the GGUF workflow variant designed for lower-VRAM systems, making powerful video generation accessible to more users.

2. System Requirements for WAN 2.2 First-Last Frame FP8 Workflow

Before diving into video generation, ensure your system meets the hardware and software requirements to run the WAN 2.2 First-Last Frame FP8 workflow smoothly. This setup still needs a solid GPU — we recommend at least an RTX 4090 (24GB VRAM) or using a cloud GPU service like RunPod.

Requirement 1: ComfyUI Installed & Updated

To get started, you need ComfyUI installed locally or via cloud. For local Windows setup, follow this guide:

👉 How to Install ComfyUI Locally on Windows

Once installed, make sure to update ComfyUI to the latest version by opening the Manager tab in the interface and clicking Update ComfyUI. Keeping it up to date ensures compatibility with the latest workflows and features.

If you don’t have a high-end GPU locally, consider running ComfyUI on RunPod with a network volume for persistent storage:

👉 How to Run ComfyUI on RunPod with Network Volume

Requirement 2: Download Wan2.2 FP8 Model Files

Download the following models and place them in the correct ComfyUI folders:

| File Name | Hugging Face Download Page | File Directory |
| --- | --- | --- |
| wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors | 🤗 Download | ..\ComfyUI\models\diffusion_models |
| wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors | 🤗 Download | ..\ComfyUI\models\diffusion_models |
| umt5_xxl_fp8_e4m3fn_scaled.safetensors | 🤗 Download | ..\ComfyUI\models\clip |
| wan_2.1_vae.safetensors | 🤗 Download | ..\ComfyUI\models\vae |
| Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.safetensors | 🤗 Download | ..\ComfyUI\models\loras |
| Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors | 🤗 Download | ..\ComfyUI\models\loras |

Requirement 3: Verify Folder Structure

Confirm that your folders and files look like this:

📁 ComfyUI/
└── 📁 models/
    ├── 📁 clip/
    │   └── umt5_xxl_fp8_e4m3fn_scaled.safetensors
    ├── 📁 diffusion_models/
    │   ├── wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors
    │   └── wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors
    ├── 📁 vae/
    │   └── wan_2.1_vae.safetensors
    └── 📁 loras/
        ├── Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.safetensors
        └── Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors
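If you'd like to double-check the layout from the command line, a small Python script can do it. This is just a convenience sketch; the `BASE` path is an assumption, so adjust it to wherever your ComfyUI install lives.

```python
from pathlib import Path

# Adjust this to point at your ComfyUI install.
BASE = Path("ComfyUI/models")

REQUIRED = [
    "clip/umt5_xxl_fp8_e4m3fn_scaled.safetensors",
    "diffusion_models/wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors",
    "diffusion_models/wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors",
    "vae/wan_2.1_vae.safetensors",
    "loras/Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.safetensors",
    "loras/Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors",
]

def missing_files(base: Path, required: list[str]) -> list[str]:
    """Return the relative paths that are not present under base."""
    return [rel for rel in required if not (base / rel).is_file()]

if __name__ == "__main__":
    gone = missing_files(BASE, REQUIRED)
    if gone:
        print("Missing files:")
        for rel in gone:
            print("  -", rel)
    else:
        print("All model files are in place.")
```

Run it from the folder that contains `ComfyUI/` and it will list anything you still need to download.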

Once everything is installed and organized, you’re ready to download and load the Wan2.2 FLF2V FP8 workflow and start generating videos faster with the Wan2.2 Lightning LoRAs. Thanks to these Lightning LoRAs, the render completes in just 4 total steps, 2 for each KSampler, while still maintaining high visual quality.
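The 4-step split can be pictured as a simple schedule: the high-noise expert denoises the first half of the steps and the low-noise expert the second half. This is a conceptual sketch only; in the workflow itself the handoff is wired through the start/end step settings of the two KSampler nodes.

```python
TOTAL_STEPS = 4
SWITCH_AT = 2  # first KSampler: steps 0-1 (high noise); second KSampler: steps 2-3 (low noise)

def expert_schedule(total_steps: int, switch_at: int) -> list[str]:
    """Which diffusion model handles each denoising step."""
    return ["high_noise" if s < switch_at else "low_noise" for s in range(total_steps)]

print(expert_schedule(TOTAL_STEPS, SWITCH_AT))
# -> ['high_noise', 'high_noise', 'low_noise', 'low_noise']
```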

3. Download & Load the WAN 2.2 FLF2V Workflow

Now that your environment and model files are set, it’s time to load and configure the Wan2.2 First–Last Frame FP8 FLF2V workflow in ComfyUI. This setup ensures all components work together seamlessly for faster, high-quality video generation from your images. Once configured, you’ll be ready to create impressive animations with precise frame control.

Load the Wan2.2 First–Last Frame FP8 FLF2V workflow JSON file:


👉 Download the Wan2.2 FLF2V FP8 workflow JSON file and drag it into your ComfyUI canvas.

This workflow comes fully pre-arranged with all nodes and model references needed for smooth video generation.

Install Missing Nodes

If you see any red nodes in the workflow, it means some nodes are missing. To fix this, open the Manager tab in ComfyUI, then click Install Missing Custom Nodes, then restart ComfyUI. This should resolve the issues.

Once all nodes are installed and the workflow loads without errors, you’re ready for the fun part — uploading your first frame and last frame, then letting WAN 2.2 bring them to life with your very first First–Last Frame transition animation.

4. Running First–Last Frame Video Generation Workflow

With the workflow loaded and all components in place, it’s time to run the FP8 First–Last Frame generation.

Uploading Your First & Last Frames

Start by loading your images into the respective image loaders. The First Frame should be a highly detailed image that sets the style for your animation, while the Last Frame should represent the final scene, pose, or color tone you wish to achieve. In our case, we’re focusing on a transition from a mystery brand box to the actual branded item—perfect for eye-catching ad campaigns. Below are the First and Last Frames we’ve created using Flux Dev, setting the tone for a smooth, high-impact transformation.

Set Animation Prompt

In this step, we need to clearly describe the visual transition from the first frame to the last. The more vivid and specific your prompt, the better the AI will understand the motion, mood, and style you want to achieve. Think of it like directing a cinematic shot—mention the subject, movement, lighting, atmosphere, and any dramatic reveals.

For example, here’s the prompt we’re using for our mystery box–to–brand reveal:

The matte-black Nike cube begins to slowly rotate midair as the thin beams of white light intensify, slicing through the swirling dust with sharper clarity. The dust particles condense into shimmering motes that gather around the cube’s edges, creating an ethereal glowing outline. Suddenly, the cube pulses softly and begins to gently unfold—its sleek matte surfaces sliding apart like a high-tech puzzle, releasing a cascade of soft white fog that curls and drifts outward in slow motion. From within the mist, the matte-black Nike training shoe emerges, levitating gracefully as the fog forms a thin halo around it. Backlighting brightens, casting a glowing aura that highlights the shoe’s contours and the glowing swoosh. The chamber’s darkness deepens, amplifying the sacred, almost otherworldly energy as the shoe floats centered and still, ready to be admired.

When writing your own, break it into short, sequential actions that describe the transformation step-by-step. This gives the model a clear visual roadmap, resulting in smoother and more cohesive animations.
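One way to keep yourself honest about "short, sequential actions" is to draft the prompt as a list of beats and then join them into a single string. A trivial sketch (the beats below are a condensed version of the example prompt, not the exact text we used):

```python
# Draft the transition as discrete beats, then join them into one prompt.
beats = [
    "The matte-black cube begins to slowly rotate midair.",
    "Thin beams of white light intensify, slicing through the swirling dust.",
    "The cube gently unfolds, releasing a cascade of soft white fog.",
    "From within the mist, the training shoe emerges, levitating gracefully.",
    "Backlighting brightens, casting a glowing aura on the shoe's contours.",
]
prompt = " ".join(beats)
print(prompt)
```

Editing the list forces each action to stay short and sequential, which is exactly the structure the model responds to.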

For this example, we’ll keep our default settings and run the generation right away. Here’s the result of our mystery box–to–brand reveal:

Look how awesome ☝️

Now, if you don’t have a powerful GPU and prefer not to use RunPod, don’t worry—we’ve got you covered. In the next section, we’ll walk through our First–Last Frame GGUF workflow, designed to run locally so you can still create smooth, cinematic transitions right on your own machine.

5. Bonus: GGUF First-Last Frame Video Generation Workflow

For users operating in low-VRAM environments, the GGUF variant of the WAN 2.2 model offers a viable alternative.

First–Last Frame GGUF Workflow

To run locally without a high-end GPU, the main difference is that instead of the FP8 high- and low-noise diffusion models, you’ll need High Noise and Low Noise GGUF models small enough to fit your VRAM.

You can download them here:
Wan2.2 High & Low Noise GGUF Models on Hugging Face

That’s the only extra requirement for this workflow; everything else stays the same. You’ll still use:

  • Wan2.2 Lightning LoRAs for fast generation

  • VAE for decoding

  • CLIP/text encoder for prompt processing

Verify Folder Structure

📁 ComfyUI/
└── 📁 models/
    ├── 📁 clip/
    │   └── umt5_xxl_fp8_e4m3fn_scaled.safetensors
    ├── 📁 diffusion_models/
    │   ├── Wan2.2-I2V-A14B-HighNoise-Q3_K_S.gguf  # or any other GGUF version
    │   └── Wan2.2-I2V-A14B-LowNoise-Q3_K_S.gguf   # or any other GGUF version
    ├── 📁 vae/
    │   └── wan_2.1_vae.safetensors
    └── 📁 loras/
        ├── Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.safetensors
        └── Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors

Once these GGUF models are in place, you’re ready to run the First–Last Frame workflow entirely on your local machine, no cloud service needed.

Load the Wan2.2 First-Last Frame GGUF Workflow

Start by loading the Wan2.2 First-Last Frame GGUF Workflow:

👉 Download the provided Wan2.2 FLF2V GGUF workflow JSON file and drag it into your ComfyUI canvas.

With the GGUF workflow set up, you’re all ready to start generating your own animations locally!

Next, we’ll show some more amazing examples to inspire your creativity and give you ideas for your own projects—so grab your prompts and let’s bring your transitions to life!

6. More Wan2.2 First-Last Frame Examples

Below, you can see some more stunning examples of what’s possible with the First–Last Frame workflow.

7. Conclusion

Congratulations! You’ve now explored the full capabilities of WAN 2.2’s First–Last Frame Video Generation in ComfyUI—from the blazing-fast FP8 workflow to the VRAM-friendly GGUF variant. You’ve seen how to set up your system, load the workflow, craft detailed animation prompts, and generate smooth, cinematic transitions between your first and last frames.

With WAN 2.2, creating high-quality, eye-catching animations is no longer limited to high-end GPUs or complex software. Whether you’re producing ad campaigns, creative visuals, or experimental animations, the FP8 and GGUF workflows give you the flexibility to generate stunning videos efficiently and with precise control.

Now it’s your turn: experiment with your own frames, refine your prompts, and explore the creative possibilities. With the tools and workflows covered here, you’re fully equipped to bring your visions to life—frame by frame.
